RateIt: A unified review platform for the web with a tangible interface

Team members: Debargha Dey, Eva Palaiologk, Lindsey van der Lans, Tudor Văcărețu
Advisor: Suleman Shahid, Tilburg University
Role: User researcher, Prototype developer

 

ABSTRACT

Making a clear and well-informed decision about a potential purchase is an inconvenient necessity. People often use reviews to help them come to a decision. This article is a study that describes the research and design process of “RateIt”, a review platform that unifies various social network accounts for a better product and service review experience. We developed and tested two stage prototypes. Our contribution to the field was redefining the user’s experience of reading and writing reviews with a “friend” based review portal connecting all the user’s social and professional networking accounts. An extension of this solution is to go beyond the traditional input device and have a method where users can leave feedback by physically interacting with the system.

 

INTRODUCTION



As consumers, people often read reviews before they buy something. However, the sheer volume of information available on the Internet can cause cognitive overload, making the process time-consuming, inefficient, and frustrating. Most people tend to write reviews only when they are either very happy or particularly frustrated with their experience; reviews are therefore categorically biased. Against this background, people usually value the opinions of people they know more than those of complete strangers. In general, far more people read reviews than write them: when a person is only moderately satisfied with an experience, they are usually not motivated to go back and leave feedback. Common motivations for writing reviews are venting negative feelings, helping the company, or helping and warning other consumers. Gamification is also a potential method for motivating people to do tasks they do not usually enjoy.

Results of brainstorming


While people read reviews to make decisions, an often more powerful way of deciding is to call or talk to a person who has experience with the product or service. In most cases, however, users do not know whether anyone in their close or extended network has this knowledge. No existing platform or system gives users this information.

The proposed solution was a web application environment called RateIt, and a physical extension called FaceIt. RateIt allows users to sign in with their various social network identities and uses the social network’s friend or connection information to establish a degree or level of connection. A friend of a user is a 1st-degree contact. The friend of a friend is a 2nd-degree contact, and so forth. Using this information, RateIt will identify and highlight all reviews of any product or service based on “how close” the reviewer is to the user.
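The degree-of-connection lookup described above can be sketched as a breadth-first search over a friend graph merged from the user's linked accounts. The following Python is purely illustrative: the function name, graph representation, and data are assumptions for exposition, not part of the actual implementation.

```python
from collections import deque

def connection_degree(graph, user, reviewer, max_degree=3):
    """Return the smallest number of friendship hops between `user`
    and `reviewer`, or None if they are farther apart than `max_degree`.

    `graph` maps each person to a list of their direct friends,
    merged from all linked social network accounts.
    """
    if user == reviewer:
        return 0
    seen = {user}
    frontier = deque([(user, 0)])
    while frontier:
        node, degree = frontier.popleft()
        if degree == max_degree:
            continue  # do not expand past the degree we care about
        for friend in graph.get(node, ()):
            if friend == reviewer:
                return degree + 1
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, degree + 1))
    return None

# Illustrative merged friend lists
graph = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice"],
    "dave": ["bob"],
}
print(connection_degree(graph, "alice", "bob"))   # 1 -> 1st-degree contact
print(connection_degree(graph, "alice", "dave"))  # 2 -> 2nd-degree contact
```

With a degree computed for each reviewer, reviews can then be sorted or highlighted by "how close" the reviewer is to the user.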

This works in both directions: knowing that their feedback directly affects friends and other people they personally know, users will be more motivated to leave a review. The app's extension, FaceIt, is a physical device that can be installed in stores providing any kind of service. It helps users assess services and products in a fun, efficient, and effective way.

 

COMPETITOR ANALYSIS

Performing competitor analysis


During the design process, a competitor analysis was conducted to find out what is already on the market and what this product could contribute. One of the first steps was to identify similar products, which resulted in the following list of services: Amazon, Yelp, Goodreads, YouTube, and IMDb. Analysis of these established platforms showed that none of them unifies the user's social platforms to surface relevant reviews. People tend to trust friends more than strangers, and expert reviews are trusted more than generic user reviews, yet none of the above-mentioned platforms incorporates friend reviews. All of them offer only on-screen interaction, which created an opportunity for haptic feedback and physical interaction with a device for providing new review content. These services cover ratings and reviews for either products or services, but none of them is oriented toward friends' reviews and ratings, and none provides alternative ways of rating.

Visualizing competitors and value propositions


 

  • The platforms are not unified
  • Only single-service oriented (service or product, but not both)
  • No reviews of friends

None of the above platforms addresses our business goals: a friend-based review system with an input alternative to smartphones and PCs for real-time service reviews. Our core objective was to deliver a prototype of a smartphone app with an intuitive interface.

 

USER RESEARCH

Online Survey
Based on our analysis of the Value Proposition Canvas, and to gain insight into the correctness of our assumptions, we created a 16-question online survey. It helped us gather more information about service and product reviewing habits and the kinds of social media platforms people use. The results also helped us better define our target user group. We initially wanted to focus on the largest age demographic of active social media users who rely heavily on online reviews for purchase decisions (namely 25 – 40). However, we decided to expand the range to ages 18 – 60 to include people who may be active social media users but less active shoppers, as well as people with buying power and interest but relatively lower computer expertise. One prerequisite was that all users own and use a smartphone and use reviews in some way to inform their purchase decisions.

Contextual task analysis

Conducting user interviews


Given the target user base, we conducted 4 interviews with different people and assigned them 4 different but related tasks: look for a car repair service in Eindhoven; buy a tractor; choose one of 3 restaurants to have dinner at; and choose one of 3 cars to buy. For the first two interviews, we asked our subjects to buy a product or look for a service without any guidance. We intentionally chose a product or service obscure or alien enough that the subjects would be forced to do some research (our hope being they would turn to reviews). We learned that, given the limited time and the obscure nature of the tasks, people became more concerned with accomplishing the actual task in time than with turning to reviews. We even received feedback that reviews would not be pertinent to emergency situations like a car repair, and that instead of reading reviews, people would rather call someone who knew about these products or services. We realized that people often turn to reviews only after they have found alternatives that satisfy their requirements, and use the reviews to make a final judgment.

 

The input from the survey and the contextual task analysis was then brought together by grouping and labeling the results. The emerging themes were compiled into the design requirements, summarized below:

  • There needs to be a personal touch.
  • There should be a motivation for writing: e.g. personal affection, need to prevent others from bad decisions, increase the quality of the product, etc.
  • It should be time-effective.
  • Expert reviews should not be ignored.
  • Information should be accessible and manageable.

 

CONCEPTUAL DESIGN

Performing Heuristic Analysis with the team


The conceptual design for the product started with a brainstorming session based on the main requirements from our users. Each team member came up with possible solutions that would address the user needs, out of which 3 design alternatives were chosen:

  • Google Glass / augmented reality device
  • Mobile app
  • FaceIt

Each of the proposed design alternatives was further discussed, and a hierarchical task analysis (HTA) was developed to identify the main features. The HTA contained a task-tree view of the features of the product to be developed. The first step in writing the HTA was to identify the product's main features and then build the smaller features that would address user requirements on top of them. The main tasks were reading and writing reviews. Google Glass was not selected for the final design due to technological constraints, practicality, expense, and information overload for the user.

Hierarchical Task Analysis of 3 different modes


Final design
For the final design we decided to develop a mobile app (RateIt) to address reading and writing reviews, which were identified as the main features, as well as a haptic device (FaceIt) for giving real-time reviews of services.

 

PROTOTYPING

After the conceptual design was finished, the main goal was to start prototyping for a better visualization of the final design concept. The first prototype was low-fidelity, built to better understand the workflow and business logic of the application; it was followed by a high-fidelity prototype that gives test users an experience much closer to the final design.


Low-fidelity prototyping

Storyboarding



Interaction with FaceIt

The low-fidelity prototype consisted of paper mock-ups simulating an iPhone 6. To test its usability, a task analysis experiment was set up in which several subjects were asked to interact with the prototype. Based on the task analysis, conclusions were drawn about what further improvements the design needed in order to give future end-users a better experience. One conclusion was that people did not find it easy to find a review of a particular product: they went to the latest reviews instead of using the search function. They also found it hard to distinguish first- and second-degree friends in the interface as originally designed.

A low-fidelity prototype was also designed for FaceIt, the tangible extension of RateIt. The idea is that the FaceIt device is placed in the area where the service is delivered and allows users to review and rate the service in an interactive way. The interaction with FaceIt is sent to the RateIt app, to which users log in via NFC or some other form of wireless communication. An extension of this idea is a camera that captures users' facial expressions as they interact with the device. In a small experiment, 7 people were asked to interact with a rolled air mattress to give feedback about the service of the university cafeteria. The results showed that if people "really liked" the service, they would hug the mattress; patting it was a common gesture for "ok" feedback, while hitting it expressed displeasure.
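As a rough illustration of how FaceIt readings could be mapped to the three gestures observed in the experiment, consider the toy heuristic below. The sensor model, thresholds, and function name are all assumptions made for the sketch; nothing like this was actually calibrated or built.

```python
def classify_gesture(peak_force, contact_duration):
    """Map hypothetical FaceIt sensor readings to a feedback gesture.

    `peak_force` is a normalized 0-1 pressure reading and
    `contact_duration` is in seconds. The thresholds are
    illustrative guesses, not values measured in the study.
    """
    if contact_duration > 2.0 and peak_force < 0.5:
        return "hug"  # sustained, gentle contact -> "really liked it"
    if peak_force >= 0.8 and contact_duration < 0.3:
        return "hit"  # sharp, brief impact -> displeasure
    return "pat"      # everything else -> "it was ok"

print(classify_gesture(0.3, 3.0))  # hug
print(classify_gesture(0.9, 0.1))  # hit
print(classify_gesture(0.6, 0.4))  # pat
```

A production version would need real sensor data and likely a learned classifier rather than fixed thresholds, but the sketch shows how a physical interaction could be reduced to a discrete rating sent to the RateIt app.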

 

 

Low fidelity versus high fidelity prototypes of the same screen


Hi-fidelity prototyping
A high-fidelity prototype with very limited functionality was developed in Axure to judge the usability of the interface. The team designer created the screens in Adobe Illustrator, based on the feedback we had received on the paper prototype's workflow and layout. We used this prototype for the final usability evaluation of the app.

 

A sample of a few screens from the high fidelity prototype


EVALUATION

Usability test setup

User evaluations with high fidelity prototype


The participants had to complete five tasks, the main ones being writing a review and reading a review. Participants entered the room and took a seat; their face (webcam) and their actions (camera) were recorded. They received a booklet with the consent form, instructions, tasks, and questionnaires. One of the questionnaires was the System Usability Scale, which works well for small sample sizes. The test concluded with an interview, and refreshments were offered. The complete test took 15 minutes.

Results
Four people completed several tasks in the system; afterwards, they filled in the System Usability Scale (SUS) and a short questionnaire.
The SUS gave an overall score of 81.25, which rates our system as "excellent". Moreover, every user responded that they would like to use the system if it were functional, rating it 8/10 on average.
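For reference, a SUS score is computed per participant from the 10 Likert responses and scaled to a 0 – 100 range; the overall score is the average across participants. The sketch below shows the standard scoring rule; the example responses are illustrative, not our actual study data.

```python
def sus_score(responses):
    """Compute the System Usability Scale score for one participant.

    `responses` is a list of 10 ratings on a 1-5 Likert scale, in
    questionnaire order. Odd-numbered items are positively worded
    (each contributes rating - 1); even-numbered items are negatively
    worded (each contributes 5 - rating). The summed contributions
    (0-40) are multiplied by 2.5 to give a 0-100 score.
    """
    assert len(responses) == 10, "SUS has exactly 10 items"
    total = 0
    for item, rating in enumerate(responses, start=1):
        total += (rating - 1) if item % 2 == 1 else (5 - rating)
    return total * 2.5

# Illustrative responses for a single participant
print(sus_score([5, 1, 4, 2, 5, 1, 4, 2, 5, 1]))  # 90.0
```

Per-participant scores are multiples of 2.5, so an average like 81.25 arises naturally from four participants.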

The User-Centered Design process helped us design, iterate on, and evaluate a review platform that identifies and highlights reviews written by the user's friends and other people in their extended network, giving users quick decision-making opportunities.

 

Disclaimer: Full article available upon request
