Bringing reviews to iOS

Here at OpenTable, we rely on diners to share their dining experiences. Our restaurants count on diner feedback to inform their business operations, and our diners use community reviews to make their dining decisions. But as our users trend more towards mobile, we wanted to give them a way to provide feedback from within our apps. We also saw this as an opportunity to rethink the reviewing experience and tailor it for a more on-the-go audience.

Lean approach

We decided to use a lean design and research approach for this project. We began with a couple of basic assumptions, sought to validate or disprove them with low-tech prototypes and testing, and ultimately, released a feature that we plan to continue to iterate on.

We began with two assumptions:

  1. Interrupting users in OpenTable’s app to prompt for a review is irritating. We focused on creating logical entry points instead of forcing diners to interact with a review form.
  2. Simplifying the review process is essential. In our desktop form, we ask diners to provide five ratings: Overall, Food, Service, Ambiance, and Noise Level. We thought this would be too much to ask of a mobile user, so we narrowed it down to Overall.

We took these two assumptions and iterated on several designs. Then we used InVision to stitch our mocks together. The result was a lightweight prototype that let testers navigate the mobile reviews flow on an iPhone. This approach worked really well because it required no engineering time and the prototype looked and felt real.

Testing and findings

Now that we had a prototype, we needed to test it. Since this was an existing feature on desktop/web and the test was mainly focused on usability, we decided to test internally with our colleagues and didn’t worry about potential biases. (If we were testing a brand-new concept, we would have brought in outside participants.) We looked for colleagues who were iPhone users and didn’t work on OpenTable’s product, and found six people who ranged from frequent to sporadic product reviewers. We asked open-ended questions and had them use our prototype to open the app and submit a review. In general, we focused on three areas:

  1. Context: When and where do people review? Where do users expect to find reviews?
  2. Usability: Does the flow make sense? What did users expect to see and when?
  3. Categories: How do users feel about multiple ratings? What categories are important?

Much to our surprise, we found that testers expected a full-page takeover (almost everybody mentioned the Uber review experience as their mental model for reviewing on a phone). Several testers also expressed a desire to receive notifications or email reminders. Because dining occasions often happen on weekends and evenings before other activities, our testers wanted to receive these notifications within 24 hours of the meal but not necessarily right after it (and after a week, the meal becomes too difficult to remember). A few testers also mentioned that they prefer to write reviews during idle time, such as their commute to work or a midday break.
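As a rough illustration of that timing window only (this is a hypothetical sketch, not our implementation, and the helper name is made up), a local reminder on iOS could be scheduled somewhere inside those 24 hours:

```swift
import UserNotifications

/// Hypothetical sketch: schedule a review reminder several hours after a meal,
/// inside the 24-hour window our testers described but not immediately after dining.
func scheduleReviewReminder(restaurantName: String, hoursAfterMeal: Double = 18) {
    let center = UNUserNotificationCenter.current()
    center.requestAuthorization(options: [.alert, .sound]) { granted, _ in
        guard granted else { return }

        let content = UNMutableNotificationContent()
        content.title = "How was \(restaurantName)?"
        content.body = "Rate your meal while it's still fresh in your mind."

        // Fire well within 24 hours of the meal, but not right after it.
        let trigger = UNTimeIntervalNotificationTrigger(
            timeInterval: hoursAfterMeal * 3600,
            repeats: false)

        let request = UNNotificationRequest(
            identifier: "review-reminder-\(restaurantName)",
            content: content,
            trigger: trigger)
        center.add(request)
    }
}
```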

Our second assumption was also challenged. Our testers submitted all five ratings quickly and easily, and they considered every one of them critical to the review. In particular, several testers called out Noise Level as a useful but often overlooked rating.

From this small sample, we also noticed two types of reviewers emerge. Some of our testers struck us as power reviewers who would likely submit a review regardless of their dining experience. Whether these people were motivated by duty or ego, they were clearly compelled to give feedback. Meanwhile, the second group only provided feedback for noteworthy experiences (either exceptionally good or exceptionally poor meals).

What did we do?

Even after testing disproved our two assumptions, we were still hesitant to employ a full-screen takeover that included all five rating categories. Although users seemed to be accustomed to this approach, it felt risky to launch a new feature in such an aggressive way.

Ultimately, we chose to balance the invasiveness of a takeover screen with a softer request: we launched with a modal takeover that users could easily exit. On this takeover, we asked diners for an Overall rating and gave them the option to provide more feedback.

(final design by Yige Wang)
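For readers curious what that shape looks like in code, here is a minimal sketch of a dismissible prompt that collects only the Overall rating up front. The types, callbacks, and layout are illustrative assumptions, not our production implementation.

```swift
import UIKit

/// Hypothetical sketch of the prompt described above: a modal the diner can
/// dismiss at any time that asks only for an Overall rating, with an optional
/// path into the full five-category form.
final class ReviewPromptViewController: UIViewController {

    /// Called with the 1–5 star value the diner taps.
    var onOverallRating: ((Int) -> Void)?
    /// Called if the diner opts into the longer review form.
    var onAddMoreFeedback: (() -> Void)?

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white

        // An always-visible close button keeps the takeover easy to exit.
        navigationItem.leftBarButtonItem = UIBarButtonItem(
            barButtonSystemItem: .stop,
            target: self,
            action: #selector(dismissPrompt))

        // A row of five star buttons for the Overall rating.
        let stars = (1...5).map { value -> UIButton in
            let button = UIButton(type: .system)
            button.setTitle("★", for: .normal)
            button.tag = value
            button.addTarget(self, action: #selector(didTapStar(_:)), for: .touchUpInside)
            return button
        }
        let starRow = UIStackView(arrangedSubviews: stars)
        starRow.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(starRow)
        NSLayoutConstraint.activate([
            starRow.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            starRow.centerYAnchor.constraint(equalTo: view.centerYAnchor)
        ])
    }

    @objc private func didTapStar(_ sender: UIButton) {
        onOverallRating?(sender.tag)
        // After the Overall rating, an "Add more feedback" button could invoke
        // onAddMoreFeedback to reveal the remaining four categories.
    }

    @objc private func dismissPrompt() {
        dismiss(animated: true)
    }
}
```

Presenting this controller modally (rather than pushing it as a required step in the flow) is what keeps the request soft: the diner can always close it and return to what they were doing.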

Early results

We launched mobile reviews in our iPhone app to a subset of diners. We were very encouraged by how many users interacted with the modal takeover and submitted a review. However, because we led with such a soft request, most users chose to submit only the Overall rating. Although we succeeded in expanding our pool of reviewers, these rating-only reviews could have negative implications for the quality of our content.

As we look to iterate on this feature, how might we encourage diners to provide more feedback such as additional ratings and in-depth reviews?

The team

Sasha Nelson co-authored this post and is the product manager for this feature.

Design: Alexa Andrzejewski, Stephanie Hon, Yige Wang
Engineering: Olivier Larivain, Ari Braginski, Seif Attar, Richard Hopton, Stanley Ho, Orlando Perri
