Expectation Ratings (ER)

The Expectation Ratings (ER) survey is an attitudinal research method that captures the relationship between expected and actual usability. It is particularly useful for prioritising user experience improvements and redesign ideas.

William Albert and Eleri Dixon proposed the Expectation Ratings method at the Usability Professionals’ Association 12th Annual Conference in 2003, as a way to apply the expectancy disconfirmation theory to usability research and improve the popular Single Ease Question (SEQ) method.

Arguing that customer expectations are a primary driver of customer satisfaction, Albert and Dixon note that it is critical to capture the difference between expectations (perceived task difficulty) and experiences (actual task difficulty). For example, if a user expected a task to be very difficult, completed it successfully, and rated it 1 (very difficult) on the SEQ scale, this does not necessarily hurt satisfaction. On the other hand, if a user expected a task to be easy but rated it 2 or 3 (difficult), satisfaction will suffer. Using the SEQ method alone, a researcher might incorrectly conclude that both tasks are equally important to improve, or even that fixing the first task is a higher priority because the user rated it as more difficult. In reality, the second task is more important to fix.
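The gap between expectation and experience can be expressed as a simple disconfirmation score. The sketch below is a minimal illustration, not part of Albert and Dixon's published materials; the function name and scale orientation (1 = very difficult, 5 = very easy) are assumptions:

```python
def disconfirmation(expected: int, experienced: int) -> int:
    """Expectation-disconfirmation score on a 5-point ease scale
    (1 = very difficult, 5 = very easy). Negative values mean the
    task turned out harder than expected."""
    return experienced - expected

# The two hypothetical tasks from the text:
# Task A: expected very difficult (1), experienced very difficult (1)
# Task B: expected very easy (5), experienced difficult (2)
print(disconfirmation(1, 1))  # 0  -> matched expectations, little impact on satisfaction
print(disconfirmation(5, 2))  # -3 -> much harder than expected, hurts satisfaction
```

Under this scoring, Task B's large negative gap flags it as the more urgent fix, even though Task A received the lower raw SEQ rating.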

Donna Tedesco and Tom Tullis conducted a study comparing various post-task rating methods and found that, of all the methods tested, the Expectation Ratings survey had the strongest correlation (0.46) between actual task performance (completion time and success) and the user attitudes recorded after the task.

How to conduct an Expectation Ratings Survey

An Expectation Ratings survey has two parts: expectation rating and experience rating.

Before a usability testing session (before any of the tasks), the participants rate how difficult they expect various tasks to be. It is important to collect these expectations at the very start, so that work on earlier tasks does not influence the ratings of later ones. Albert and Dixon used a five-point scale from “very easy” to “very difficult” and collected the data for all tasks on a single form, such as the one below:

How easy or difficult do you expect it to be to complete the following tasks?

An example Expectation Rating sheet, to capture user expectations before the entire testing session.

After each task, the participants rate the experience using a variant of the SEQ survey.

How easy or difficult was it to share a diagram?
An example Experience Rating question, to capture the attitude towards actual task completion.

Tedesco and Tullis used a slightly modified version of the questions:

Using ER to prioritise improvement work

Albert and Dixon suggest a sample size of at least 12 participants for meaningful results. Plot the average expected and experienced scores for each task on a graph, then group the tasks into four categories:

Expectation Rating Quadrants
Expectation Ratings can help prioritise usability improvement work.

Learn more about the Expectation Ratings Survey

Related and complementary tools to the Expectation Ratings Survey