Single Ease Question (SEQ)

The Single Ease Question (SEQ), also known as the Task Ease Question, is a post-task attitudinal survey that asks how easy or difficult a task was to complete. It is particularly useful as a diagnostic measure, complementing larger post-scenario surveys. Although very simple, it has been shown to be a relevant and reliable measure.

A Single Ease Question (SEQ) with a 5-point scale, asking “Overall, this task was…”

The SEQ is usually presented as a statement or question about task difficulty, such as “Overall, this task was…”, “Overall, how difficult or easy was this task to complete?”, or simply “How easy or difficult was this task?”. Participants then select an answer on a 5-point or 7-point scale ranging from Very Difficult to Very Easy.
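
As a rough illustration of this setup, the sketch below shows how a single SEQ item might be represented in a survey tool's configuration. The structure and field names are hypothetical, not any particular product's API.

```python
# Hypothetical representation of a single SEQ item (7-point variant).
SEQ_ITEM = {
    "prompt": "Overall, this task was…",
    "scale": list(range(1, 8)),  # 1 to 7; a 5-point variant would use range(1, 6)
    "anchors": {1: "Very Difficult", 7: "Very Easy"},
}

def is_valid_response(rating: int) -> bool:
    """Accept only ratings that fall on the defined scale."""
    return rating in SEQ_ITEM["scale"]
```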

Tedesco and Tullis tested several variants of the SEQ, and in A Comparison of Methods for Eliciting Post-Task Subjective Ratings in Usability Testing they report that the most reliable variant is “Overall, this task was…” with a 5-point scale, even at small sample sizes.

Interpreting SEQ scores

Jeff Sauro has published benchmarks for SEQ responses, estimating that the average SEQ score on a 7-point scale is between 5.3 and 5.5. In the same research, he suggests that there is a strong correlation between a SEQ score and the likelihood that a user will successfully complete the task.

A raw SEQ score of 4.7 will correspond to a completion rate of 58% and task time of 2.8 minutes. A raw SEQ score of 5.9 will correspond to a completion rate of 86% and task time of about 2 minutes.

– Jeff Sauro, Using Task Ease (SEQ) to Predict Completion Rates and Times
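
As a minimal sketch of how this benchmark might be applied, the snippet below averages a set of made-up 7-point SEQ responses and compares the result against the 5.3–5.5 estimate mentioned above; the data and the simple threshold check are illustrative assumptions, not part of Sauro's method.

```python
from statistics import mean

# Hypothetical 7-point SEQ responses collected for one task.
responses = [6, 5, 7, 4, 6, 5, 6, 7, 5, 6, 4, 6]

score = mean(responses)

# Sauro's estimated average SEQ score on a 7-point scale (see above).
BENCHMARK_LOW, BENCHMARK_HIGH = 5.3, 5.5

if score < BENCHMARK_LOW:
    verdict = "below the estimated average; worth investigating"
elif score > BENCHMARK_HIGH:
    verdict = "above the estimated average"
else:
    verdict = "around the estimated average"

print(f"Mean SEQ score: {score:.2f} ({verdict}, n={len(responses)})")
```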

Benefits of using SEQ

In the paper Comparison of Three One-Question, Post-Task Usability Questionnaires, Sauro and Dumas report that the SEQ performed “about as well or better” than more complicated measures such as the SMEQ, and that a 7-point version of the SEQ is highly correlated with System Usability Scale responses. They conclude that with sample sizes above 10-12, the SEQ provides reliable results, but that below 10 participants the results are less reliable. Sauro and Dumas also point out that participants were able to use the SEQ with little or no explanation, and that it was easy to score.

The SEQ is an effective way of scoring and tracking task difficulty over time, which makes it a good way to measure the effect of design ideas and product improvements by comparing scores before and after a change.
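
As a rough sketch of such a before-and-after comparison, the snippet below contrasts mean SEQ scores from two rounds of testing; the data is made up, and the plain comparison of means is an illustrative assumption rather than a full analysis.

```python
from statistics import mean, stdev

# Hypothetical 7-point SEQ responses for the same task, collected before
# and after a design change (note the sample-size caveat above: 10-12+).
before = [4, 5, 3, 5, 4, 6, 4, 5, 5, 4, 3, 5]
after = [6, 5, 6, 7, 5, 6, 6, 5, 7, 6, 6, 5]

for label, scores in (("before", before), ("after", after)):
    print(f"{label:>6}: mean={mean(scores):.2f}  sd={stdev(scores):.2f}  n={len(scores)}")

print(f"change: {mean(after) - mean(before):+.2f} points on a 7-point scale")
```

In practice, you would want to pair a comparison like this with confidence intervals or a significance test before drawing conclusions from small samples.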

Downsides of using SEQ

The SEQ provides a quick diagnostic measure, but it does not allow participants to provide any context or explain their rating. When doing research in person, it is often worth following up the SEQ by asking participants to explain why they assigned a particular score.

The lack of context also makes the SEQ problematic for relative comparisons across different tasks, and when it is used in isolation to prioritise improvement work. The Expectation Ratings method is a potentially good extension that helps to compare SEQ scores across different tasks.

Learn more about the Single Ease Question Survey

Related and complementary tools to the Single Ease Question Survey