Abstract
Our learning objectives for the week are:
- Explain the purpose prototypes serve in the design process.
- Define usability and how it is measured, and explain the purpose of usability testing.
- Apply best practices to redesign an interface that supports a better user experience.
- Discuss industry design trends and identify when it makes sense to apply them.
- Describe interactions and how they influence the user experience.
Prototypes make ideas feel real without investing too much time, money, and resources into them. They are an essential part of the user testing process and enable us to generate data early enough to validate or disprove our ideas.
Gathering early feedback and removing guesswork about what to build is usually a good idea. Whether you're taking a risk on something new, or you're just not sure about something, this is where prototypes can help.
There are, however, limitations to prototypes, as I have found over the years. Many of them tend not to let you set conditional triggers, such as detecting whether a visitor is a 'returning' or 'new' customer. This may mean creating a bunch of prototypes for specific flows, whereas the same branching can be expressed in code with a couple of lines.
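To illustrate the point about conditional triggers, here is a minimal sketch of the returning-versus-new customer branch. It assumes a web backend where a visit is tracked with a cookie; the cookie name and flow names are hypothetical, not from any real product.

```python
# Hypothetical sketch: branching a flow on visitor state in code,
# something most prototyping tools can't express without duplicating
# the whole prototype. The cookie name and flow names are illustrative.

def choose_flow(cookies: dict) -> str:
    """Return which onboarding flow to show, based on a visit cookie."""
    if cookies.get("has_visited") == "true":
        return "returning-customer-flow"
    return "new-customer-flow"

# First visit: no cookie yet, so the new-customer flow is shown.
print(choose_flow({}))                       # new-customer-flow
# Later visit: the cookie is present, so the returning flow is shown.
print(choose_flow({"has_visited": "true"}))  # returning-customer-flow
```

In a clickable prototype, each branch would need its own artefact; in code, one condition covers both.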
Usability Testing
There are at least three testing phases:
- Internal testing: used for technical feasibility, peer sanity checks, idea generation and general usability problems—usually with subject matter experts.
- Stakeholder testing: used to ensure our product aligns with the project and company goals—usually with people of a non-technical nature.
- External testing: used to gain unbiased validation (or disproof) of our ideas—whether our product provides a good user experience, meaning it's usable, equitable, enjoyable, and useful—usually with potential users.
External testing is the one that holds the most weight in product or feature validation processes. However, this data is only valid if potential users are part of the study.
Potential users
There is a common misconception that running into the closest coffee shop counts as valid external user testing. This approach is called convenience sampling (sometimes availability sampling), and it is the sampling method I despise most. I warn UX practitioners about this type of sampling because it has many disadvantages. The selection process, for instance, is susceptible to bias and influences beyond the researcher's control. There is no way to identify inclusion criteria before selecting subjects, and it can lead to under-representation or over-representation of particular groups within the sample.
Nevertheless, convenience sampling may be the only option in certain situations—for example, when you're not sure who your target audience is and have no budget for Facebook ads or audience recruitment.
Sure, any time spent with users can prove invaluable, and an average person may be able to raise simple usability issues. But, we must formulate user testing plans that include our potential audience to make sure the data we gather is valid and actionable. Running into a coffee shop probably isn't going to get you the budget approval needed for long-term testing and platform optimisation.
The most common user testing plans contain what we are going to do, how we're going to do it, what metrics we'll capture, the number of participants we're testing, and the scenarios we'll use.
This testing plan should then be presented back to stakeholders, and once everyone has commented, we can put the final plan into action.
Elements of a User Testing Plan
Over the years, I've created many testing plans and formulated a similar structure comparable to the methodology published on Usability.gov (Assistant Secretary for Public Affairs, 2019).
The strawman of my testing plan is as follows:
- Scope: Indicate what you are testing, e.g. the navigation; a particular flow or a section of the site to assess conversion
- Purpose: Identify the concerns, questions, and goals for this test, e.g. 'Can users navigate to product pages from the check-out basket?'
- Schedule and Location: Indicate when and where you will do the test. Before each sprint? Remote testing or lab testing?
- Session description: Will they be assisted? Will the session be timed? If so, how long?
- Equipment: Indicate the type of equipment you will be using in the test; desktop, laptop, mobile/Smartphone, monitor size and resolution, operating system, browser etc. Indicate any accessibility tools also required, such as screen readers and audio descriptors.
- Participants: Indicate the number and demographic of participants to be tested.
- Scenarios: Indicate the number and types of tasks included in testing.
- Subjective metrics: Include the questions you are going to ask the participants before the sessions.
- Quantitative metrics: Indicate the quantitative data you will measure in your test (e.g., successful completion rates, error rates, time on task).
- Roles: Include a list of the staff who will participate in the usability testing and what role each will play.
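The strawman above can be captured as plain structured data so the plan can be versioned and shared with stakeholders before sign-off. This is an illustrative sketch only; the field names and example values are my own, not a standard schema.

```python
# Illustrative sketch: the testing-plan strawman as plain data so it can
# be versioned and circulated for comment. All values are examples.
usability_test_plan = {
    "scope": "Check-out basket navigation",
    "purpose": "Can users navigate to product pages from the check-out basket?",
    "schedule_and_location": {"when": "before each sprint", "where": "remote"},
    "session": {"assisted": False, "timed": True, "duration_minutes": 30},
    "equipment": ["laptop", "1920x1080 monitor", "Chrome", "screen reader"],
    "participants": {"count": 5, "demographic": "existing online shoppers"},
    "scenarios": ["Find and buy a Black 4WD Range Rover for under 50k"],
    "subjective_metrics": ["pre-session questions on shopping habits"],
    "quantitative_metrics": ["completion rate", "error rate", "time on task"],
    "roles": {"facilitator": 1, "note_taker": 1, "observer": 2},
}

# Every element of the strawman is present and reviewable at a glance.
print(sorted(usability_test_plan))
```

Keeping the plan as data also makes it easy to diff between rounds of testing.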
Structuring the plan this way makes my process a lot more scientific and professional. While there are different approaches to getting buy-in, I find it particularly effective if the discussion is centred around the scenarios that will be tested and the metrics measured.
Scenarios
The most effective way of understanding what works and what doesn't with a product is to track or watch how people use it (McCloskey, 2014).
These qualitative insights help us determine how to improve the design. However, to observe participants scientifically and ethically, we need to give them something to do, such as a user task.
Generally, in a user test, it is best to group a set of tasks into scenarios.
Setting scenarios provides insight into the hidden paths users will take to complete their tasks. Some of these scenarios need no context, while others might, such as: 'Find and buy a Black 4WD Range Rover for under 50k'.
This example exercises search and purchase—key features of the product. Scenarios should centre on such core features: registration and login, deleting your account, starting a conversation, or finding and purchasing an item.
Metrics
The basic metrics for any usability test should be:
- Completion rate: were they able to do the thing they wanted?
- Time on task: how long did it take them to complete their task?
- Sentiment: how did they feel about the process, and what would they recommend to improve it?
The reason we should choose these metrics is that they all give you actionable insights. You'll learn something from each one individually, and combined, the data will give you enough ammunition to inform and support your actions.
For example, in the scenario 'Find and buy a Black 4WD Range Rover for under 50k':
- Completion rate: the user was able to find and buy a Black 4WD Range Rover for under 50k.
- Time on task: however, they found it difficult to find the search button and filter the product list, meaning it took them X minutes to complete the task.
- Sentiment: the user felt frustrated by the process and would recommend we change the bottom menu.
Individually, these results tell us the UI of the CTAs needs to be improved, and so does their labelling.
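The two quantitative metrics above are simple to derive from session records. Here is a minimal sketch, assuming each session is logged with a completion flag and a duration; the records shown are made-up examples, not real study data.

```python
# Hypothetical sketch: deriving the basic usability metrics from session
# records. Each record notes whether the task was completed, how long it
# took, and a coded sentiment. The data below is invented for illustration.
sessions = [
    {"completed": True,  "seconds": 310, "sentiment": "frustrated"},
    {"completed": True,  "seconds": 95,  "sentiment": "satisfied"},
    {"completed": False, "seconds": 420, "sentiment": "confused"},
]

# Completion rate: share of sessions where the task was finished.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Time on task: mean duration across all sessions.
avg_time_on_task = sum(s["seconds"] for s in sessions) / len(sessions)

print(f"Completion rate: {completion_rate:.0%}")     # Completion rate: 67%
print(f"Avg time on task: {avg_time_on_task:.0f}s")  # Avg time on task: 275s
```

Sentiment stays qualitative, but even these two numbers are enough to show a stakeholder that a 'completed' task can still be a slow, frustrating one.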
- Assistant Secretary for Public Affairs (2019). Planning a Usability Test. [online] Usability.gov. Available at: https://www.usability.gov/how-to-and-tools/methods/planning-usability-testing.html.
- Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., Grenning, J., Highsmith, J., Hunt, A., Jeffries, R., Kern, J., Marick, B., Martin, R.C., Mellor, S., Schwaber, K., Sutherland, J. and Thomas, D. (2011). Manifesto for Agile Software Development. [online] Agilemanifesto.org. Available at: https://agilemanifesto.org/ [Accessed 27 Mar. 2021].
- Kim, J. and Wang, L. (2011). Ward Explains Debt Metaphor. [online] C2.com. Available at: http://wiki.c2.com/?WardExplainsDebtMetaphor [Accessed 27 Mar. 2021].
I was meant to prototype just my onboarding flow, but things didn't make sense without doing the main sections too, and I also found that I needed to increase my bottom tab bar to five items instead of the initial four, as described in my Week 5 post on navigation.
The main reason for this was the Home section lacked a clear, frictionless flow and had too much information on it, which didn't seem scannable.