The Complete Guide to FREE Unmoderated Usability Testing

You want to test your product’s usability, but you don’t want to (or can’t) shell out the money for specialized usability testing software? Usability testing is an essential step to building great product experiences but often gets tied up with overly complicated planning and high-ticket software subscriptions. This guide will show you that with a little creativity and hustle, you can perform high-quality unmoderated usability testing with software and resources that are either free or already available to you.

By the end of this guide, you should know: 1) the steps to designing a usability test your project stakeholders will love, 2) what remote unmoderated usability testing is and when to use it, and 3) the techniques and tools you need to perform this testing without specialized usability software.

Although this guide is meant for unmoderated testing (in which users participate without a researcher present), the same basic principles can be applied to moderated testing as well (in which a researcher meets with each participant to conduct the session). Also important to note: depending on your product and user types, you may need to pay or otherwise incentivize your usability participants – but you will not need to pay for any of the specific methods described in this tutorial.

First, let’s take a look at how to plan out the questions and activities for your usability test, then we will explore the basics of unmoderated testing.

Summary

  • Collaborate with key project stakeholders to plan your usability test so that it addresses the most important questions you and your team have about your product. Test methods should be selected based on your research goals.

  • Remote unmoderated usability testing is performed by users without a researcher there to guide them through the test instructions and ask questions. Sessions are typically recorded (screen and audio) so that usability metrics can be captured.

  • Use unmoderated testing if you just want usability feedback, do not need to ask probing and follow-up questions of your users, or do not have time to moderate each test session.

  • Do not use unmoderated testing (use moderated instead) if you are very early in the development cycle and would benefit most from deep, qualitative insights, if the system you are testing is very complex, or if your test will take more than 30 minutes to complete.

  • Use Google Forms (or another survey tool) to present task instructions to participants and capture their subjective task ratings.

  • Use Zoom (or another recording software) to record the screen, audio, and optionally, webcam during the session.

Creating a Usability Test Plan

Before you get started, you will need a solid usability testing plan that outlines what you are testing, who you are testing with, and what activities and questions will be included in the session. All good quality user research benefits from a thoughtfully crafted plan, but unmoderated usability testing requires even more structure than other methods.

This is because when you facilitate a moderated research session, you can go in with a few broad discussion and activity prompts, observe what users say and do with your product, and ask probing questions to guide the session along. In unmoderated testing, you are not with participants during the session, so you need to give them a set of specific activities that address your team’s learning goals.

This guide will not go into great detail on creating a research planning document, but below are the key sections of a usability test plan. Remember when creating your plan to always collaborate with key stakeholders, like product managers, designers, and engineers, on the learning goals and key outcomes of the research. The more you collaborate with your teams, the more likely they are to actually use the findings from your research, and the less likely you are to encounter resistance around the methods and results of your study.

It is never a great feeling to present what you think are solid and valuable insights, only to be met with, “But did you ask this question, or have them do this task, or meet with this type of user?” And research is fun and all, but let’s face it – at the end of the day, we are not just doing it for fun.

Sections of a Usability Test Plan

Background: Why you are doing this project and if there is any existing research data to consider.

Research Goals: What are the key learning outcomes you need? Are there any specific decisions being made from these data?

Testing Method: What type of usability testing will you use? Moderated, unmoderated, formative, summative, etc.

Tools & Software: How will you administer the test? Email, usability software, Google Forms, etc.? (The rest of this guide will give you some great tips on how to select your tools!)

Tasks: What tasks will users complete with your product during the session? Select these based on your research goals.

Sample: What type (personas, etc.) and how many users are you testing with?

Recruitment plan: How will you recruit users for your study? Type out the recruitment message (usually an email).

Test Script: Type out all the introductory text, task instructions, questions, and prompts that you will present to your users. Your task instructions should sound like something a real user would say they need to do – for example, “You want to send $20 to a friend to cover your share of dinner” rather than “Click Payments, then Send Money.” Provide enough detail and context for users to figure out what to do, but not so much that you give away the individual steps.

See our Usability Test Plan Template for more information and examples of how to plan out your usability test.

Each of the sections in the usability test plan helps ensure you are gathering the most useful, impactful, and valuable data possible and will put you closer to extracting insights that your stakeholders will be clamoring for. If you use these steps as a checklist and complete them in collaboration with key project stakeholders, you will have designed a stellar plan for your usability test!

If you need help with things like how to create usability tasks, see this guide by Nielsen Norman Group: Usability Test Checklist.

Now that we have an idea of how to plan our usability test, we need to decide what specific methods and tools we are going to use. Since you opened this article, I am going to assume you probably want to, or think you want to, use unmoderated usability testing. Keep reading to find out more about what it is, whether or not it fits your research goals, and how to set it up!

What is remote unmoderated usability testing, and why should I use it?

Remote unmoderated usability testing is a usability evaluation completed by users in their own environment without a researcher facilitating the activity. You provide instructions for one or more tasks that users complete by themselves, and the data from those tasks is sent to you upon completion.

A great benefit of unmoderated testing is that it requires less of your time per user than moderated testing. This makes it easier to collect larger sample sizes, or to spend more time working on other research projects while collecting data for your unmoderated test.

Unmoderated testing also allows users to complete the tasks at whatever time and location they choose. Although this creates less experimental control (testing environments will vary across participants, and they may experience more distractions and interruptions), it does allow users to be in a more natural, representative setting. Users may also feel less pressure to perform since they are not being actively observed.

Another great benefit of unmoderated research is that it removes variability that may be introduced by the facilitator speaking or acting differently across sessions. The way we introduce the research, describe tasks, and ask questions can have an impact on the results we get from users, and a facilitator can accidentally give away the task through poor probing technique. We can do a lot in moderated testing to reduce this variability, but there is still a risk of biasing users through unsystematic facilitator behavior. We still need to provide high-quality task instructions and questions in unmoderated testing, but we can be sure that each participant sees exactly the same thing.

When utilized at the right time for the right learning objectives, unmoderated testing can provide extremely valuable insights about the usability of an interface. You can use it to identify what is easy or difficult for users, see how users respond to and overcome (or fail to overcome) certain problems in the design, and define objectives for further investigation and design exploration.

Here are some examples of projects where unmoderated testing can be a great idea:

  • Determine how much time it takes the average user to complete a task

  • Compare the usability (completion rate, time on task, perceived ease of use) of different tasks on a product to identify the hardest and easiest tasks

  • Determine whether design changes have reduced or eliminated usability issues identified during prior usability testing

  • Compare two different design options to show which is superior in terms of usability

  • Catch last-minute red flags and low-hanging fruit before sending a design off to be built

  • Give leaders and stakeholders confidence in whether a design meets usability expectations, or help them choose between two designs

When should I not do remote unmoderated usability testing?

Unmoderated testing can be very powerful and impactful when used strategically, providing valuable insights into the usability (or lack thereof) of your product. However, there are some cases where you would be better off with another technique. Below are some examples of when you might want to use a different approach, as well as options for more appropriate research methods.

  • Early in development: If the product you are testing is early in development, especially if you are in more of an exploration stage, it will not make a lot of sense to use unmoderated testing. When you are early in the product development cycle, you benefit a lot from being able to ask users follow-up questions and probe deeply into what they did and why.

  • Limited existing qualitative research data: Similarly, if you have limited experience talking to and understanding your users, you might be better off using a method such as interviews or contextual inquiry that allows you to gather deep insights about their goals, motivations, pain points, behaviors, attitudes, and usage environments. If, while designing your unmoderated study, you find yourself wanting to add lots of questions between and after tasks, especially open-text response questions, you may need a more qualitative method!

  • Highly complex system: If the product you are testing is very complex, with tasks that require in-depth setup and instructions, it might be a good idea to moderate the session so you can thoroughly explain the activity and effectively observe it. You can tell this is the case if you are writing very long task instructions (paragraphs or more). If that happens, consider moderating the test so you can explain the tasks verbally in addition to the written instructions and answer participants’ questions. (But always remember not to lead them by telling them what to do!)

  • Small sample size: If you feel you only need a sample of fewer than five users, it may or may not be worth the effort to try to send this off and get responses back when you could just conduct a few sessions in-person or remotely and get deeper data. Needing a small sample size probably indicates that you are more interested in qualitative data, in which case a moderated usability test may be more effective. That said, I do occasionally conduct unmoderated testing with samples of five or fewer users – just make sure you are not missing out on the awesome data you get from moderated sessions if that is really what you need at this point.

  • Long test length: Finally, avoid unmoderated testing if you expect the test to take more than about 20 – 30 minutes to complete. While a moderated usability test may have 30 – 45 minutes of actual testing time, it is harder to keep people’s attention that long in unmoderated testing. If you have designed a longer test, you may want to either break it into smaller tests sent to more people, or use moderated testing to make sure users aren’t getting tired or distracted in the middle. (This also depends on how much you are compensating your participants – paying more means users will be willing to spend more time on your test.)

What if I am not sure which method to use, or want to do both?

If you are unsure whether you should conduct your usability testing moderated or unmoderated, consider how much time you have available to devote to it. If you have enough time to moderate the sessions, and you are not at a very late stage in the development process, you will almost certainly learn more by being there actively with your testers as they complete the tasks. If you have already completed lots of formative, qualitative research and have a good idea of users’ needs, goals, and pain points in the interface (especially if you’ve already designed a lot of previous pain points out), then you may not learn much extra by attending the sessions. In that case, unmoderated testing can be a great option.

Luckily, there is not much difference in the actual study design between moderated and unmoderated usability testing, so if you design a high-quality test for one style, it will work for the other with a few small tweaks to the script. A technique I have used is to design a usability test, moderate one or two sessions (inviting my team to observe, of course), and then send it out as an unmoderated test to a larger sample if I want to evaluate the findings with more people.

Now that we have learned what unmoderated testing is and when to use it, let’s discuss the tools and logistics you will need to create and run your test.

Tools for Remote Unmoderated Usability Testing

There are three key functions you need in place to run unmoderated usability testing:

  1. A way to show participants the task instructions

  2. A way to collect participants’ subjective data (task ratings, answers to follow-up questions)

  3. A way to collect participants’ behavioral data (time on task, completion status, task recording)

These needs hold true with either moderated or unmoderated testing, but with moderated, you can perform more of the heavy lifting yourself by recording the session, starting and stopping a timer, and asking users to rate tasks. With unmoderated testing, you have to find a way for these functions to happen without you there to perform them.

This is where usability testing software really shines and provides a lot of convenience for you as the researcher. Testing software will show participants the task instructions one-by-one, record their screen and audio, time their tasks, ask for ratings, track completion or failure (when possible, depending on what you are testing), and then send that data back to you in a handy little package.

However, you pay a premium for that convenience. Often, researchers are not in a position to ask for budget for a regular software subscription – maybe because user research is a new idea in your company that is still establishing ROI, or maybe you are just starting out and need to conduct testing for your portfolio on a student’s income. Whatever the case, as with most things in life, we can trade in our time and hustle to get the job done without money.

So, in order to conduct your testing, you just need to find a solution for each of the three key functions listed above. Those solutions will require some integration on your part to work together, and potentially a bit more effort from your users, but they can still work really well! I personally use these methods in my own UX research practice when needed.

One quick note: If you do have at least a small budget for usability testing software, it can really be worth it for the time and effort you save by using one fully integrated, specialized platform. If you want to see what software is available on a small budget, see this great guide from Nielsen Norman Group on usability testing software options. In particular, Loop11, PlaybookUX, and TryMyUI are great lower-cost options.

But you came here to learn about the free options – so let’s get to it! The following sections will examine a few tools that you can use to set up and execute unmoderated usability testing for free.

First, we will take a look at Google Forms – a tool for presenting task instructions and collecting survey feedback.

Google Forms: Showing Task Instructions and Collecting Subjective Data

One great way to both display task instructions and collect subjective task ratings is by using Google Forms, a free survey platform in Google’s suite of cloud-based applications. Unlike a lot of other free survey tools out there, Google Forms allows you unlimited questions, unlimited responses, and an unlimited number of active surveys, and it does not force you to have a big Google logo at the top. You can customize the color scheme and add your own banner for cohesive branding.

You can use Google Forms to show task instructions to participants and allow them to rate each task after completing it. You can also provide links and allow users to upload or link files.

Here is how to create a Google Form to administer an unmoderated usability test:

  1. Create an introductory page that thanks participants for stopping by, briefly explains what a usability test is and what to expect, and asks them to continue once they are ready to begin.

  2. For each task, create two sections.

    • The first section provides the task instructions. There are no questions on this page, just the task instructions in the title and a text block that tells them to click “Next” once they are finished with the task.

    • The next section still has the same task description in the title, but also includes whatever task-level questions you are asking. For me, that’s the Single Ease Question (SEQ): “How easy or difficult was that task?” which users rate on a scale from 1 (very difficult) to 5 (very easy).

  3. After all the task sections, create a section with your post-test survey questions. Here are some example questions you could include, but make sure to select questions that make sense for your particular study:

    • How was that experience overall?

    • What was the easiest part of this activity?

    • What was the hardest part of this activity?

    • What could we change to improve our product?

  4. Provide an area for users to upload or link their session recording if applicable. (More details on this below)

  5. Make sure to customize your Confirmation Message to something expressing your endless gratitude and describing anything else they need to be aware of (for example, if you will be sending them a gift card).

Want a template to get started? Open this Unmoderated Test Form Template and choose “Make a copy.” This will copy a Google Sheet. Next click the “Forms” button in the header menu and “Edit Form.” Now you can insert all your own task instructions and information! Easy peasy 😊
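
Once responses start coming in, you can export them from the linked Google Sheet as a CSV and summarize the post-task ratings yourself. Below is a minimal Python sketch of that analysis – the file name and rating-column names are hypothetical, so adjust them to match the actual question titles in your form.

    # Summarize post-task ease ratings exported from a Google Forms usability test.
    # The CSV file name and column names below are placeholders – rename them
    # to match your own form's question titles.
    import pandas as pd

    responses = pd.read_csv("usability_test_responses.csv")

    task_rating_columns = {
        "Task 1": "Task 1 - How easy or difficult was that task?",
        "Task 2": "Task 2 - How easy or difficult was that task?",
        "Task 3": "Task 3 - How easy or difficult was that task?",
    }

    for task, column in task_rating_columns.items():
        ratings = responses[column].dropna()
        print(
            f"{task}: n={len(ratings)}, "
            f"mean ease rating={ratings.mean():.2f}, "
            f"rated easy (4-5)={(ratings >= 4).mean():.0%}"
        )

Sorting tasks by their mean rating is a quick way to spot which flows deserve the closest look when you review the recordings.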

Limitations of Google Forms

One of the biggest limitations with using any survey tool to present usability tasks (rather than dedicated usability software) is that users can’t have the task instructions on screen while completing the task unless they open them on their phone or use a second monitor.

You can recommend to your participants that they open the survey on their phone and complete the tasks on their computer, or vice versa depending on what you are testing. If you are testing a physical product, then your users can easily open the survey on their phone or computer and complete the task with the physical product.

Another limitation with this method is that it does not automatically track task duration or record the user’s screen. There are a few ways around these issues, but they require extra work and, in some cases, a little bit of trust in your participants. The sections below describe a few creative ways to get session recordings and task times for unmoderated testing.

Zoom: Recording the Testing Session (paid account only)

There are a few options for recording your users’ screen, audio, and surroundings (for physical product testing) depending on what platform your users will be using for the test.

If you already have a paid Zoom account (only paid accounts can record without the host present), then by far the easiest option is setting up a Zoom meeting that participants join when they begin the session. You can place a link to the meeting in the recruitment email and/or the Google Form to make it easy for your users to connect to the call when they need to.

There are a few specific settings you need to enable when creating your Zoom meetings.

  1. Single or recurring meeting: First, you need to decide whether to create a separate meeting for each participant or one meeting link that all participants share. If you just use one link, two participants joining at the same time would see each other on the call. To avoid this, you can either create a separate meeting for each participant or have participants sign up for time slots so they do not overlap. (If you go the separate-meeting route, the scripting sketch after this list shows one way to create those meetings in bulk.) Signing up for a time slot may also help get people to actually complete your test, rather than putting it off, but use your discretion based on what you know about your participants.

  2. Let participants start without the host: “Allow participants to join anytime” so your users can start the meeting without you.

  3. Allow participants to share their screen: Check your screen sharing settings to make sure all participants – not just the host – are allowed to share, since the recording depends on users sharing their screen.

  4. Recording the session: The paid version of Zoom allows you to record the meeting to the cloud automatically once the participant joins the session. To enable this, check the box labelled, “Automatically record meeting,” and select “In the cloud.” I would also recommend going into your account settings and creating a recording consent message.
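
If you decide to create a separate meeting per participant, you do not have to click through the Zoom web portal for each one. The sketch below uses Zoom’s REST API to create scheduled meetings with “join before host” and automatic cloud recording enabled. It assumes a paid account and a valid OAuth access token, and the participant names and times are placeholders.

    # Create one Zoom meeting per participant, each allowing participants to
    # join before the host and recording automatically to the cloud.
    # Assumes a paid Zoom account and an OAuth access token (placeholder).
    import requests

    ACCESS_TOKEN = "YOUR_ZOOM_OAUTH_TOKEN"  # placeholder token

    participants = [
        {"name": "Participant 1", "start_time": "2024-06-03T15:00:00Z"},
        {"name": "Participant 2", "start_time": "2024-06-03T17:00:00Z"},
    ]

    for person in participants:
        payload = {
            "topic": f"Usability test - {person['name']}",
            "type": 2,                      # scheduled meeting
            "start_time": person["start_time"],
            "duration": 30,                 # minutes
            "settings": {
                "join_before_host": True,   # participants can start without you
                "auto_recording": "cloud",  # record to the cloud automatically
            },
        }
        response = requests.post(
            "https://api.zoom.us/v2/users/me/meetings",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            json=payload,
        )
        response.raise_for_status()
        print(person["name"], response.json()["join_url"])

Each join_url can then be pasted into that participant’s recruitment email or Google Form.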

Zoom accounts are relatively cheap, starting around only $150 per year – so it may be worth it for you to purchase an account for usability testing. It is much cheaper than usability testing software, which is typically at least $200 per month. But if you don’t want to or can’t purchase a Zoom account, see the sections below for other options to record your sessions.

Limitations of Zoom

Zoom is an awesome, intuitive way to have users record their screen and audio for you. It is relatively easy to use and people are familiar with it. Aside from the fact that you can only use this tool for recording if you have a paid account (ughh!), the main limitation is that users need to remember and be able to share their screen. Make sure to include instructions about this in your recruitment email and/or test instructions form. You can even provide a link to Zoom’s support page about screen sharing.

Screen Recording: Alternatives to Zoom

If for some reason you can’t get access to a paid Zoom account, there are a few other options for recording screens. None of these will be as easy to use as Zoom, but they can still do the trick.

  • If your users are on Google Chrome, they can use Screencastify, a browser extension that allows users to capture their screen, webcam, and audio and save that recording to Google Drive.

  • Macs and iPhones have their own built-in recording functions that are pretty robust. See more info on their support website here (Mac computers) and here (iPhones).

  • Unfortunately, Android phones do not all have integrated screen recording apps, so users would need to download a 3rd party app to record their screen, camera, and audio. The apps available change often, so just check out which apps have good reviews.

If you are requesting users to record their own screen and audio, you will want to include some additional instructions in your usability testing form explaining what to download and how to use it. If possible, it can be helpful to just pick one solution and ask everyone to use the same thing, such as everyone using Chrome and downloading an extension. Use your best judgement based on the type of users and product you are testing.

Don’t forget to include a question at the end of the test for participants to either upload or paste a link to the recording.

Limitations of Screen Recording Software (alternatives to Zoom)

Downloading recording software (even just a browser extension), setting up a recording, and then sharing the link or file is a lot to ask of users. Improve your odds of success by writing very clear and helpful instructions on what you want people to do, and by including those instructions at the point they are needed (integrated into the Google Form or however you are providing instructions) – and still expect hiccups to occur.

If you are testing with a population that has limited computer skills, you may not want to bother with these alternative screen recording options – just use Zoom or don’t request session recordings. (For that matter, you may not even want to bother with unmoderated testing with a group like that at all.)

If you are unable to get people’s recordings, it can be very difficult or even impossible to determine if participants completed the tasks successfully or not. You can always ask them whether they did, just remember that it is quite common for participants to mistakenly believe they completed the task successfully, when in actuality the task was not completed.

Ultimately, screen recordings can be incredibly valuable if you can get them. You can use them to observe users struggling and gather powerful video snippets and quotes to share with your team. If you decide not to or cannot ask for video recordings, you will still learn a lot. Even if you just sent users a list of tasks in an email and had them email you back some feedback, that would still be very valuable in many cases. But if your objective is to gather very precise metrics around completion rates and task times, you will need session recordings of some kind.

Tracking Task Duration

Knowing how long it takes users to do things can provide a lot of insight into the usability of a system, but without usability software, this process will most likely be pretty manual for you.

If you are able to get session recordings, you can manually review them and mark down the task durations. If you have a relatively small number of participants, this is not too much extra work – you will likely be reviewing the recordings anyway to determine completion rates and extract qualitative insights. Just note the time the user started the task and the time they clicked to go to the rating, and then subtract to get the duration.
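
If you note the recording timestamps where each task starts and ends, a few lines of Python can handle the subtraction and averaging for you. A minimal sketch with made-up timestamps:

    # Compute task durations from start/end timestamps (minutes:seconds into
    # the recording) noted while reviewing sessions. Values are placeholders.
    from datetime import datetime
    from statistics import mean

    FMT = "%M:%S"

    observations = {
        "Task 1": [("00:42", "02:10"), ("01:05", "02:31")],
        "Task 2": [("02:15", "04:58"), ("02:40", "06:12")],
    }

    for task, spans in observations.items():
        durations = [
            (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()
            for start, end in spans
        ]
        print(f"{task}: mean time on task = {mean(durations):.0f} seconds")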

Some survey tools, like SurveyMonkey, will track the amount of time people spend on each question in your survey. If you already have access to software like this, you can apply the Google Forms strategy outlined above to whatever paid survey tool you are already using and save yourself the trouble of tracking task durations manually.

Can I ask users to tell me how long it took them?

A last-ditch option is to ask users to time themselves or estimate how long the task took. Questions like this can be included with the rest of your post-task questions, like the ease-of-use rating. Having users time themselves adds extra steps for them, which may increase test fatigue. And having them rate the amount of time it took (something like “Very quick” to “Took too long”) may not add much beyond the ease-of-use ratings, because task duration has been found to correlate with subjective task ratings: when users take a long time on a task, they tend to rate it lower in subjective ease of use. This correlation does not mean you can substitute one metric for the other, but it does mean you are capturing at least a sliver of the same insight from the subjective ratings alone, even if you cannot track the time users take.
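
If you do end up collecting both task times and ease ratings, you can check how strongly they relate in your own data. A quick sketch using SciPy and made-up numbers:

    # Check the relationship between time on task and post-task ease ratings.
    # The data below are placeholders; expect a negative correlation
    # (longer tasks tend to receive lower ease ratings).
    from scipy.stats import spearmanr

    time_on_task_seconds = [45, 62, 88, 120, 150, 200, 240]
    ease_ratings = [5, 5, 4, 4, 3, 2, 2]  # 1 = very difficult, 5 = very easy

    rho, p_value = spearmanr(time_on_task_seconds, ease_ratings)
    print(f"Spearman correlation = {rho:.2f} (p = {p_value:.3f})")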

How do I know if users completed the tasks or not?

Task completion rates are notoriously tricky to track in an automated way, and in most cases, you will need to do some manual work on your end to accurately determine who completed each task successfully or not. For very specific types of tasks, such as navigating to a specific URL, specialized usability testing software may be able to automatically track task completion. However, this only represents a small subset of usability tasks. For most tasks, you will need to either review test recordings or use a creative tactic like asking users validation questions.

Reviewing test recordings: If you are already planning to review all the recordings to gather insights (which you probably should be doing anyway, or it will not be worth recording them), then you can mark down task completion outcomes as you go. Just be sure to plan ahead and determine what counts as a success or failure – brainstorm potential errors and decide which of them would constitute a full task failure.

Asking users validation questions: For some tasks, you may be able to include a question in the post-task rating section that will indicate to you whether the user completed the task or not. For example, if you ask users to find or purchase a specific product, you could ask them what the price was.
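
If your form includes validation questions like the price example above, you can score completion directly from the exported responses. A minimal sketch – the column names and expected answers are hypothetical, so match them to your own form:

    # Score task completion from validation questions in the exported responses.
    # Column names and expected answers are placeholders for your own form.
    import pandas as pd

    responses = pd.read_csv("usability_test_responses.csv")

    validation_checks = {
        "Task 1": ("What was the price of the item you found?", "$24.99"),
        "Task 2": ("What was the order confirmation number?", "10042"),
    }

    for task, (column, expected) in validation_checks.items():
        answers = responses[column].astype(str).str.strip()
        completion_rate = (answers == expected).mean()
        print(f"{task}: completion rate = {completion_rate:.0%} (n = {len(answers)})")

Treat this as a first pass – spot-check a few recordings to confirm the validation answers really line up with successful completions.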

Putting it all Together

Usability testing has a lot of moving parts, especially when you are avoiding the specialized software that holds it all together for you. Below is a basic checklist you can use to stay organized throughout the process.

  1. Use our usability test plan template to collaborate with your team and stakeholders to write a usability test plan that includes the test’s learning goals, tasks, desired sample, etc.

  2. Determine how you will recruit users for your test.

  3. Use our unmoderated usability test template to set up a Google Form with task instructions, post-task ratings, and post-test follow-up questions. (To use the template, make a copy, then click “Forms” and “Edit form,” and update to fit your test!)

  4. Set up a Zoom meeting for your participants to record the session.

  5. Test out your setup to ensure it will work for participants – run multiple “dry-run” sessions with coworkers or friends.

  6. Send recruitment emails to participants asking them to participate and to sign up for a specific time slot (if they will all be using the same link).

  7. Be available to field technical issues during the data collection phase.

  8. Send each participant a thank you message and incentive, if applicable.

  9. Collect and analyze all your awesome data.

  10. Share the findings with your team!

I hope this guide was helpful! Please reach out with any questions or comments.

Happy Testing!
