User Research

User research helps us understand user behaviors, needs, and motivations. This understanding helps us create better experiences for users.

User research can be either qualitative or quantitative.

Qualitative research gathers non-numerical data about subjects’ feelings, opinions, and experiences. It helps us understand why users perform certain actions, and it often involves interviews with open-ended questions, such as asking why something is easy or difficult to use. Qualitative methods are best suited for answering questions about why a problem occurs or how to fix it.

Quantitative research gathers data that can be measured numerically. It can answer questions like “How many people could find the call to action?” or “What percentage of users made a particular error?” Quantitative research is valuable for understanding statistical likelihood and, much like data analytics, what is happening on a site. Analytics offers similar insights into the quantitative data around a site.
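As a minimal illustration of the percentages mentioned above, the metrics can be computed directly from raw test results. (The participant records and numbers below are invented for the example.)

```python
# Hypothetical results from a usability test: one record per participant,
# marking whether they completed the task and whether they made a given error.
results = [
    {"completed": True,  "made_error": False},
    {"completed": True,  "made_error": True},
    {"completed": False, "made_error": True},
    {"completed": True,  "made_error": False},
]

n = len(results)
completion_rate = 100 * sum(r["completed"] for r in results) / n
error_rate = 100 * sum(r["made_error"] for r in results) / n

print(f"Completion rate: {completion_rate:.0f}%")  # 75%
print(f"Error rate: {error_rate:.0f}%")            # 50%
```

With more participants, rates like these become stable enough to compare designs or track improvement across releases.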

Results from both of these types of research can prove or disprove assumptions and find commonalities across target audiences.

Every UX project is different and could require one or many different types of tests to examine the assumptions in question. Some of the most popular forms of research are usability tests and card sorts.

Does your program office want to perform user testing? Reach out to Digital Comms and set up an initial chat!

Types of User Testing

Usability Testing

Usability testing evaluates a product or service by testing it with real users from the target audience in a controlled setting. Test participants usually complete a set of tasks and their observable behavior provides qualitative and quantitative data that helps determine the usability of the product or service. There can be variations of usability testing, but the three most common are moderated, unmoderated, and guerrilla.

Moderated usability tests are the most traditional and what we use most often at ACF. They can happen in person in a lab or more informal setting (like a conference room), or online via screen share. In a moderated test, a facilitator talks with the participant as the participant works aloud to complete given tasks or scenarios. The unbiased facilitator helps the participant feel comfortable while performing the test, but also probes to evaluate the effectiveness of a design and test assumptions.

Unmoderated usability tests are similar to moderated tests, except participants perform the tasks at their own convenience. After the tasks are delivered, participants record their screens as they perform the test. They are still encouraged to speak aloud during completion and note any particular points of frustration or ease. Although there is no facilitator to ask follow-up questions for additional insight, unmoderated tests are usually less time-consuming and can be less expensive, since you don’t have to pay for a facilitator’s time, equipment, or space.

Guerrilla tests are very similar to moderated usability tests but are typically done in nontraditional places, like a coffee shop or an office hallway. Facilitators briefly stop random passersby and ask them to complete basic tasks. Depending on the study, obtaining reliable results can be difficult because participants are not carefully chosen. This method of testing is best used for sites or applications with a very broad and varied target audience.

Card Sort

Card sorts help to explore relationships between content and better understand information architecture (IA). We often use card sorts to help reorganize content, create new site maps, and better understand the content hierarchies a user perceives. In a card sort, a user is provided a set of terms and asked to categorize or group them.

In a closed card sort, users are given category names in which to organize terms. In an open card sort, users create whatever categories they deem appropriate. The type of card sort you use depends on the goals and constraints of the test or project. For example, if you already have an agreed-upon navigation, you may want to perform a closed card sort so users must organize terms into the predetermined categories.
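One common way to make sense of open card sort results is to count how often participants placed each pair of terms in the same group; pairs grouped together by most participants suggest related content. The sketch below assumes hypothetical participants, terms, and category names invented for the example.

```python
from collections import Counter
from itertools import combinations

# Hypothetical open card sort results: each participant groups the same
# terms, but the participant-created category names vary.
participants = [
    {"Money": ["Grants", "Funding"], "Help": ["Contact", "FAQ"]},
    {"Cash": ["Grants", "Funding", "FAQ"], "Support": ["Contact"]},
    {"Awards": ["Grants", "Funding"], "Questions": ["FAQ", "Contact"]},
]

# Count co-occurrence: how many participants put each pair of terms
# in the same group (category names themselves are ignored).
pair_counts = Counter()
for groups in participants:
    for terms in groups.values():
        for pair in combinations(sorted(terms), 2):
            pair_counts[pair] += 1

for (a, b), count in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {count} of {len(participants)}")
```

Here “Grants” and “Funding” would co-occur for all three participants, a hint that they belong under one navigation category.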