User research

Get out of the building (or stay in, but at least talk to actual users) to validate your hypotheses about the users and to learn about their actual needs, actual usage situations, etc. through interviews. This is absolutely necessary.

The reason for actually leaving the building is to see the users in action, and to see what happens before and after the interaction with the tool. For instance, one user may take a screenshot of the result (something you would never notice when observing the user's interactions remotely), showing that there might be a need for exporting the result. There are tons of things to learn that you will never find out by staying at home.

For some products, you might have users all over the world, and for practical reasons you will not meet them and see them work in person. You might have to do remote interviewing or similar. The drawbacks of this must be taken into consideration when weighing the results of the user research.

What you get out of user research are answers to the following questions:

  • Does your assumed demographic or user exist?
  • What is important to your users?
  • What patterns and problems exist?
  • What is the context and meaning?


Contextual interviews

A common research technique is called contextual interview or inquiry. Contextual interviewing defines four principles to guide the interaction:

  • Context - Interviews are conducted in the user’s actual workplace. The researcher watches users do their own work tasks and discusses with them any artifacts they generate or use. In addition, the researcher gathers detailed re-tellings of specific past events when they are relevant to the project focus.
  • Partnership - User and researcher collaborate to understand the user’s work. The interview alternates between observing the user as they work and discussing what the user did and why. Early in the interview, the researcher positions themselves as the beginner/apprentice/pupil, establishing a teacher-pupil relationship that encourages the user to explain in more detail.
  • Interpretation - The researcher shares their interpretations and insights with the user during the interview. The user may expand or correct the researcher’s understanding.
  • Focus - The researcher steers the interaction towards topics which are relevant to the team’s scope.

A contextual interview generally has three phases:

  • The introduction - The researcher introduces themselves and shares their design focus. They may request permission to record and start recording. They promise confidentiality to the user. They agree with the user on the specific tasks the user will work on during the interview, such as reaching a goal in a prototype.
  • The body of the interview - The researcher observes the work and discusses what they see. They take handwritten notes of everything that happens. Here, the think-aloud protocol may be used to capture all nuances.
  • The wrap-up - The researcher summarizes what they learned from the interview, offering the user a chance to give final corrections and clarifications.

Interviewing specifics

Apart from always trying to pose questions as open-ended, e.g. “Can you describe…” rather than “Name three things…”, there are a lot of other important things to think about. As an interviewer you have two roles to play: the tour guide and the therapist. The tour guide is responsible for telling participants what to do, keeping them moving, and keeping them happy. The therapist’s job is to get the participants to verbalize their thoughts along the way. Usually you do this by using the think-aloud protocol (described below) during a usability test of some kind. For a user research interview, you can try the following (taken from Steve Portigal’s great book Interviewing Users: How to Uncover Compelling Insights):

  • The silence

    After you ask a question, be silent. This is tricky; you are speaking with someone you’ve never spoken to before. You are learning about her conversational rhythm, how receptive she is to your questions, and what cues she gives when thinking about an answer. These tiny moments, from a fraction of a second to several seconds, are nerve-racking.

    Ask your question and let it stand. Be deliberate about this. To deal with your (potentially agonizing!) discomfort during the silence, give yourself something to do, for instance slowly repeat “allow silence” as many times as it takes. Use this as a mantra to calm and clear your mind (at least for the moment). If the person can’t answer the question, she will let you know.

  • The paragraph

    People do speak in paragraphs. You can see evidence of this by looking at an interview transcription: the pauses between blocks of content are interpreted by the transcriptionist as paragraph breaks. Make sure you leave space after the user’s last paragraph, in case a new one is coming, before you ask the next question.

  • The flow

    Strive to weave the questions from your interview guide into follow-up questions. Not everything can be a follow-up. Some threads run out of steam, or sometimes you need to deliberately change the discussion in order to dig into a specific area of interest. The guiding principle here is to signal your lane changes. If you want to change subject, use a more deliberate sentence to do it. Example: “Ah, great input. Now, I’m going to shift direction here. Let’s say you have [this new] situation; maybe you can tell us a little about…”.

  • The feelings

    Ask about emotions such as “Why do you laugh when you mention Aftonbladet?” Emotions are often a great cue for follow-up questions.

  • The assumption

    Always assume that you do not understand, and follow up by asking if the participant can give an example or explain the answer in another way.

  • The language

    The participant will have her own words for things in the domain. Let her have them; do not correct her with your word for it. Use her own words when asking follow-up questions. It can be challenging to use someone else’s terminology and still feel authentic. Going a little meta, such as “I want to ask more about the thing you called the batcave”, will help the situation.

  • The wait

    Hold your thoughts when something would need a deeper explanation from you to the participant. If you feel the need to correct, stifle it. If you start laying out facts, the interview will change from you being the listener to you being the interviewee. If you must explain (and most of the time you do not), wait until the very end of the interview session.

Interview guide

The interview guide is your hypothesis for how you will ask questions. But really, you’ll spend much of your effort in the interview digging further and giving your participant the best opportunity to share deeply. You need a broad set of question types to make this happen. Remember, this is just a guide, not a law. Here are some examples from the above-mentioned book by Steve Portigal to get you started:

Questions that gather context and collect details:

  • Ask about sequence. “Describe a typical workday. What do you do when you first sit down at your station? What do you do next?”
  • Ask about quantity. “How many files would you delete when that happens?”
  • Ask for specific examples. “What was the last movie you streamed?” Compare that question to “What movies do you stream?” The specific is easier to answer than the general and becomes a platform for follow-up questions.
  • Ask about exceptions. “Can you tell me about a time when a customer had a problem with an order?”
  • Ask for the complete list. “What are all the different apps you have installed on your smartphone?” This will require a series of follow-up questions—for example, “What else?” Very few people can generate an entire list of something without some prompting.

Questions that probe what’s been unsaid:

  • Ask for clarification. “When you refer to ‘that,’ you are talking about the newest server, right?”
  • Ask about code words/native language. “Why do you call it the bat cave?”
  • Ask about emotional cues. “Why do you laugh when you mention Aftonbladet?”
  • Ask why. “I’ve tried to get my boss to adopt this format, but she just won’t do it….” “Why do you think she hasn’t?”
  • Probe delicately. “You mentioned a difficult situation that changed your usage. Can you tell me what that situation was?”
  • Probe without presuming. “Some people have very negative feelings about the current government, while others don’t. What is your take?” This is preferable to the direct “What do you think about our government?” or “Do you like what the government is doing lately?” The indirect approach attributes the options to the generic “some people” rather than to the interviewer or the interviewee.

Questions that create contrasts in order to uncover frameworks and mental models:

  • Compare processes. “What’s the difference between sending your response by fax, mail, or email?”
  • Compare to others. “Do the other magazines also do it that way?”

Questions that give deeper insights:

  • “Let me rephrase that and you can tell me if I understood it correctly…”
  • “What would you say is the great value in that?”
  • “You mentioned [this], can you elaborate?”

Think-aloud protocol

Think-aloud protocols involve participants thinking aloud as they are performing a set of specified tasks (given by the interviewer or while explaining in a contextual interview). Participants are asked to say whatever they are looking at, thinking, doing, and feeling as they go about their task. This enables observers to see first-hand the process of task completion (rather than only its final product). Observers at such a test are asked to objectively take notes of everything that users say, without attempting to interpret their actions and words. The purpose of this method is to make explicit what is implicitly present in participants who are able to perform a specific task.

User journeys

User journey

The name user journey can refer to two things: a type of presentation of user research (like the image above), or the actual method for finding out what the user does and what problems they have when doing it. (A representation of how the user should work is generally called a user scenario.)

The method is based on a template that you go through together with the user during an interview. You ask these things:

  • What are you doing? What are the steps you need to go through, including before and after the actual main operation?
  • How are you doing it specifically? Where are you performing a step, in what context, and with what kind of tools?
  • What do you want to achieve? What do you want to learn? How do you want to feel?
  • What did you feel? What was the actual experience when doing the step?
  • Is there a possibility? If there’s a bad user experience in one of the steps, what could you do (in generic terms, not solutions) instead?

But you do not ask all of these at once. Start by getting the user to explain the steps (from start to finish), digging into the hows along the way. If you feel that the user has an opinion on some of the steps, negative or positive, go back to those steps when you’re done with the whole journey, and ask what they wanted to achieve/learn/feel and what they actually felt. When you’ve explored the painful steps, think about possible changes/improvements to the user journey together with the user. So, you rarely fill out all the blanks in the template, just the important parts.

User journey map

And when you’re done you can present it as the diagram at the beginning of this topic. In that case, a measurement was added to the user journey method: the users were asked, on a scale from -10 to 10, how they rated the experience, and the result was plotted in the diagram.
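The aggregation behind such a diagram can be sketched in a few lines. This is a minimal example, assuming every interviewed user rated every step on the -10 to 10 scale described above; the step names and ratings are hypothetical.

```python
from statistics import mean

# One dict per interviewed user: journey step -> experience rating (-10..10).
# The steps and values are made-up illustration data.
ratings = [
    {"find tool": 4, "import data": -6, "run analysis": 7, "export result": -2},
    {"find tool": 6, "import data": -8, "run analysis": 5, "export result": 1},
    {"find tool": 3, "import data": -4, "run analysis": 8, "export result": -5},
]

steps = list(ratings[0])  # preserve the journey order

def journey_curve(ratings, steps):
    """Average the experience rating per step, across all users."""
    return {step: mean(r[step] for r in ratings) for step in steps}

curve = journey_curve(ratings, steps)
for step, score in curve.items():
    # Steps with a clearly negative average are the pain points to explore.
    print(f"{step:>15}: {score:+.1f}")
```

Plotting `curve` in journey order gives the kind of experience graph referred to above; the negative dips mark the steps worth revisiting with the user.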

Card sorting

Card sorting is a simple technique where a user, however inexperienced with design, is guided to generate a category tree or other structure. It is a useful approach for designing information architecture, menu structures, or website navigation paths. It can also be used for finding and prioritizing features or steps in a workflow.

Here is a general version of card sorting, a so-called open card sort:

  1. A user is given a set of index cards with terms already written on them.
  2. This person puts the terms into logical groupings, and finds a category name for each grouping.
  3. This process is repeated for all users.
  4. The results are later analyzed to reveal patterns.

The closed version instead uses predefined categories, or no categories at all.

If the sort is for features or steps in a previously unknown workflow, there might be a set of prewritten index cards, but also blank ones for the user to write new features/steps on when needed. In this type of sort there are usually no categories, just one (prioritized) list (of consecutive steps).
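One common way to analyze open card sort results and reveal the patterns mentioned in step 4 is a co-occurrence count: for each pair of cards, count how many participants placed them in the same group. A minimal sketch, with hypothetical card names and groupings:

```python
from collections import Counter
from itertools import combinations

# One list of groups per participant; each group is a set of card names.
# These sorts are made-up illustration data.
sorts = [
    [{"invoice", "receipt"}, {"login", "password"}],
    [{"invoice", "receipt", "password"}, {"login"}],
    [{"invoice", "receipt"}, {"login", "password"}],
]

def co_occurrence(sorts):
    """Count how many participants grouped each pair of cards together."""
    counts = Counter()
    for groups in sorts:
        for group in groups:
            # sorted() makes the pair order stable, so (a, b) == (a, b).
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

for pair, n in co_occurrence(sorts).most_common():
    print(f"{pair[0]} + {pair[1]}: grouped together by {n} of {len(sorts)} users")
```

Pairs with a high count relative to the number of participants are strong candidates for belonging in the same category of the final structure.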

The actual research

We will be setting up meetings with all the users we recruited, to learn as much as possible. When we get into details, such as when usability testing a prototype, we pick a subset of these people, but to start, we interview as many as possible.


Start with creating an interview guide. Add a questionnaire for the measurable needs and goals. If possible, find a trigger to bring to the interview, to focus the user around the correct topic (for instance a persona, a set of closed questions on the topic or a prototype).


The right incentive amount depends on where you are doing research and what you are asking of the participant. You want a simple and direct way to demonstrate your enthusiasm and appreciation that does not skew the results. You should very rarely go down the monetary path. With that said, a cup of coffee might be just right.

Interview guide

An interview guide will not survive first contact with the participants (as with any battle plan), but having a guide gives structure and makes sure that we will not forget anything. The example questions in the structure below are mainly there to help validate hypotheses. Here is an example of a good structure:

Introduction (5-10 minutes)

We start with an explanation of the agenda, then explain the reasoning behind the interview.

“This interview will start with an introduction of us/me, then I will ask a broader set of questions about you and your daily work. After that we will go into details about [topic]. We’ve scheduled this for [x] minutes; is it important that we keep that exact time?”

Then we explain who we are and why we are doing this, what we will use the results for, and what the participant will gain from helping us. This takes care of two of the four Swedish ethic rules for research (informationskravet, nyttjandekravet). We make sure we cover the other two (konfidentialitetskravet, samtyckeskravet) with the following:

“I would like to talk to you today about your needs concerning [topic]. I have lots of questions to ask you, and I am interested in hearing your stories and experiences to understand what motivates you. Everything we discuss is to help us build a better tool. Your answers will not be shared with anyone else. There are no wrong answers; this is information that helps us direct our work. Does this seem reasonable to you?”

Overview (5-10 minutes)

For the next part, we want to get the participant talking. We want to shift our role from being the person who talks to being the listener. We do that by starting the discussion with a couple of questions that are easy to answer. They will also give us a good background for interpreting the answers to the tougher questions below. We use a start-off phrase, and we add relevant follow-up questions to the second question that can help us categorize users. We should also use this part of the interview to build rapport, to small talk, and to make sure we understand each other’s talking rhythm.

“So, to start off, can you tell us a little about yourself - what you do, hobbies, etc - so that we get to know you?”

“Can you tell us about your background? Education, previous work experiences?”

Possible follow up: “How many years?” Possible follow up: “To what extent do you use web based tools in your work?”

Main body (10-20 minutes)

Here’s where the magic happens. As the name suggests, this is where the main questions are asked. This is also where you would put any tasks or activities that you want your participant to carry out. The aforementioned trigger could and should be used in this section, to focus the participant’s answers if needed. Make sure you get the answers you need here by probing deeper, asking follow-up questions that approach the topic from a different perspective, etc.

In the following example, we target the user’s needs from two directions, one open set of questions and one closed set, and we start out by understanding what happens before, during, and after, to get the context. The last question works as a trigger, albeit late in the interview to keep an open mind.

“Can you tell us about your typical work day?” Follow up: “Can you list your work tasks concerning [topic]?” Possible follow-up/clarification: “What kind of deliverables do you have in this case? Are you working with other persons to achieve this?”

“What was important when you used the tool the last time?” Possible follow-up: “In terms of actual needs? In terms of your user experience?”

“I will now ask you a set of questions for you to rate on a scale from 1 to 7, where 1 is lowest priority and 7 is highest. On a scale of 1 to 7; How important is it to make sure your budget is not overspent? On a scale of 1 to 7; How important is it for you to be able to easily communicate your work and results to someone else? […]”
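The closed 1-to-7 ratings above become useful once aggregated across participants. A minimal sketch of that aggregation, assuming one list of scores per question; the questions and scores here are hypothetical.

```python
from statistics import mean, median

# Collected 1..7 ratings per closed question, one score per participant.
# These responses are made-up illustration data.
responses = {
    "budget not overspent": [7, 6, 7, 5],
    "communicate results":  [3, 4, 2, 6],
}

for question, scores in responses.items():
    # The median resists the occasional extreme rating better than the mean,
    # so reporting both gives a fuller picture of the priorities.
    print(f"{question}: mean={mean(scores):.2f}, median={median(scores)}")
```

The per-question averages give a first priority ordering of the needs, and any participant whose score deviates far from the median is a good candidate for a follow-up question.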

Exploration (5-10 minutes)

When we get closer to the end of the interview, we have the chance to ask more audacious questions: about future needs, about uses of the tool that aren’t standard, etc. This is also a good place to follow up on earlier discussions and drill down into specifics. A good idea here is to ask for a minute to look back over your notes, to see if any specific questions have been missed from your point of view.

For example: “You mentioned [something earlier]. Is this always the same, or does it change? Why do you do it this way?”

[Measure the usability](/docs/measuringusability/) (5-10 minutes)

You may use the closed trigger questions from the main body example above, and/or you can end the interview with a survey (on paper) to use both for measuring usability and as a base for follow-up questions. If the participant answers something that might be interpreted as a bit odd, you have the possibility to ask for clarification. Here, you can uncover a lot of hard-to-find truths.

Wrap up (5 minutes)

In this final part, it is important to give the opportunity to the participant to ask questions back. If there is a way to summarize what you’ve discussed, do that here, to make sure you’ve understood things correctly. And, do tell what happens next. For example:

“Is there anything you think we haven’t talked about during this session that we should?”

“What will happen now is that you will get a survey sent to you in a couple of weeks. It summarizes some of the things that we talked about today, so we will be very happy if you would answer it.”


Debrief

This is a very important step. The longer you wait with the debrief (say, until the next day or the next week), the less you will remember, and the more jumbled up the different interviews will become. Make sure you do it as soon as possible after the interview, and that all the important people who will continue working with the actual result are there. By that, I do not mean the people who will build the implementation, but the people who will build the persona or the immediate design.

Structuring information

The results of the interviews have to be analyzed and structured to give value. One way to structure demographics/types of people is to use user graphs to create personas. A lot of the information collected during user research will fit well in an impact map.