Do you have a great idea but wonder about the market fit?
Are you trying to build a product, but you’re not quite sure what people need?
Both of those are great reasons to do some user research!
What is user research?
At its most basic level, user research is just talking to people—your users—to learn something about their needs. That could be as big picture as identifying whether they’d have interest in buying your product, or as detailed as whether a green or blue button is more findable.
About me
My user research tends toward strategic, big-picture work. I help teams define whole products, major new features, possible feature enhancements, and key roadmap items.
I have a deep technical background and specialize in doing interviews with scientists or subject matter experts that other designers may find hard to follow or understand. I have a thorough, structured approach to interview preparation refined over 8 years of B2B SaaS industry experience to produce fully documented, traceable results.
The process
I’ve done a lot of research projects, and I find that this recipe works best:
- Initial consult to understand the project and figure out if we’re a good fit
- In-depth context call(s) with the working team to understand the project, the questions, the big unknowns and areas of greatest risk, and the current hypotheses
- Create a 1–2 page research plan to align on objectives and scope
- Develop a detailed interview script with nested questions to support flexible user interviews
- Hold individual or small group sessions with 3-5 internal experts to gather opinions, identify hypotheses, and train me to know what to listen for (and why it’s important) once we’re on the call. (This is also a great opportunity to find out how much your team already knows!)
- Regroup, refine plan and script based on initial conversations, and get really clear on who we need to talk to next
- Schedule some external calls and talk to people!
- Produce call transcripts and summaries for internal and external calls
- Consolidate results by topic or theme for a semi-condensed view
- Create PowerPoint slides or other top-level summaries to share out with the broader team
Output package includes: Research plan and scripts, all recordings, transcripts, consolidated “summary” document, presentation slides with results, and, if applicable, your AI archive to help you access and work with the results.
Optional extras:
- Research synthesis doc. This is more about interpreting than documenting results. The core research archive captures what users said, not what I think the core problems are; the synthesis doc interprets the research results into a more actionable form. In some cases a synthesis doc is really helpful; in others it’s not needed at all. (Sample available upon request.)
- Experimental, but exciting: if you have access to Google Notebooks or something similar, I have been playing around with creating AI research libraries to make results more findable and accessible to the broader team. I personally love this technology, and would be happy to help implement it on top of the core research archive items above. This also includes a “Context” document to help focus the results, and a “Fact check” document to correct misunderstandings and supplement what was heard on the different calls with context that might not be clear from what was said.
Now, you might be thinking that this sounds like an awful lot. And it is! Research is a big deal, and you want to make sure that you get it right. We can also slice and dice this in different ways, depending on the specific things you need to achieve. That’s what the context calls and research plan are for.
Finding the right mix
I have run some projects very light and done almost no analysis: the project lead was on the call, and got what they needed just from the conversations. Others were major research initiatives that defined our platform direction, and I still actively refer people back to the initial research archive 5 years later. For most things, I recommend the full archive for best insight, but that’s really up to you and what your team needs.
Here are a few “packages” to illustrate the ways we can flex with the different ingredients above:
- Option 1: Consult on the script, team runs the research. You may not really need me to run the calls, or to document the results. Sometimes it’s helpful to have someone help you to prepare, but then you’re ok to do it yourself. Maybe we do a few internal calls together, and then someone on your team takes over and does the rest.
- Option 2: Do the calls, skip the archive. Most of the magic happens in the calls themselves. It’s the practice of active listening that gets the answers you need. You can process, document, and analyze the results on your own (or decide not to do it at all). We’d probably want some kind of brain dump of things I heard/took away from each call, but that can be quick and informal if you want it to be.
- Option 3: Iterative scope. Run a couple of internal calls and see what we get. Hold one or two external calls, and then pause and decide whether or not to continue. I’ve had projects where we learned what we needed in just a couple of sessions, and others where a small project kicked off a much bigger initiative with a broader scope and a totally different perspective. You never know what you’re going to hear, and we’ll often need to adapt on the fly.
- Option 4: (Speculative) You run the calls, I synthesize the results. I’ve never done it this way before, but in theory you could collect the information and I could just listen to the calls and advise (formally or informally) on what you heard. This is a little harder because it lacks the back-and-forth needed to dig in or validate ideas on the call, but I can imagine some cases where it could work.
FAQ
(Click to expand the FAQs below…I can’t figure out why the WordPress block editor has the arrows turned off. Seems like bad UX!)
What kind of research do I need?
Strategic research focuses a bit more on the big-picture market needs, while user research tends to focus more on a specific person, process, or team. Really, in every conversation you’ll end up with a bit of both.
Click-by-click usability research is also helpful, but that usually comes later, when you have a proposed solution in hand, and it’s usually best to have the team working on the prototype solutions gather that feedback themselves.
In the end, it matters less what we call the research than that we’re clear on what we need to get out of it. That’s what the introductory context and research plan conversations are for. Often, we’ll start the discussion thinking we need one thing and find out halfway through that we actually needed something else.
When do you start research?
As soon as possible, and probably sooner than you think! Usually, you want to start well ahead of your project starting gun if you possibly can. The flip side is that you want to wait until you have a really clear idea of what your questions are. Open-ended, exploratory research can be really fun (and sometimes it’s what you need), but undefined objectives can also lead to unfocused results.
When working with an in-house product team, I would usually plan to start research 3-6 months before you need to start designing the feature. This leaves plenty of time for scheduling delays (especially for external calls), without compromising your work window.
How long does it take?
That depends on the number of interviews, the depth of analysis we need, how quickly your team can move with the internal calls, and how fast you can find and schedule conversations with external users. For most research projects I’ve done, the prep and internal calls take a couple of weeks, and then the external calls can take anywhere from a week or two to several months to schedule.
Once all of the calls are completed, analysis and final packaging of results usually takes about a month. If the external calls are spread out, most of the analysis is usually done by the time we finish the calls themselves. If you also want a full research synthesis, that may take another couple of weeks. Again, this can often overlap with other work.
Playing it safe, 3-6 months is a good window to plan, especially if your team is busy working on other things. If this is a high priority project with a focused team (and if I can clear the rest of my schedule to accommodate), it’s often possible to be done in a month or two. Access to users is usually the step that determines the rest.
Where do we find users?
That’s the million-dollar question! I’ll be counting on you to find and identify the right people here, to make sure we have the right group to really get at the things you want to know.
Usually, your best users are going to come from close by: people you’ve worked with, current or former clients, industry peers, and others likely to understand your product, your industry, and what you’re trying to do. Unless you are truly looking for an outside perspective, it’s often best to work with people who are at least a little familiar with what you do. Every user research engagement starts with calls with internal stakeholders, but it’s always best to have a few outside perspectives in the mix as well.
There are lots of services out there that sell access to users. I’ve worked with GLG (good, but expensive), have a profile on Office Hours (have not yet had a call), and a Google search for “GLG competitors” creates quite a list of services in this area. Each company tends to specialize in a specific industry or type of user, so it’s worth doing a little research and price comparison up front.
How many calls are enough?
It depends! The realistic answer is usually “as many as you can get!” If we’re talking truly external users, it’s usually tough to pull in more than 5–10. (At least for B2B, where end users are usually quite specialized.) Even one user can be helpful; 3–5 is usually a realistic but happy medium, and you often have to invite 10–15 to get 3–5. The exact number we shoot for will depend on the conversation that comes out of our research plan, and may change again once we get through the initial internal calls. This is normal, and OK.
Things I would consider when deciding how many people to pull in:
- How important is the insight? If this is going to change your whole business direction or determine your product strategy, more is probably better. More interviews usually give more diversity of insight, more certainty, and more depth.
- How big a risk is it if we get it wrong? If you’re just looking to verify something you already know pretty well or if it isn’t a big deal to change later if you get it wrong, a smaller sample size might be fine.
- How big is the impact? A smaller feature needs less scrutiny than a major feature or a new app.
- How much can you afford? External users tend to be pricey, at least in the B2B world. More users means more scheduling and analysis time, and it’s always a tradeoff between certainty and cost.