
Quantitative And Qualitative Metrics


This question has probably come up for anyone who has ever worked as a PM - when should I use quantitative metrics, and when should I rely on qualitative ones? Getting the answer right ensures that the right data is used for the right part of product planning.

Let’s start by defining the terms. Quantitative data is grounded in numbers - how many users do we have, what percentage of them are happy, what is the ratio of converted to non-converted users? Qualitative data is grounded in information that can’t be precisely measured - what do users think about a feature, what gaps did they identify in the service?
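To make the distinction concrete, here is a minimal sketch of the kind of quantitative metric I mean - a conversion rate computed from raw counts. The numbers are made up purely for illustration.

```python
# Hypothetical counts, e.g. pulled from an analytics dashboard (illustrative only).
total_users = 12_400
converted_users = 930

# A quantitative metric: the conversion rate as a percentage of all users.
conversion_rate = converted_users / total_users * 100
print(f"Conversion rate: {conversion_rate:.1f}%")  # Conversion rate: 7.5%
```

The qualitative counterpart would be the interview notes explaining why the other users never converted - valuable, but not something you can compute.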

When it comes to making product decisions, I have made it a habit to first seek out qualitative data - it helps you identify whether the problem you are trying to solve is, in fact, a problem. Determining product-market fit is much easier when you talk to potential customers and try to understand how they would use your product. This is especially helpful because, chances are, you don’t really have the numbers yet. When you are working on a new product, there may simply not be enough research data out there to tell you whether specific issues exist for customers. That’s when you can rely on interviews and customer studies to set the foundation. Qualitative data is great at helping you figure out whether you are on the right track, but it also hides an underlying issue that is very easy to overlook - you are not getting data at scale. No matter how much qualitative research you do, you will probably be counting users in the tens, while your product will operate at a very different scale (unless you are building something very niche).

Therein lies the danger of making decisions based solely on qualitative data. If you do that, you run the risk that:

  • Your users are biased, and the information you collected might not be entirely accurate.
  • You got lucky and everyone you talked to somehow pointed you in one direction, when your product needs to go the other way.
  • You asked the wrong questions or drew the wrong conclusions from the limited set of answers.

Say you are thinking of building a new music discovery service, and you want to know what would help users the most in finding music they like. You put together a list of a couple of universities to visit and collect data from students (after all, they might listen to a lot of music). You venture out, ask all the questions on your forms, and come up with some insights - it looks like people really like country music, they care about finding more local country artists, and they want to know where the nearest concerts are.

In the case above, you can check off all three risks I outlined:

  1. Biased users. While you thought you were interviewing a fairly diverse group of people, you did not take into account where you conducted the study. Music preferences at the University of Kansas are probably going to be different from those at UCLA. All of a sudden, some of your insights become TBU - true, but useless.
  2. “Luck”. Because the group of users you talked to had some larger commonality that you did not account for, they seemed to indicate a certain trend that was not representative of the larger population your product might be targeting.
  3. Wrong questions asked. You were asking about music preferences when you should have been asking about the discovery process.

Each of these deserves a post of its own, but the point is that you should not make product decisions based exclusively on qualitative data unless no other data is available and you are still seeking product-market fit. Even then, you are not absolved of the responsibility to ensure that you are asking the right questions of the right audiences. And you should still test your assumptions with quantitative data as soon as humanly possible.

I myself am a big fan of quantitative metrics - numbers rarely lie, and while they can be subject to interpretation, at the end of the day they point to the right insights much more quickly and reliably. This is why your goal is to get to the point where you can collect quantitative data as soon as possible. The moment you have a minimum viable product (MVP), you should be able to watch the numbers roll in and show you exactly how your product is being used (or not used). The same applies to experimentation - whenever you are preparing to ship a new feature, setting up an experiment that shows you how a chunk of your users engage with what you are building gives you a significant signal to guide the development process, and it carries much more weight than talking to a limited audience.
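As a rough illustration of that experiment signal - and this is a sketch with made-up event data and variant names, not a prescription for any particular analytics stack - comparing engagement between a control group and a group exposed to the new feature can be as simple as this:

```python
from collections import Counter

# Hypothetical experiment data (illustrative only): users who engaged with the
# feature at least once, tagged with the variant they were assigned to.
engaged_users = [
    ("u001", "control"), ("u002", "new_feature"), ("u003", "new_feature"),
    ("u004", "control"), ("u005", "new_feature"), ("u006", "new_feature"),
]

# Hypothetical number of users assigned to each variant.
assigned = {"control": 100, "new_feature": 100}

# Count engaged users per variant and compute an engagement rate.
engaged_counts = Counter(variant for _, variant in engaged_users)
for variant, total in assigned.items():
    rate = engaged_counts[variant] / total * 100
    print(f"{variant}: {rate:.1f}% of users engaged")
```

Even a crude comparison like this covers orders of magnitude more users than you could ever interview, which is exactly why the quantitative signal carries more weight here.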

Numbers, coupled with qualitative data, will give you better insight into the right direction than looking at either in isolation. Obviously, when collecting quantitative data, you want to make sure you are tracking the right metrics - “garbage in, garbage out” can bite you (that, and vanity metrics).

To conclude - use qualitative data to validate initial assumptions, and quantitative data to validate the direction and get hard numbers on whether those assumptions were correct. Use them in tandem for the right situation, and you will maximize your ability to ship the right thing.