If you are a product person, you’ve probably heard this numerous times: “fail fast, fail often”. To many, this might sound like an invitation to ship whatever comes to mind, test it, and then assume that some “learning” happens afterwards that will lead to a better solution to a user problem.
Now take this scenario outside the product world. “Fail fast, fail often” would not work the way I described it in any other industry. You wouldn’t want your doctor to write you a random prescription in the hope that something makes you feel better. You wouldn’t want to go to a university for a degree and have every professor open the course book at a random page, hoping the material turns out to be relevant. Of course, many industries don’t have the agility and the ability to course-correct that the technology sector does, but the core principle still applies: being allowed to fail does not excuse you from being prepared to fail properly.
What in the world is “failing properly”?
Put simply, it means doing your due diligence in assessing the problem before you tackle it, and taking calculated risks. Here is how you can do that in a product management role (it’s really not a secret):
- Analyze existing data. Chances are, whatever you are coming up with has already been tackled in some shape or form - in a different product, in a different industry, or purely in the form of customer interviews and behaviors. You can draw insights from there and see what works and what doesn’t before you get to implementation. Be data-proficient, and loop in the data and research teams to help you in your assessment.
- Talk to potential users. As you think of a potential market for your product or feature (there may not even be one), talk to people to see whether what you are trying to solve is truly a problem. Keep in mind that external customers rarely know the solution, but they are experts in the problem space.
- Define experiments. Once you have the foundations (data), you can define a Minimum Viable Product (MVP) to put in front of a limited audience and see whether the idea resonates with real users. No business plan survives first contact with customers, and no feature is perfect in its first iteration. Check out “Online Controlled Experiments: Lessons from Running A/B/n Tests for 12 Years” for a more in-depth overview of the topic.
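To make the experiments step concrete, here is a minimal sketch of the statistics behind a two-variant A/B test: a two-proportion z-test comparing conversion rates. The function name and the conversion numbers are hypothetical, chosen purely for illustration.

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B experiment.

    conv_a/n_a: conversions and sample size for the control,
    conv_b/n_b: the same for the variant.
    Returns (z, p_value).
    """
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Two-sided p-value from the standard normal CDF, built from erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical MVP rollout: control converts at 10%, variant at 13%
z, p = ab_test(conv_a=100, n_a=1000, conv_b=130, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (commonly 0.05) suggests the lift is unlikely to be noise; in practice you would also pre-register the metric and sample size before launching, rather than peeking as results come in.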
Yes, there are only three things you need to account for at a high level. The processes behind each might differ (I am not even going to discuss tools…yet), but the picture becomes clearer. You fail fast by having the “parachute” ready to deploy whenever things go wrong: the metaphorical device representing everything you’ve put in place to empower you and your team to make informed decisions. Failing fast is really learning fast, and you can only learn if you have some structure and a clear understanding of what you’re after. Without this, you are effectively gambling with your engineering resources and everyone else involved in the release.
Have any thoughts? Let me know on Twitter!