I am a former Trustee, alumna, parent, donor and longtime community member of a non-profit organization whose mission is “changing lives for good” through the community’s gifts of time, talent and treasure. As a hundred-plus-year-old organization with a diverse group of alumni, staff and trustees, we strive for our community to give beyond dollars (treasure) and to pay it forward with their time and talents. This mission is something I deeply believe in beyond this particular non-profit; it extends into my personal and professional life, and it’s something I value and look for in the people I work with.
If you follow me on Twitter, you know we just wrapped up an eight-day Startup Bootcamp at Harvard Business School (HBS). The program was brought to life four years ago by my mentor and friend, Professor Tom Eisenmann, who recognized a growing need for our first-year MBA students to explore their startup ideas and understand the world of entrepreneurship through experiential learning. First-year HBS students enroll in a prescribed curriculum known as the Required Course year, and these students are referred to as “RCs”. RCs who want to start companies, join startups or learn about venture capital have limited time outside of classes and other school-related activities to pursue those interests in their first year. Startup Bootcamp created that time and programming to explore before their second (“Elective Course”) year. Approximately 200 students take time out of their January break to return to Boston and immerse themselves in startup land. Startup Bootcamp is free to enrolled students and appears as a Pass/Fail course on their transcripts.
Those who know HBS know that our primary teaching approach is the case method. It is an excellent way to help our students understand the complexities and challenges of business through the lens of a diverse set of protagonists and companies around the world. For many of the hundreds of cases our students read, we invite the protagonists into the classroom to share their perspective and discuss “what happened after the case” with our students. This time with these leaders is invaluable to students, and the leaders often get as much out of the experience as they give in imparting wisdom and sharing lessons learned.
In the past few decades we have introduced more field courses to complement the case method with a learn-by-doing approach. Like our live case discussions with protagonists, the HBS field courses tap some of the top entrepreneurs and industry experts in the world. Startup Bootcamp is no exception! To pull off an intense eight days of programming, we drew in over 70 guests throughout the week. These guests did everything from delivering keynote talks and serving on panels to offering hours of coaching time and workshops. Each guest donated their valuable time, treasure (many paying their own way to travel from all over the US) and immeasurable talent to be part of this program. Our students were blown away by the quality of content each guest provided and were grateful for every ounce of insight they received on their ideas, their teams and their futures as entrepreneurs, joiners of startups and members of the venture capital community.
Tagging over seventy people is well above the maximum limit for most social media platforms, so herewith, a hearty THANK YOU to all of the guests who joined us last week. We absolutely could not have done it without you!
Gil Addo Gideon Ansell Henry Ancona Berlynn Bai Jay Batson Eliza Becton Edward Berk Ethan Bernstein Peter Bleyleben Jana Boruta Jeff Bussgang Bobbie Carlton David Chang Chuck Collins Maggie Crowley Karen Devine Brian Doll Richard Dulude Doug Fox Dave Gerhardt Jodi Gernon Shikhar Ghosh Rob Go Jamie Goldstein
Sean Grundy Rohit Gupta Christian Heim Jason Hines Sarah Hodges David Hornik Alex Iskold Jennifer Jordan Howard Kaplan Stella Kim Melody Koh Brendan Kohler Tarikh Korula Karen Korula-Young Jeremy Kriegel Pascal Kriesche Elizabeth Lawler Sarah Leary Elise Lelon Rebecca Liebman Jennifer Lum Nate Maslak Bob Mason Devon McDonald
Jennifer Neundorfer Eric Paley Andrew Payne Melissa Perri Mark Pincus Amira Pollack James Psota Vinayak Ranade Christina Raymond Jeffrey Rayport Caty Rea Carlos Reines Laura Rippy Mark Roberge Bryce Roberts Brendan Schwartz Javier Segovia Shereen Shermak Caroline Sherman Nancy Tao Go Satish Tadikonda Jill Ward Christina Wing Peggy Yu
Finally, it takes a village to pull off a program of this intensity: months and many hours of planning, then an average of twelve hours a day over eight straight days to orchestrate. Hats off to my co-instructors, Allison Mnookin and Martin Sinozich, for being great collaborators; to Jacey Taft and Sneha Pham for their tireless support through many twists and turns; and to our outstanding Teaching Assistants – now second-year MBA students and Bootcamp alumni – Gaby Goldstein, Jad Esber & Ollie Osunkunle. Best team ever!
Interested in learning more? Listen to what our students and faculty have said about the program in this video and check out our Instagram page!
As an entrepreneur, how confident are you that you fully understand your customer’s pain points and/or job to be done? When I first meet an entrepreneur, they tend to start selling me on their solution before explaining the problem they are trying to solve. I typically see or hear little evidence that they’ve done true discovery work to validate the problem or their target customers. While gut feel or personal experience can be a strong signal that there’s a problem worth solving, without proper product discovery work you won’t truly know whether you have a winning solution.
For those who profess they have done proper discovery work and validated the problem but don’t yet have a product, my follow-on question is “How do you know people or companies will use your product?” More often than not, I get examples of interest tests, such as hits on social media posts or answers to surveys so biased it’s hard to trust the results. Further, they may have a good hunch there’s a job to be done that needs improving or replacing, but they cannot describe where in the customer journey they can truly make an impact.
I’m a big fan of confident founders who are passionate about their ideas, but a little humility and a lot of discovery work can determine whether there’s a winning solution and save a lot of the time and money otherwise wasted on building the wrong thing. If fundraising is also a consideration, having real data rather than gut feelings and biased test results can be the difference between a modest angel round and a strongly led seed or A round.
To that end, a few tips…
Interest vs. Problem Testing
“We had 1000 clicks on our Facebook ad in the first 48 hours”,
“Our conversion rate from click to sign-up was 50%”, OR
“We interviewed a bunch of people and they said they’d use our product if we built it”
When I hear these types of quotes early in a product’s lifecycle, I do a mental facepalm. They suggest the founders may have found an audience interested enough to click on an ad and hand over email addresses, but they still have not proven anything about the actual usefulness of the product or that it solves a real pain point their target customers are willing to pay to fix. These tests are fine to do, but they should not be the only way you validate problems to solve. If you plan to run interest tests, consider these approaches:
Social Media: Great for finding your audience. These tests should be run on multiple platforms and carefully crafted to answer only 1–2 hypotheses, commonly “Is this where our audience is if we want to market to them at some point?” and “Are they interested enough to click and learn more?” These tests can be expensive, so be thoughtful about where and when to run them (e.g., if you’re building a product for teens, test on Instagram or Snapchat, where they are, rather than Facebook).
Landing Pages: The best way to capture interest, email addresses and demographic data. If a prospect found you through a social media test or by googling, then:
a) you’ve proven they were interested enough to learn more,
b) that your SEO works and they found you; and/or
c) that they trust you or care enough about the problem you wish to solve that they will give you insight into who they are.
These future customers are great targets for problem testing and could be your early adopters. Be careful, though: early adopters are great for testing, but they don’t guarantee a chasm-crossing to the mainstream. That, too, must be validated.
Surveys: Surveys are very hard to do right and often capture a lot of random, subjective information instead of real data to inform your product. We have a tendency to think, “while we have them, let’s ask them everything!” Great surveys are:
ten questions or fewer;
reflective in nature (e.g., “How many times did you buy X in the last month?”) and very data-centric (e.g., “How often do you order takeout for dinner?”). Reflective questions should offer ranges to choose from that neither sway the prospect nor suggest there’s a right answer; and
capture only the basic demographic information relevant to the questions at hand (e.g., don’t ask age, gender or income unless that’s specifically something you need to know about your audience). As long as you have contact information, you can always follow up for more demographic data if needed.
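Once range-based answers come back, tally them into a distribution rather than eyeballing free-text responses. Here is a minimal Python sketch with entirely hypothetical response data for the takeout question above:

```python
from collections import Counter

# Hypothetical answers to a reflective, range-based survey question:
# "How many times did you order takeout for dinner in the last month?"
BUCKETS = ["0", "1-2", "3-5", "6+"]  # ranges, so no answer feels "right"

responses = ["1-2", "3-5", "1-2", "0", "6+", "3-5", "1-2", "1-2", "0", "3-5"]

counts = Counter(responses)
total = len(responses)
for bucket in BUCKETS:
    share = counts[bucket] / total
    print(f"{bucket:>4}: {counts[bucket]:2d} ({share:.0%})")
```

With real responses in place of the hard-coded list, the same few lines give you an honest read on where your prospects cluster before you draw any conclusions.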
More important than interest tests early on are tests that validate there is a problem worth solving and where exactly a product can be most successful in solving it. Validating hypotheses about the problem through a variety of methods will lead to a far better outcome than clicks on a Facebook ad. The more ways you learn about your target customers and discover where the problems are, the more likely you are to get on the right path to solution building. This process takes multiple iterations and approaches to arrive at a minimum viable product (MVP) that begins to address the issue.
Consider trying these different types of problem validation tests in your discovery process:
Interviews: Similar to surveys, interviews are as much an art as they are science. It is incredibly easy to lead a witness, bias answers and hear what we want to hear in an interview. The best guide for conducting a proper discovery interview is Rob Fitzpatrick’s book “The Mom Test”, which I encourage every entrepreneur and product manager I work with to read. A few key takeaways:
PRIORITY: Talk with strangers! If your interview subject is a friend, a family member or a member of an affinity group (e.g., a student or alum at your school), you bias the conversation. They are more likely to tell you what you want to hear and validate your idea than to give truly objective answers. If you’re not comfortable talking with strangers, don’t conduct the interviews yourself; hire an independent consultant or ask a friend to do it for you.
Write a script and be clear about what hypotheses you are trying to validate before the interview. Sticking to a script ensures a clean comparison of results after interviews.
Start by setting the stage: you are learning from them, not selling them on your idea; no answer is a wrong one; and set a time expectation — 30–45 minutes is ideal. Always end by thanking them, asking if you can follow up AND asking if there’s anyone else they suggest you speak with about the topic.
Always ask open-ended questions — e.g., “Tell me about the last time you…”
Always have someone serve as observer and notetaker, not just to capture what’s being said, but to watch for body language, expressions and any other “tells” about the problem you are trying to learn about.
Do more listening than talking — you’re there to learn from them, not sell to them.
Unsure what they were explaining, or want to reframe their response into hard data? Echo it back and see where that leads them. Ex: “So what you’re saying is, you usually eat out twice a week?”
Always record the session — most interviewees will not mind being audio or video recorded (the latter is better), especially if you assure them it won’t be shared outside of your team.
Ethnography: Observing prospects performing the job you hope to improve/replace can be extremely insightful. You may see hacks they would never tell you about in an interview or discover there’s a whole new set of problems in their process that you had no idea existed.
Emotional journaling or mapping: Having a prospect journal or map out their process and highlight how they feel along the way can pinpoint exactly where they are most frustrated. This is also a great technique when you cannot observe the prospect in the setting where the problem exists: ask them to journal or map the process and send you the result within a set period of time.
Journey mapping: Bringing together all your discovery work to identify where you found patterns of highs and lows. These may surprise you: the step you hypothesized held the most pain may turn out to be somewhere completely different.
(Don’t do) Focus Groups: I am generally not a fan of this form of discovery. It lends itself to groupthink and can produce false results. Focus groups can be useful later in the product cycle, when you want reactions to branding or want to observe groups of people using your product if it’s a tangible item.
Prototype Testing
The best way to validate that a problem exists is to insert yourself into the process and learn by doing. These tests lean towards solution building, but the idea is to run tests without building anything, or building very little, to get clarity on the problem and the customer. The most common forms of these tests are:
Lo-Fidelity Concierge Testing: Jump right in and assume part of the role your product might fill in the future. If you were building a new restaurant reservation system, this might involve taking a phone call from the party needing a reservation, doing the actual booking for them and perhaps texting them to confirm. By acting as the intermediary, you are fully embedded in the process and come to understand all sides of the problem. The key to the success of these early tests is resisting the temptation to correct your customer or the other players; just go with whatever they do so you experience the process. You can tweak things as you learn what works and what doesn’t along the way.
Wizard of Oz (WoZ) Testing: Unlike a concierge test, which is transparent (prospective customers know you are part of the solution), a WoZ test lets you intervene without the customer knowing you’re doing work behind the scenes. This is usually done with a prototype of some sort that the user interfaces with, backed by manual labor the user doesn’t see. For example, in its early days Uber used a dispatch team to direct drivers to pick up customers and text customers about arrival times, before it had complex algorithms and a driver app.
Physical Prototypes or Competitive analogs: If you are building something non-digital that could be expensive to manufacture before you test, there are several creative ways to do discovery early on.
Prototypes: Small runs of your future product, versions handcrafted with freelancers (3D printing, sewing) or even a pop-up restaurant are ways to get your concept tested and gather feedback on its use before spending too much money. One of my favorite examples is a former student’s idea for a smoothie-making machine for offices. Before he ever built the machine, he started making smoothies in offices just to see what employees liked, how visual aids helped (fresh fruit nearby implying a fresh product) and whether add-ins like chia seeds or protein powder made his smoothies more appealing. Not only did he learn which flavors were most popular so he could focus his MVP, but he also gained a lot of insight into the operations of small- to medium-sized businesses, how much of a footprint he’d need for machines, maintenance requirements, etc. It was an invaluable experience for this entrepreneur.
Competitive analogs: Having target customers use other, similar products can be as telling as having them use the product you hope to create. Using a tool like UserTesting to watch a prospect walk through a current competitive product can be very insightful, as can having target customers use a competitive product for a week or two. Just be careful not to start designing your product around what these other products have or don’t have. The goal is to understand how prospects interact with those products today — it’s not to reach feature parity.
Expert Testing: Sometimes, you are working in an area where you may not be an expert, but you have a hunch it’s a white space ripe for disruption. If you don’t have access to the experts or their customers, find or create a space for them to connect and observe through their experiences. This could be as simple as finding them on Quora or Reddit and looking at threads of questions that are related to what you’re exploring. You could create a forum for them to chat if none exists (e.g., an affinity group Slack workspace or Meetup) or even create an event to gather the right people. Another one of my former students got her start with ElektraLabs by creating an event which not only informed much of the early product, but also connected her with experts who went on to both advise her company and evangelize her product.
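A Wizard-of-Oz backend can start as nothing more than a queue that a human works through by hand. Here is a minimal Python sketch of the pattern; the reservation example and all names are illustrative, not a real system:

```python
import time

# Minimal Wizard-of-Oz sketch (hypothetical): the "product" looks automated
# to the user, but every request just lands in a queue for a human to fulfill.
QUEUE = []  # in a real test this might be a shared spreadsheet or group chat

def request_reservation(name, party_size, when):
    """Accept a reservation request; a founder will actually phone it in."""
    ticket = {"name": name, "party_size": party_size, "when": when,
              "received": time.time(), "status": "pending-human"}
    QUEUE.append(ticket)
    # The user sees an instant, product-like response either way.
    return f"Got it, {name}! We'll text you a confirmation shortly."

print(request_reservation("Alex", 4, "Friday 7pm"))
print(f"{len(QUEUE)} ticket(s) waiting for the human 'wizard'")
```

The point is that the interface promises automation while the fulfillment stays manual, which lets you learn what users actually need before writing any real algorithms.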
A Few More Best Practices
All of the above tests should be explored whenever you are validating problems and target customers. Try many, and do them often. Testing never stops! Here are a few more things to consider when designing your tests:
Eliminate Bias: I can’t emphasize enough how important it is to have as objective a test as possible. This means not asking your friends, co-workers or parents to participate. Find total strangers who can give you honest and authentic feedback.
The Rule of 5: If you keep your criteria very tight — whom you are asking and exactly what you are testing — you need not run more than five tests before you know where you are trending. But limit your variables per test (see next bullet).
Limiting Variables: The Rule of 5 only works if every test is limited to a couple of key questions you want answered. The more variables in a test, the harder it is to discern what influenced an outcome. For example, if you are testing whether women ages 18–20 versus women 30–35 have a problem finding a great yoga class, design one test that is identical in every way and run it with five people from each audience. Similarly, limit variables in prototype tests: in the smoothie example above, when the founder tested add-ins at one site, that was the only variable he changed; every other aspect of the test, including the site itself, remained the same.
Breadth of Demographics: You may be designing a product that you believe everyone in the world will need, OR one you believe only a single target audience needs. Gender, income level, geography, etc. may or may not affect adoption, but you won’t know until you parse things out early on and test a few segments. How a 13-year-old uses a product may be completely different from how a 45-year-old does (Facebook is a great example of this). And if you don’t test different demographics, you may miss an audience that needs your product most.
Measured Outcomes: Start with a hypothesis of what will happen in each test, ideally in measurable outcomes such as the percentage of people who accept a restaurant recommendation or the number of smoothie customers who want an add-in versus those who do not. Decide in advance what success looks like. If your outcomes vary, consider whether the test was valid and whether the learning warrants further testing or abandonment of the idea. In the smoothie case, the founder hypothesized that his target customers would want 5–6 flavor combinations, but found only 2–3 were popular, so he limited the flavor options in his MVP.
Leverage Existing Technology: Finally, in today’s highly tech-enabled world there are a number of ways to engage your target customers using what’s already out there before building anything yourself:
Typeform, Google Forms, etc. can capture form data
Online payments can be simulated using Venmo
Texting can simulate alerts and notifications
High-fidelity web prototypes can be built with Figma, Sketch, InVision, etc.
3D-printed mockups and scrappy handcrafted prototypes made from supplies you can buy online
Another former student of mine, with a software engineering background, resisted the temptation to code a solution and instead created a WoZ test by cobbling together SoundCloud, Dropbox, texting and a high-fidelity mock front-end. Once she had watched dozens of people use this setup and understood what they needed, she officially built and launched the product.
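The Rule of 5 and measured-outcomes practices above can be captured in a few lines of code. The cohorts, counts and threshold below are hypothetical, purely to illustrate the shape of a disciplined, single-variable test:

```python
# Hypothetical single-variable test: the cohort is the only thing that changes.
# Each cell holds exactly five tightly screened subjects (the "Rule of 5").
results = {
    "women 18-20": [True, True, False, True, True],   # reported the problem?
    "women 30-35": [False, True, False, False, True],
}

SUCCESS_THRESHOLD = 0.6  # defined BEFORE the test, per "Measured Outcomes"

for label, answers in results.items():
    assert len(answers) == 5, "keep each cell to five subjects"
    rate = sum(answers) / len(answers)
    verdict = "validated" if rate >= SUCCESS_THRESHOLD else "not validated"
    print(f"{label}: {rate:.0%} reported the problem -> {verdict}")
```

Writing the threshold down before running the test is the whole discipline: it keeps you from moving the goalposts once you see the numbers.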
Test Early, Test Often!
With all the options available, there is no excuse for weak validation of problems and target customers early in your product development process. One test, or even a few, does not qualify a product as marketable or fundable. The more objective tests you run up front, and the more you iterate on them, the higher the likelihood you’ll land on a great solution that people want to use and buy.
This blog post is largely inspired by my course, PM101, at Harvard Business School, where we focus most of the semester on best practices for discovery. I have open-sourced the syllabus for the course here.