Introduction
Launching a new product often brings a sense of both anticipation and anxiety. There’s the rush of creation, the excitement of unveiling something new, and the ever-present question: will anyone care? Despite best efforts to design, plan, and strategise, bringing a product to market remains an uncertain exercise. The old adage “Half the money I spend on marketing is wasted; the trouble is, I don't know which half” captures the problem perfectly. It highlights how hard it is to decide which work to progress, and why so many organisations pour time and resources into 'hope' rather than into discovery activities.
There’s a better way to approach product launches, though, and honestly, it's not that difficult. Instead of treating a launch as a single, make-or-break event, teams can use it as an opportunity for structured experimentation. By combining customer insights, lightweight planning, and tools like Strategyzer’s Test and Learning Cards, product teams can turn that uncertainty into systematic learning.
Understanding Customer Journeys
One of the biggest mistakes product teams make is to assume too much, too early. Imaginary user personas are matched to journey maps built on so many assumptions that they are mostly worthless (even if they look good). Perhaps worse, these assumptions harden into learned truth over time, with important decisions built on top of them. Sadly, when the product meets a real user, those assumed narratives frequently fall apart.
Effective go-to-market (GTM) strategies begin by acknowledging that most of our ideas are hypotheses; calling them anything other than targeted guesses is dishonest. To test them, we must start by understanding the actual experiences of potential customers. That means going beyond theoretical pain points and uncovering how people actually make decisions, avoid problems, or settle for imperfect workarounds. Humans rarely make decisions through logic or rationality; they are post-rational, i.e. they can explain why the 'right choice' is the best one, but in the moment they are influenced more by their emotions than by their logic. A practical way to build this understanding is through joint observation. Involve not just product managers but also developers, marketers, and support roles in customer interviews. Let everyone hear the same things and ask their own questions. As these observations accumulate, patterns emerge, and those patterns often tell a different story than the one imagined in the planning room.
These early-stage insights can inform not only product features but also positioning, messaging, and distribution. When a team is aligned around how customers experience the world, its ability to make good decisions under pressure improves dramatically. Last week, one of the teams I'm working with said, 'We're now starting to think of this like a product with customers, instead of just a code-base.' This mindset shift will impact them in ways they don't even realise yet.
Planning Using Sprints, Not Launch Dates
Most organisations still treat launches like one-off campaigns. Everything builds to a big day: press releases, email blasts, and a hope that traffic will follow. In reality, most successful launches resemble a rolling sequence of events, each one designed to learn more, test assumptions, and refine the offer. These campaigns acknowledge the simple truth that we don't know anything for sure until the market tests it. Sprint thinking applies quite naturally to go-to-market planning: a two-week Sprint might focus on validating a specific message, testing a pricing model, or refining a sign-up process. Instead of trying to optimise everything at once, each Sprint becomes a focused experiment with a rapid release.
This approach allows teams to break down big goals into manageable pieces. For example, Sprint Goals could include:
Publish a targeted landing page and attract 100 unique visitors.
Run a small paid ad campaign with two variant headlines to determine which is more effective at generating sales conversions.
Engage directly with a group of trial users, observing the onboarding process to identify the three biggest pain points.
At the end of the Sprint, the team can assess what was learned. Did one headline perform better? Did users drop off at a specific point? These learnings then shape the next Sprint’s focus. Launching in this way becomes a continuous process of refinement and discovery, not a single dramatic event.
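As a minimal sketch, this Sprint-as-experiment loop can be captured in a lightweight structure like the one below. The goal names, metrics, and numbers are hypothetical illustrations, not a prescribed format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SprintExperiment:
    """One go-to-market Sprint framed as a focused experiment."""
    goal: str                       # what we are trying to learn
    metric: str                     # what we will measure
    target: float                   # success threshold
    result: Optional[float] = None  # filled in at the Sprint Review

    def succeeded(self) -> bool:
        """True once the measured result meets or beats the target."""
        return self.result is not None and self.result >= self.target

# Hypothetical Sprint Goals, echoing the list above
backlog = [
    SprintExperiment("Landing page reaches new users", "unique visitors", 100),
    SprintExperiment("Headline A/B test lifts conversions", "conversion uplift %", 10),
]

backlog[0].result = 127  # example measurement captured at the Sprint Review
print(backlog[0].succeeded())  # True: 127 >= 100
```

Keeping the target explicit before the Sprint starts is the point: it forces the team to decide in advance what "learned enough" looks like, rather than rationalising whatever happens.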
Using Hypothesis and Learning Cards
Central to this experimental approach is the idea of validating assumptions. Strategyzer’s Test Card is a particularly useful tool here. It encourages teams to frame their ideas as clear, falsifiable hypotheses. A basic Test Card asks four key questions:
What do we believe?
How will we test it?
What will we measure?
What’s our success criterion?
For example:
Belief: Positioning the product as a time-saver will drive more conversions than a cost-saver message.
Test: A/B test two landing pages with identical content except for the headline.
Measure: Click-through rate to the sign-up page.
Success: The time-saver headline gets at least 30% more clicks.
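To make the success criterion concrete, here is a minimal sketch of how that measurement might be evaluated; the visitor and click counts are invented for illustration:

```python
def ctr(clicks: int, visitors: int) -> float:
    """Click-through rate as a fraction of visitors who clicked."""
    return clicks / visitors

def uplift(variant_ctr: float, control_ctr: float) -> float:
    """Relative improvement of the variant over the control."""
    return (variant_ctr - control_ctr) / control_ctr

# Hypothetical results: cost-saver (control) vs time-saver (variant)
control = ctr(clicks=40, visitors=1000)  # 4.0% CTR
variant = ctr(clicks=56, visitors=1000)  # 5.6% CTR

observed = uplift(variant, control)      # 0.40, i.e. 40% more clicks
print(observed >= 0.30)                  # True: meets the 30% criterion
```

With samples this small, the observed uplift could still be noise, so in practice a team would pair this arithmetic with a significance check (for example, a two-proportion z-test) before declaring the belief validated.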
Once the test is complete, the team uses a Learning Card to capture the outcome. Did the results confirm or challenge the original belief? What new questions have emerged? This structured reflection ensures that each experiment contributes to a growing body of insight, not just a series of disconnected efforts.
By making learning visible, the team reduces guesswork and builds evidence that can inform future product decisions, marketing strategies, and sales conversations.
Focus on Metrics That Matter
Marketing teams often rely on vanity metrics because they are easy to access and feel good to report. Pageviews, likes, and email open rates can indicate activity, but they rarely reflect real engagement or value delivery.
Instead, teams should prioritise metrics that speak to actual product traction. These include:
Retention: Are users coming back after their first interaction?
Time to value: How long does it take for a user to experience a core benefit?
Referrals: Are people recommending the product without prompting?
Revenue per customer: Is the product viable beyond initial interest?
These metrics offer a more honest picture of what is working and what is not. They also tie more closely to the business outcomes that matter, such as growth, sustainability, and customer satisfaction. Measuring what matters also means acknowledging what you don’t yet know. Teams should be comfortable saying, “We’re not sure why this is happening,” as long as they are equally committed to finding out.
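As an illustration, a simple week-one retention figure, the share of users who come back seven or more days after first use, can be computed from a plain activity log. The user IDs and dates below are invented:

```python
from datetime import date

# Hypothetical event log: user id -> dates on which the user was active
activity = {
    "u1": [date(2024, 5, 1), date(2024, 5, 9)],
    "u2": [date(2024, 5, 1)],
    "u3": [date(2024, 5, 2), date(2024, 5, 10)],
}

def week1_retention(activity: dict) -> float:
    """Fraction of users active again 7+ days after their first visit."""
    retained = sum(
        1 for dates in activity.values()
        if any((d - min(dates)).days >= 7 for d in dates)
    )
    return retained / len(activity)

print(week1_retention(activity))  # u1 and u3 returned -> 2/3
```

Even a rough number like this says more about traction than pageviews do, because it measures whether the product earned a second visit.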
Conclusion
Bringing a new product to market is rarely straightforward. It involves risk, speculation, and constant adjustment. But rather than seeing this uncertainty as a threat, product teams can embrace it as a learning opportunity.
By treating each step of the go-to-market process as an experiment, teams can build clarity over time. Tools like Strategyzer’s Test and Learning Cards make this approach repeatable. They help teams move beyond gut instinct, reduce wasted effort, and generate useful knowledge about customers, markets, and value.
The goal is not to eliminate uncertainty because that's unachievable. But with the right mindset and mechanisms in place, teams can make smarter decisions, sooner, and waste a lot less than half their marketing budget in the process.
References
Strategyzer. “Validate Your Ideas with the Test Card.” Available at: https://www.strategyzer.com/library/validate-your-ideas-with-the-test-card
Strategyzer. “Capture (Customer) Insights and Actions with the Learning Card.” Available at: https://www.strategyzer.com/library/capture-customer-insights-and-actions-with-the-learning-card
Mary Iqbal. “The Most Valuable Thing to Do Next.” Scrum.org. https://www.scrum.org/resources/blog/most-valuable-thing-do-next
Sanjay Saini. “Let’s Play the Game of Scrum.” Scrum.org. https://www.scrum.org/resources/blog/lets-play-game-scrum