A major problem I have with traditional analysis and hypothesis-driven experimentation is that, far too often, the approach is backward-looking and doesn't provide the agility needed to pivot in fast-moving industries like mobile gaming. This is particularly a problem in the management of live products, where there is short-term urgency to serve a set of existing users.
A Common Scenario
For example, if I introduce a new feature that I expect to lift veteran user return rates, it may take 4+ months to do the necessary due diligence, brainstorm, design, develop, and then QA that feature. Realistically, though, even with our best minds at work on it, there is never a 100% chance of success. Failure happens for many reasons: incorrect hypotheses about the motivations of our users, misleading feedback, executive mismanagement, or simply mishaps in our own execution.
Consecutive failures have compounding consequences. It may take several more months to iterate again on the feature. Meanwhile…time passes…the product ages without a clear improvement. The user base shrinks. It's only a matter of time before there aren't sufficient users and profits coming from the game to justify continued investment.
How do you avoid this? Here is a quick summary of strategies I’ve seen employed by different teams on various products.
|            | Long-term Feedback            | Short-term Feedback            |
|------------|-------------------------------|--------------------------------|
| Objective  | A/B testing in different geos | Consumer insights              |
| Subjective | Telemetry analysis            | Product owner making decisions |
As I mentioned, traditional telemetry analysis has several shortcomings: it's too slow, and in my experience it's more of a lagging indicator, since the data only shows the end result of users disengaging from the product without a concrete answer to why they are leaving. We can hypothesize reasons and try to fix the leak, but all too often bad hypotheses are mixed in with the good, and you can't tell them apart until after you've spent months deploying fixes and have failed to reverse the trend.
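To make the "lagging indicator" point concrete, here is a minimal sketch of the kind of cohort view telemetry gives you. The cohort labels and counts are entirely hypothetical; the point is that the metric tells you return rates are sliding, not why.

```python
# Hypothetical monthly cohorts of veteran users: how many were active,
# and how many returned within 30 days. All numbers are illustrative.
cohorts = {
    "2023-01": {"active": 1200, "returned_d30": 420},
    "2023-02": {"active": 1100, "returned_d30": 350},
    "2023-03": {"active": 1000, "returned_d30": 290},
}

def d30_return_rate(cohort):
    """Share of a cohort's active veterans who came back within 30 days."""
    return cohort["returned_d30"] / cohort["active"]

# A declining trend like this confirms *that* users are disengaging,
# but contains nothing about the underlying cause.
for month, cohort in sorted(cohorts.items()):
    print(month, round(d30_return_rate(cohort), 3))
```

By the time this trend is visible, the damage is months old, which is exactly the feedback-loop problem described above.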
A/B testing provides much more targeted feedback, but it requires a ton of resources and a lot of runway to adequately test variations over a sufficient period of time. You also risk trouble with existing customers on a mature product, since customers generally don't like being experimented on (e.g., Amazon's pricing tests). It's particularly dangerous with large-scale consumer products, since the internet is an open information environment and customers, regardless of geographic base, will share information in forums or in the news.
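The "lot of runway" claim is easy to quantify with the standard two-proportion sample-size formula. The baseline return rate and the lift below are hypothetical; the z-values correspond to a conventional 95% confidence level and 80% power.

```python
import math

def ab_sample_size(baseline, mde):
    """Approximate users needed *per arm* to detect an absolute lift
    `mde` over a baseline conversion rate, at 95% confidence (two-sided)
    and 80% power, using the standard two-proportion formula."""
    z_alpha = 1.96  # two-sided z for alpha = 0.05
    z_beta = 0.84   # z for 80% power
    p_bar = baseline + mde / 2  # pooled rate under H0
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(baseline * (1 - baseline)
                                      + (baseline + mde) * (1 - baseline - mde)))
    return math.ceil(numerator ** 2 / mde ** 2)

# Hypothetical example: a 30% baseline return rate and a hoped-for
# 2-point absolute lift demand thousands of users in each arm.
print(ab_sample_size(0.30, 0.02))
```

Small expected lifts push the per-arm sample into the thousands, and you still have to hold the test open long enough for those users to flow through, which is where the runway goes.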
Using product owners. This is not a particularly scalable strategy, but something I've seen work in practice is leveraging a gifted product visionary to shepherd feature improvements, ignoring data and consumer feedback outright to make fast decisions based on gut feel. This may be slightly more prevalent in interactive media due to the creative nature of the industry. These individuals usually have what we call product sense, a natural intuition (built from personal experience, luck, good timing, or whatnot) for what users want, and can comfortably call out that something sucks and needs to be fixed. But not everyone possesses that level of authority, which is why I've mostly seen only founder/CEO types pull this off. It's certainly not easy if you are a product manager joining a new team with no pre-existing credibility. I am not a huge fan of this approach because ultimately it is deeply subjective, and all it takes is one monumental failure before your ass is out the door.
The fourth solution, which I'd like to explore further, is using consumer insights as an additional tool to bridge the gap between hypothesis-driven design and telemetry analysis, driving faster feedback loops. Even though it's qualitative, it can still serve as a useful cross-reference point to correct course in case we are meandering down the wrong path. The holy grail here, of course, is a process that increases our probability of market success in a scalable and repeatable way.
In the ideal use case, the team first identifies the KPI targets the product needs to hit. Then we brainstorm a list of improvements that could move the numbers sufficiently. We do some funnel analysis using comps, or have deep discussions, to prioritize these features, and then start production. Consumer insights comes in as the bridge once production has started but before the feature is released into a live environment, letting us collect qualitative data and judge whether we are moving in the right direction.
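The funnel-analysis step above can be sketched in a few lines. The step names and counts below are hypothetical comps, not real product data; the idea is simply that the step with the worst conversion is a candidate for prioritization.

```python
# Hypothetical funnel counts from a comparable title ("comps").
funnel = [
    ("install", 100_000),
    ("tutorial_complete", 62_000),
    ("day_7_return", 24_000),
    ("first_purchase", 3_100),
]

def step_conversions(funnel):
    """Conversion rate between each consecutive pair of funnel steps."""
    return [
        (f"{prev_name} -> {name}", count / prev_count)
        for (prev_name, prev_count), (name, count) in zip(funnel, funnel[1:])
    ]

# The step with the lowest conversion is the leak worth attacking first.
worst = min(step_conversions(funnel), key=lambda pair: pair[1])
print(worst)
```

This only ranks where the leak is; consumer insights then supplies the qualitative "why" before the fix ships to a live environment.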
In the next post I’ll talk more about how consumer insights can be leveraged in new product development.