How to Run a Weekly Numbers Review: A Diagnostic Framework for Live Games
This companion to my Data as a North Star series translates strategy into execution: a practical, operational framework for diagnosing weekly revenue shifts in live games. While Expected Outcomes help you define success before you build, a Numbers Review helps you understand what’s working after you ship. Together, they form the heartbeat of a data-informed live service team.
Why This Framework Matters
Running a numbers review is about more than charts and deltas. It’s a ritual that builds alignment, trust, and shared decision making across your team. By reviewing results together, PMs and analysts learn to see the same story in the data and respond with clarity rather than panic.
This process has roots in my Zynga days, where daily and weekly numbers reviews were an essential discipline for understanding live game performance. I’ve carried and refined this best practice across every company since, applying it as a cornerstone method for quickly diagnosing and isolating issues in game health.
A well-run review builds rhythm: it turns data into conversation, conversation into clarity, and clarity into action. Over time, it becomes the single most powerful operational habit your team can develop for long-term stability and growth.
For readers who want the detailed diagnostic playbook, including example funnel tables and metrics, a downloadable version of the full framework is available. This post focuses on why reviews matter and how to run them so they strengthen your team’s ability to interpret data and act decisively.
For additional background on KPI fundamentals, see Google Play’s KPI Guide for Apps and Games.
How to Run a Numbers Review
1. Start with context. Begin every session with the topline story: current revenue trend versus target, recent content releases, and any operational changes that could have influenced results. The goal is to give everyone the same starting point.
2. Work top down. Start with revenue and break it into its building blocks: DAU × ARPDAU (daily active users times average revenue per daily active user). Once the driver of the change is clear, assign owners to dig deeper into the engagement, conversion, or monetization layers. A short sketch of this breakdown follows the list.
3. Focus on hypotheses, not blame. The review should uncover why something changed, not who caused it. Encourage open exploration and frame findings as shared opportunities for improvement. Each hypothesis should come with a specific action item or validation step, whether that’s additional testing or supporting data.
4. End with actions. Every review should generate a short list of next steps with clear owners and dates. Follow-through is key: keep iterating and diagnosing until a test produces a result or a fix resolves the issue. Which areas need follow-up analysis? Which levers will we test next? Leave with ownership, not just insight.
5. Keep it lightweight. The best reviews last under an hour. They’re not for endless data debates. They’re for pattern recognition, accountability, and alignment.
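To make step 2 concrete, here is a minimal Python sketch of the DAU × ARPDAU breakdown. It assumes you can already pull weekly DAU and revenue figures from your own analytics store; the function name and the example numbers are hypothetical, not from the post. The arithmetic shows how a week-over-week revenue delta can be attributed to an audience change versus a monetization change before owners dig into deeper layers.

```python
# A minimal sketch of the step 2 top-down breakdown, assuming weekly DAU and
# ARPDAU totals are available from your analytics store. Figures below are
# illustrative only.

def decompose_revenue_change(dau_prev, arpdau_prev, dau_curr, arpdau_curr):
    """Attribute a week-over-week revenue change to its DAU and ARPDAU drivers.

    Revenue = DAU x ARPDAU, so the delta can be split into the part explained
    by the audience change, the part explained by the monetization change,
    and a small interaction term when both move at once.
    """
    rev_prev = dau_prev * arpdau_prev
    rev_curr = dau_curr * arpdau_curr
    delta = rev_curr - rev_prev

    dau_effect = (dau_curr - dau_prev) * arpdau_prev        # audience-driven change
    arpdau_effect = (arpdau_curr - arpdau_prev) * dau_prev  # monetization-driven change
    interaction = delta - dau_effect - arpdau_effect        # both moving together

    return {
        "revenue_prev": rev_prev,
        "revenue_curr": rev_curr,
        "delta": delta,
        "dau_effect": dau_effect,
        "arpdau_effect": arpdau_effect,
        "interaction": interaction,
    }


if __name__ == "__main__":
    # Hypothetical week-over-week numbers: DAU dipped while ARPDAU held roughly flat,
    # so the review would assign the engagement owner to dig in first.
    result = decompose_revenue_change(
        dau_prev=500_000, arpdau_prev=0.12,
        dau_curr=470_000, arpdau_curr=0.121,
    )
    for key, value in result.items():
        print(f"{key}: {value:,.2f}")
```

Whichever effect dominates tells you which layer (engagement, conversion, or monetization) gets an owner and a follow-up before the next review.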