Most small teams do weekly check-ins on their numbers and call that analytics. The bigger picture — the one that shifts strategy — gets lost in the weekly noise. That’s where a quarterly analytics review earns its keep. Done right, ninety minutes every three months is enough to catch drift, reset priorities, and retire metrics that stopped mattering. Done wrong, it becomes another status meeting nobody needed.
In my experience, the teams that resist quarterly reviews are the ones who most need them. They’re too busy running daily numbers to notice that the whole direction has quietly shifted. This guide gives you a routine you can run without a data team, an analyst, or a paid tool — just a calendar, a spreadsheet, and one solid ninety-minute block.

Why Weekly Reviews Aren’t Enough
Weekly reviews catch operational problems. A drop in signups, a conversion spike, a channel that misfired. They’re tactical, fast, and small in scope. That’s their strength and their limit. They can’t catch slow-moving problems — a metric definition that drifted, an assumption that stopped being true, a dashboard tile that’s been meaningless for two months.
A quarter gives you the altitude to see the whole picture. Three months of weekly data becomes a trend. Three months of decisions become a pattern. That altitude lets you ask questions a weekly meeting can’t: “Is this metric still the right one?” or “Did we actually do what we said we’d do last quarter?”
The Ninety-Minute Agenda
Keep the review tight. Every extra minute dilutes the impact. Here’s the structure I’ve used with every small team I’ve advised:
| Block | Purpose | Time |
|---|---|---|
| 1. Metric sanity check | Are the numbers still defined and collected correctly? | 15 min |
| 2. Trend review | What moved, what didn’t, what’s the shape? | 20 min |
| 3. Assumption audit | Which assumptions from last quarter still hold? | 15 min |
| 4. Metric pruning | What to add, retire, or redefine | 15 min |
| 5. Decisions and next quarter | Top three priorities for the next 90 days | 25 min |
Five blocks, no padding. If a block runs long, cut it and move on. The discipline of finishing on time matters more than perfect coverage of any single topic.
Block 1: Metric Sanity Check (15 min)
Before you analyze any number, confirm the number is still correct. Broken tracking, failed automation, or silent definition drift can make a whole quarter of data meaningless. Spot the breakage early or you’ll waste the next 75 minutes analyzing noise.
- Pull the same query or report that defines each metric — does it still run?
- Spot-check one value against the source of truth (billing system, CRM, ad platform)
- Check that the metric definition documented in your plan still matches what’s in the dashboard
- Flag any metric where the source changed, a field was renamed, or the calculation shifted
Any flagged metric gets either fixed on the spot or removed from the review. Never include a broken metric in the trend discussion — the group will build theories on noise, and noise-driven theories persist long after they’re disproven.
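Much of this check can be scripted once and rerun every quarter. Here’s a minimal sketch, assuming your dashboard exports to a CSV of metric names and values and that you type in the source-of-truth numbers by hand during Block 1; the file name, column names, and the 2% tolerance are all illustrative, not prescriptive:

```python
import csv

# Tolerance for rounding differences between dashboard and source of truth.
# 2% is an assumption -- tighten it for billing numbers, loosen for estimates.
TOLERANCE = 0.02

def spot_check(dashboard_csv: str, source_values: dict[str, float]) -> list[str]:
    """Compare each dashboard metric against its source-of-truth value.

    Returns the names of metrics to flag (or fix) before the trend review.
    `source_values` maps metric name -> value pulled manually from billing,
    the CRM, or the ad platform during the meeting.
    """
    flagged = []
    with open(dashboard_csv, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: metric, value
            name, dash_value = row["metric"], float(row["value"])
            truth = source_values.get(name)
            if truth is None:
                flagged.append(f"{name}: no source-of-truth value recorded")
                continue
            denom = abs(truth) or 1.0  # avoid dividing by zero-valued metrics
            if abs(dash_value - truth) / denom > TOLERANCE:
                flagged.append(f"{name}: dashboard {dash_value} vs source {truth}")
    return flagged

if __name__ == "__main__":
    # Values typed in by hand from the actual systems during the check.
    truth = {"mrr": 18450.0, "trial_signups": 212.0, "trial_to_paid": 0.16}
    for issue in spot_check("dashboard_export.csv", truth):
        print("FLAG:", issue)
```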
Block 2: Trend Review (20 min)
Now look at the last 13 weeks for each surviving metric, not as a table of numbers but as a chart. The human eye catches trends and inflection points faster than columns of data. Line charts plotting the current quarter against the previous quarter’s baseline tell you what changed without anyone doing math.
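If the weekly numbers live in a spreadsheet, a short script can draw that overlay. A minimal sketch, assuming a CSV with one row per week (oldest first) and one column per metric; the file and column names are placeholders:

```python
import csv
import matplotlib.pyplot as plt

def load_weeks(path: str, metric: str) -> list[float]:
    """Read one metric's weekly values, oldest week first."""
    with open(path, newline="") as f:
        return [float(row[metric]) for row in csv.DictReader(f)]

# Assumes the file holds at least 26 weeks: previous quarter + current quarter.
values = load_weeks("weekly_metrics.csv", "trial_signups")
previous, current = values[-26:-13], values[-13:]

weeks = range(1, 14)
plt.plot(weeks, previous, linestyle="--", label="Previous quarter")
plt.plot(weeks, current, label="Current quarter")
plt.xlabel("Week of quarter")
plt.ylabel("Trial signups")
plt.title("Quarter-over-quarter trend")
plt.legend()
plt.show()
```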

For each metric, answer three questions:
- What’s the direction? (up, flat, down)
- What’s the magnitude? (small wobble, meaningful shift, dramatic move)
- What do we think caused it?
The third question is the one most teams skip. “We don’t know” is a valid answer, but it’s also a to-do: any trend you can’t explain becomes a research task for next quarter. And assuming you can always explain a trend is how narratives become unfalsifiable.
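The first two questions can even be pre-computed before the meeting, so the time in the room goes to the third. A rough sketch; the 3% “flat” and 15% “dramatic” thresholds are assumptions to tune against your own metrics’ volatility:

```python
def classify_trend(previous: list[float], current: list[float]) -> tuple[str, str]:
    """Label a metric's quarter-over-quarter direction and magnitude.

    Compares the current quarter's weekly average against the previous
    quarter's. Thresholds are illustrative defaults, not recommendations.
    """
    prev_avg = sum(previous) / len(previous)
    curr_avg = sum(current) / len(current)
    change = (curr_avg - prev_avg) / abs(prev_avg) if prev_avg else 0.0

    if abs(change) < 0.03:
        direction, magnitude = "flat", "small wobble"
    else:
        direction = "up" if change > 0 else "down"
        magnitude = "dramatic move" if abs(change) > 0.15 else "meaningful shift"
    return direction, magnitude

# Example: signups averaged 150/week last quarter, 171/week this quarter.
print(classify_trend([150.0] * 13, [171.0] * 13))  # ('up', 'meaningful shift')
```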
Related: The three revenue metrics every small team should track weekly covers the baseline metrics whose trends you’ll most likely be reviewing here.
Block 3: Assumption Audit (15 min)
At the start of last quarter, your team held a set of beliefs: “Our best channel is X,” “Customers who do Y retain better,” “Trial-to-paid above Z is healthy.” Some of those beliefs were tested by the data that came in. Some weren’t. The quarterly review is where you stop and ask: which are still true?
| Last Quarter’s Belief | What Data Says Now | Verdict |
|---|---|---|
| “Email is our highest-ROI channel” | Paid search now equivalent at scale | Update: tie |
| “Users who complete onboarding retain 2x” | Still holds in new cohort | Confirmed |
| “Free tier drives conversion” | Free-to-paid dropped 4% | Question — investigate |
| “We can’t scale paid ads profitably” | CAC stable at 2x spend in test | Flip — try scaling |
This exercise is the single most valuable fifteen minutes of the review. Most teams discover that at least one deeply held belief is out of date. Acting on that discovery, rather than on new data, is usually where the next quarter’s growth comes from.
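Keeping the beliefs in a small machine-readable ledger, rather than on a slide, makes the verdicts easy to carry into next quarter’s audit. A minimal sketch; the verdict labels mirror the table above, everything else is illustrative:

```python
from dataclasses import dataclass

VERDICTS = ("confirmed", "update", "question", "flip")

@dataclass
class Belief:
    statement: str  # the belief as stated at the start of last quarter
    evidence: str   # what this quarter's data says now
    verdict: str    # one of VERDICTS

    def __post_init__(self) -> None:
        if self.verdict not in VERDICTS:
            raise ValueError(f"unknown verdict: {self.verdict}")

ledger = [
    Belief("Email is our highest-ROI channel",
           "Paid search now equivalent at scale", "update"),
    Belief("Users who complete onboarding retain 2x",
           "Still holds in new cohort", "confirmed"),
]

# Anything not 'confirmed' becomes an input to Blocks 4 and 5.
for b in ledger:
    if b.verdict != "confirmed":
        print(f"REVISIT: {b.statement} -> {b.evidence} ({b.verdict})")
```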
Block 4: Metric Pruning (15 min)
Dashboards accumulate. Every launch adds tiles; few launches retire them. After a quarter, many metrics have stopped being useful but nobody noticed. This block is where you notice.

For each metric on the current dashboard, ask:
- Was any decision made based on this metric in the last 90 days? If no, retire or demote to context.
- Is the metric still measuring what we think? If the product changed, the metric might be stale.
- Is there a better version of this metric available now? Sometimes new data lets you replace a proxy with the real thing.
- Should we add a metric that’s currently missing? Gaps usually show up when assumptions fail in Block 3.
Aim to retire at least one metric per review. If nothing can be retired, your dashboard is probably already lean, which is a good problem to have, but verify you’re not hoarding.
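If you keep even a crude log of decisions and the metric that prompted each one, the first question becomes mechanical. A sketch under that assumption; the log format is an invented convention, not a standard tool:

```python
from datetime import date, timedelta

def pruning_candidates(dashboard: list[str],
                       decision_log: list[tuple[date, str]],
                       today: date) -> list[str]:
    """Return metrics with no logged decision in the last 90 days.

    `decision_log` holds (date, metric_name) pairs recorded whenever a
    decision cites a metric.
    """
    cutoff = today - timedelta(days=90)
    recently_used = {metric for d, metric in decision_log if d >= cutoff}
    return [m for m in dashboard if m not in recently_used]

log = [(date(2024, 5, 2), "trial_to_paid"), (date(2024, 6, 10), "mrr")]
dash = ["mrr", "trial_to_paid", "page_views", "social_followers"]
print(pruning_candidates(dash, log, date(2024, 7, 1)))
# ['page_views', 'social_followers'] -> retire or demote to context
```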
Related: The four questions every metric should answer before you build a dashboard gives you the filter that makes pruning decisions cleaner.
Block 5: Decisions and Next Quarter (25 min)
The review doesn’t end with analysis — it ends with commitments. Pick at most three priorities for the next 90 days, with owners and rough deadlines. Three is the maximum. Teams that pick six ship two.
- Write each priority as a measurable outcome — not an activity. “Increase trial-to-paid to 18%” beats “run onboarding experiments.”
- Assign one owner per priority — not a team.
- Set a 30-day check-in date — so priorities don’t silently drift for 90 days.
- Note what you’re not doing — deliberately killing a tempting-but-unrelated initiative.
The “what we’re not doing” section is counterintuitive but powerful. It prevents scope creep between quarterly reviews, and it forces the team to acknowledge tradeoffs. You’ll spend less time mid-quarter re-litigating priorities.
The Output: One Page
Write up the review in one page. Longer, and nobody reads it next quarter. Shorter, and you’ve skimmed past something important. Here’s the template:
- Headline trend: one sentence describing how the quarter went
- Key metric summary: a table showing last quarter and this quarter with deltas
- Beliefs confirmed, updated, and flipped: three bullets from the assumption audit
- Metrics retired and added: from the pruning block
- Top three priorities for next quarter: with owners and 30-day check-in dates
- Deliberately not doing: one or two items
Share it the same day. Don’t polish for a week — a rough summary out immediately beats a beautiful deck delayed. The energy to act on the review peaks in the 24 hours after it ends.
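If the review’s outputs already live in plain data structures, the one-pager can be generated in seconds, which makes same-day sharing easy. A minimal sketch; every name and value here is a placeholder:

```python
def one_pager(headline: str,
              metrics: dict[str, tuple[float, float]],
              priorities: list[tuple[str, str, str]]) -> str:
    """Render the quarterly one-pager as Markdown.

    `metrics` maps name -> (last quarter, this quarter); `priorities`
    holds (outcome, owner, 30-day check-in date) triples.
    """
    lines = [f"# Quarterly Review\n\n**Headline:** {headline}\n",
             "| Metric | Last Q | This Q | Delta |",
             "|---|---|---|---|"]
    for name, (prev, curr) in metrics.items():
        lines.append(f"| {name} | {prev:g} | {curr:g} | {curr - prev:+g} |")
    lines.append("\n## Top priorities")
    for outcome, owner, checkin in priorities:
        lines.append(f"- {outcome} (owner: {owner}, check-in: {checkin})")
    return "\n".join(lines)

print(one_pager(
    "Steady growth; onboarding belief confirmed, free-tier belief in question.",
    {"mrr": (17200, 18450), "trial_to_paid": (0.15, 0.16)},
    [("Increase trial-to-paid to 18%", "Sam", "2024-08-01")],
))
```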
Common Mistakes
- Inviting too many people. Three to five is optimal. Ten becomes a lecture.
- Skipping the sanity check. Guarantees at least one bad conclusion in the next hour.
- Not writing up the output. Guarantees the next review repeats the same insights.
- Carrying old priorities forward without review. A priority that didn’t ship in Q1 either needs to be re-scoped or killed, not auto-renewed.
- Treating the review as a status update. Status is weekly. Strategy is quarterly. Don’t confuse them.
For more on structuring decisions in small teams, the HBR guide to analytics-driven culture and the CIPS knowledge library both cover decision cadences for small organizations.
When You Should Skip a Review
Be honest: some quarters don’t need a full review. If you’re in the middle of a product pivot, between fundraising rounds, or otherwise in upheaval, a full review generates noise. Instead, run a shorter “reset” session focused on what you’re committing to test in the next 90 days, and come back to the full review once the ground stops moving.
That said, don’t skip more than two in a row. A team that goes nine months without a quarterly review will drift — and the longer the drift, the harder the correction when it finally happens.
Continue Learning
Explore more about sustaining a measurement practice:
- How to build a simple measurement plan before your first campaign — the input to every quarterly review.
- The four questions every metric should answer before you build a dashboard — the filter you apply during metric pruning.
- The hidden cost of vanity metrics — catches what your review should retire.
Bottom Line
A quarterly analytics review is ninety minutes that prevent quarters of drift. Five blocks: sanity, trends, assumptions, pruning, and decisions. One page of output. Three priorities for next quarter. The review isn’t a report on the past; it’s a compass reset for the next three months.

