The GTM Metrics Framework: What to Measure at Every Funnel Stage
A full-funnel GTM metrics framework covering awareness through expansion, metric trees, leading vs lagging indicators, and reporting cadence.
GTMStack Team
Most GTM Teams Are Drowning in Metrics
The average B2B go-to-market team tracks somewhere between 40 and 120 metrics across their tools. Dashboards overflow with charts. Weekly reports stretch to 15 pages. And yet, when the CEO asks “why did we miss the quarter?”, nobody has a clear answer.
The problem is not a lack of data. It is a lack of structure. Without a framework that connects individual metrics to business outcomes, you end up with a spreadsheet graveyard — numbers that get reported but never acted on.
Over the past three years, we have worked with dozens of revenue teams building their measurement practices from scratch. The ones that succeed share a common trait: they start with a framework before they start with a dashboard. They define what matters at each stage of the funnel, separate leading indicators from lagging ones, and build a metric tree that makes it possible to diagnose problems in minutes instead of days.
This post walks through that framework in detail.
The Full-Funnel Metrics Model
A GTM metrics framework needs to cover six distinct stages. Each stage has different goals, different owners, and different metrics. Mixing them together — which is what most teams do — creates confusion about who owns what and what actions to take when something goes wrong.
Stage 1: Awareness
Goal: Get your brand and message in front of the right audience.
Key metrics:
- Impressions by channel — Total reach across paid, organic, social, and content. Break this down by ICP-fit audience vs. general audience when possible.
- Share of voice — What percentage of category-relevant conversations include your brand? For B2B, track this through branded search volume relative to competitors, social mentions, and analyst coverage.
- Website traffic by source — Not just total visitors, but traffic quality. Segment by source and track the percentage that matches your ICP firmographics.
- Content reach — Downloads, views, and shares of top-of-funnel content.
Target-setting guidance: Awareness metrics are inherently noisy. Set targets on 90-day rolling averages rather than week-over-week changes. A 10-15% quarter-over-quarter growth rate in ICP-fit traffic is a strong benchmark for Series A through Series C companies.
Stage 2: Interest
Goal: Convert anonymous visitors into known contacts who have shown intent.
Key metrics:
- Marketing Qualified Leads (MQLs) — Contacts who meet your scoring threshold. Be ruthless about your scoring model — if more than 40% of MQLs get rejected by sales, your threshold is too low.
- Content engagement depth — Pages per session, time on site, and return visit rate for contacts who have identified themselves.
- Email opt-in rate — What percentage of visitors subscribe to your content?
- Demo/trial request rate — The most direct signal of interest. Track this as a percentage of total website sessions and as an absolute number.
Target-setting guidance: Demo request rates vary wildly by industry. For horizontal B2B SaaS, 1-3% of website sessions converting to a demo request is typical. For vertical SaaS with a narrower audience, 3-7% is achievable.
Stage 3: Consideration
Goal: Move interested contacts toward an active buying process.
Key metrics:
- Sales Accepted Leads (SALs) — MQLs that sales agrees are worth pursuing. The MQL-to-SAL acceptance rate is one of the most important alignment metrics between marketing and sales.
- First meeting booked rate — What percentage of SALs result in a discovery call? If this is below 60%, you have a handoff problem.
- Opportunity creation rate — SALs that convert to pipeline. Track the time from SAL to opportunity creation — if it exceeds 14 days on average, deals are stalling in early qualification.
- Content consumption during evaluation — Which case studies, comparison pages, and technical docs are prospects viewing? This tells you what objections they are trying to resolve.
Target-setting guidance: SAL-to-opportunity conversion between 40-60% indicates healthy qualification. Below 40% means marketing is sending unqualified leads. Above 70% might mean your criteria are too strict and you are leaving pipeline on the table.
Stage 4: Decision
Goal: Win deals and close revenue.
Key metrics:
- Win rate — Closed-won divided by total opportunities that reached a decision stage. Segment by deal size, segment, and source to find patterns.
- Average deal size — Track the trend over time. Declining deal sizes often signal a shift in buyer mix or discounting pressure.
- Sales cycle length — Days from opportunity creation to close. Measure the median, not the mean — outlier deals will skew the average.
- Competitive win rate — When you are in a competitive deal, how often do you win? Track this by competitor.
Target-setting guidance: Win rates between 20-35% are common for B2B SaaS with average deal sizes under $50K ARR. Above $100K ARR, win rates often compress to 15-25% due to longer evaluation cycles and more stakeholders.
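To see why the median matters for cycle length, here is a minimal sketch with hypothetical deal data — a single outlier enterprise deal is enough to distort the mean:

```python
import statistics

# Hypothetical cycle lengths (days) for ten closed-won deals.
# One outlier enterprise deal took 310 days to close.
cycle_days = [48, 52, 55, 58, 61, 63, 66, 70, 74, 310]

mean_days = statistics.mean(cycle_days)      # dragged up by the outlier
median_days = statistics.median(cycle_days)  # robust to it

print(f"mean: {mean_days:.0f} days, median: {median_days:.0f} days")
# -> mean: 86 days, median: 62 days
```

One 310-day deal shifts the mean by more than three weeks while the median barely moves, which is why targets and trend lines should be set on the median.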
Stage 5: Closed
Goal: Ensure successful onboarding and time-to-value.
Key metrics:
- Time to first value — How long until the customer achieves their initial success milestone? Define this clearly per product and track it religiously.
- Onboarding completion rate — What percentage of customers complete all onboarding steps within the expected window?
- Support ticket volume (first 90 days) — High volume here signals product or onboarding gaps.
- NPS/CSAT at 30, 60, 90 days — Early satisfaction scores predict retention better than any other metric.
Stage 6: Expansion
Goal: Grow revenue from existing customers.
Key metrics:
- Net Revenue Retention (NRR) — The single most important metric for any SaaS business. Includes expansion, contraction, and churn. Top-performing B2B companies maintain NRR above 115%.
- Expansion revenue as % of new ARR — Healthy companies generate 30-40% of new ARR from existing customers.
- Product usage trends — Feature adoption, seat utilization, and API call volume. Declining usage is the earliest warning sign of churn.
- Customer health score — A composite metric combining usage, engagement, support sentiment, and payment history.
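The NRR arithmetic is simple enough to sketch directly. The figures below are hypothetical, but the structure — start with a cohort's ARR twelve months ago, add expansion, subtract contraction and churn — is the standard calculation:

```python
# Hypothetical 12-month cohort (customers active a year ago).
starting_arr = 1_000_000
expansion = 250_000     # upsells, seat growth, cross-sell
contraction = 60_000    # downgrades within the cohort
churn = 90_000          # ARR from customers who cancelled

nrr = (starting_arr + expansion - contraction - churn) / starting_arr
print(f"NRR: {nrr:.0%}")  # -> NRR: 110%
```

Note that customers acquired during the period are excluded; NRR measures only what happened to the existing base.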
For a deeper look at how these metrics connect across teams, our Revenue Ops Playbook covers the data architecture required to make cross-stage measurement work.
Leading vs. Lagging Indicators
Every metric falls into one of two categories, and confusing them is one of the most expensive mistakes a GTM team can make.
Lagging indicators tell you what already happened. Revenue, win rate, churn rate, NRR — these are outcomes. By the time a lagging indicator moves, the underlying cause happened weeks or months ago. You cannot manage a business by watching lagging indicators alone. That is like driving by looking in the rearview mirror.
Leading indicators predict what will happen. Pipeline creation rate, first meeting booked rate, content engagement, and product usage trends — these give you early warning. When a leading indicator drops, you have time to intervene before it shows up in your revenue numbers.
The practical rule: for every lagging indicator you report to leadership, identify at least two leading indicators your team monitors daily.
Here is an example. Your lagging indicator is quarterly win rate (currently 28%). Your leading indicators might be:
- Discovery call quality score — Tracked by recording and scoring the first 50 discovery calls each month. A drop in quality score today predicts a drop in win rate 60-90 days from now.
- Multi-threaded deal percentage — Deals with 3+ contacts engaged have a 2.4x higher win rate than single-threaded deals. If your multi-threading rate drops, your future win rate will follow.
SDR-specific leading indicators deserve their own treatment — we cover those in detail in our post on SDR metrics that actually matter.
The Metric Tree: From North Star to Tactical Metrics
A metric tree is the structural backbone of your framework. It answers the question: “When this number moves, what caused it?”
Level 1: North Star Metric
Pick one. For most B2B SaaS companies, this is ARR growth rate or net new ARR per quarter. Everything else exists to explain and drive this number.
Level 2: Branch Metrics (3-4 maximum)
These are the major components that sum to your north star. For a company targeting $2M net new ARR per quarter:
- New business ARR — $1.4M target (70% of total)
- Expansion ARR — $800K target (40% of total)
- Churn/contraction — -$200K target (capping gross ARR loss at 10% of the net-new target)
Note: new business (70%) and expansion (40%) intentionally sum to 110% because churn and contraction offset the difference, netting out to 100% of the target.
Level 3: Driver Metrics
Each branch metric breaks down into the factors that drive it. For new business ARR of $1.4M:
- Pipeline created — $5.6M (assuming a 25% win rate)
- Average deal size — $35K ARR
- Number of deals needed — 40 closed-won deals
- Sales cycle length — 62 days median
For pipeline created of $5.6M:
- Inbound pipeline — $2.8M (50% of total)
- Outbound pipeline — $1.7M (30% of total)
- Partner/channel pipeline — $1.1M (20% of total)
Level 4: Tactical Metrics
These are the daily and weekly activity metrics that individual contributors control. For inbound pipeline of $2.8M:
- Website sessions — 3,800/month
- Session-to-MQL rate — 2.2%
- MQL-to-SAL rate — 55%
- SAL-to-opportunity rate — 48%
- Average pipeline value per opportunity — $42K
Multiplied through, these numbers produce roughly 66 opportunities and $2.8M of inbound pipeline per quarter — the tactical metrics have to reconcile with the driver metrics above them.
Now, when the VP of Sales asks “why is pipeline light this month?”, you can trace the tree. Website sessions are on track. MQL conversion is on track. But SAL-to-opportunity dropped from 48% to 31% — sales reps are rejecting more leads. That is a specific, actionable diagnosis.
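That drill-down can be automated. Here is a minimal sketch — hypothetical names, values, and threshold — that compares each driver's actual against its target and flags any that are materially below plan:

```python
# Hypothetical targets and actuals for the inbound funnel drivers.
targets = {"session_to_mql": 0.022, "mql_to_sal": 0.55, "sal_to_opportunity": 0.48}
actuals = {"session_to_mql": 0.023, "mql_to_sal": 0.56, "sal_to_opportunity": 0.31}

def flag_drivers(targets, actuals, tolerance=0.10):
    """Return the drivers running more than `tolerance` (10%) below target."""
    return [
        name for name, target in targets.items()
        if (target - actuals[name]) / target > tolerance
    ]

print(flag_drivers(targets, actuals))  # -> ['sal_to_opportunity']
```

The point is not the code itself but the shape of the check: every level of the tree gets the same target-vs-actual comparison, so a miss at the top can be traced to a specific driver in one pass.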
The analytics capabilities you choose should support this kind of drill-down natively, without requiring an analyst to build custom queries every time someone asks a question.
Setting Targets That Drive Behavior
Bad targets create bad behavior. Here are the principles that work.
Start with historical data, not aspirations. Pull 6-12 months of conversion rates, cycle times, and activity volumes. Your targets should reflect what is achievable based on evidence, not what the board wants to see.
Set targets at each level of the metric tree. A revenue target without corresponding pipeline, conversion, and activity targets is just a wish. Work backward: if you need $2M in new ARR and your win rate is 25%, you need $8M in pipeline. If your average deal is $35K, that is 229 opportunities. If your opportunity creation rate is 12% of SQLs, you need 1,908 SQLs.
Use ranges, not point estimates. Instead of “40 deals this quarter,” set a target range: “36-44 deals (commit: 36, target: 40, stretch: 44).” This gives your team clarity about what is expected versus what is exceptional, and it makes forecasting discussions more productive.
Revisit quarterly. Markets shift. Product changes affect conversion rates. A target set in January based on December data may be irrelevant by April. Build a quarterly target review into your operating cadence.
Never set a target without an owner. Every metric in your framework should have one person accountable for it. Not a team — a person. Shared ownership is no ownership.
Reporting Cadence: When and What
Different metrics require different reporting frequencies. Getting this wrong either creates noise (daily reports on metrics that barely move) or blindness (monthly reports on metrics that needed intervention last week).
Daily (operational teams only):
- Activity metrics: calls made, emails sent, meetings booked
- Pipeline created (running total)
- Website traffic and conversion rates
- Support ticket volume
Weekly (team leads and managers):
- Pipeline movement (new, advanced, slipped, lost)
- Conversion rates at each funnel stage
- Leading indicator trends
- Forecast updates
Monthly (leadership and cross-functional):
- Full metric tree review
- Leading vs. lagging indicator trends
- Cohort analysis (how is this month’s pipeline performing vs. prior months at the same age?)
- Experiment results and operational changes
Quarterly (board and executive):
- North star and branch metrics vs. targets
- Year-over-year and quarter-over-quarter trends
- Strategic metric changes (NRR, CAC payback, LTV:CAC)
- Target recalibration
Revenue Operations teams typically own the reporting cadence and are responsible for maintaining the metric tree. If you are building a RevOps function, establishing this cadence should be one of your first 30-day priorities.
Avoiding Metric Overload
More metrics is not better. Here are the warning signs that your measurement practice has become counterproductive:
Your weekly report takes more than 15 minutes to review. If your team spends more time discussing the report than discussing what to do about the numbers, you have too many metrics.
Nobody can name the top 3 metrics from memory. Ask five people on your GTM team what the three most important metrics are. If you get five different answers, your framework is not working.
Metrics create conflicting incentives. Marketing optimizes for MQL volume. Sales complains about lead quality. This classic conflict happens because the metrics are disconnected — MQL volume is rewarded without a corresponding quality gate.
You are measuring things you cannot influence. Every metric should have a clear action associated with it. If a metric moves and nobody knows what to do differently, remove it from your active dashboard. It can live in an analysis tool for periodic deep investigation, but it does not belong in your operating metrics.
The fix is subtraction, not addition. When something goes wrong, the instinct is to add more metrics. Resist it. Instead, ask: “Which existing metric, if we paid closer attention to it, would have told us about this problem earlier?”
A strong metrics framework for a $5-50M ARR company should have 15-25 metrics in active use. The metric tree might contain 40-60 total, but most of those are diagnostic — you only look at them when a higher-level metric signals a problem.
Building the Framework in Practice
Here is the sequence that works for teams implementing this from scratch:
Week 1: Audit. List every metric currently tracked across all tools. For each one, note who owns it, how often it is reviewed, and what action it triggers. Most teams find that 60-70% of their metrics fail the “what action does this trigger?” test.
Week 2: Define the tree. Start with your north star and work down through branches, drivers, and tactical metrics. Get sign-off from every team lead on the metrics that affect their team.
Week 3: Set baselines. Pull historical data for every metric in the tree. Calculate trailing 6-month averages and identify trends. This becomes your baseline for target-setting.
Week 4: Build dashboards. Create three dashboards: executive (north star + branches), team lead (drivers + leading indicators), and individual contributor (tactical metrics + daily activities). Our post on building revenue dashboards covers the design principles for each.
Weeks 5-6: Operationalize. Establish the reporting cadence. Run the first full metric tree review. Identify gaps in data collection and prioritize fixing them.
Ongoing: Iterate. Every quarter, review the framework. Remove metrics nobody acts on. Add metrics that would have helped diagnose recent problems. Adjust targets based on new data.
The framework is not a one-time project. It is a living system that evolves with your business. But the structure — the funnel stages, the metric tree, the leading/lagging distinction, the reporting cadence — that structure should remain stable. It gives your team a shared language for talking about performance, diagnosing problems, and making decisions.
That shared language is worth more than any individual metric.