Monday, May 4, 2026
The Marketing Stack Audit Every Mid-Market CMO Should Run This Quarter
By the Fuelly Team
Most mid-market marketing teams pay for too much software and use too little of it. This is not a controversial claim. It is the dominant finding of the only stack utilization study most CMOs cite, and the math gets worse every year.
Gartner's most recent martech survey of 405 marketing leaders found teams use only 33% of their stack's capability, down from 42% in 2022 and 58% in 2020. The 33% utilization problem has its own paper in this series and is worth reading alongside this one. Capability use has fallen by nearly half in five years while average stack size has grown. The pattern is consistent: teams buy tools, fail to onboard them fully, watch the contracts auto-renew, and absorb the slow accumulation of underused software the way a house accumulates cables in the kitchen drawer.
The cost is not just the line item. It is the strategic distortion. Every dollar tied up in a tool the team does not use is a dollar not spent on the channels that drive pipeline. Every hour spent maintaining tooling logins, integrations, and failed pilots is an hour not spent on content, audience, or measurement.
This paper is an audit framework for the mid-market CMO who suspects the stack is bigger than it should be and wants to know what to do about it before the next renewal cycle. The framework is opinionated. It has to be. A neutral audit produces neutral conclusions, and a stack audit needs verdicts.
Why is the average mid-market stack overgrown?
The structural reason is that buying decisions and using decisions are made by different people on different time scales.
A tool gets bought because somebody on the team had a problem, ran a pilot, liked the demo, and got procurement to sign a 12 to 24 month contract. Six months later, that person leaves, changes role, or gets reassigned. The tool stays because the contract has not expired. Eighteen months later, the contract auto-renews because nobody owns the renewal decision and the cost of letting it renew (a known dollar amount) feels lower than the cost of canceling (an unknown disruption).
This pattern repeats per tool, per year, until the stack is full of software whose original buyer no longer works there. We have seen mid-market stacks where 40% of the tools predate the current marketing director. None of those tools are necessarily wrong. The point is that nobody on the current team chose them, and nobody on the current team has audited whether they fit the current strategy.
Gartner's 2025 CMO Spend Survey found marketing budgets have flatlined at 7.7% of overall company revenue, with 59% of CMOs reporting insufficient budget to execute strategy. Paid media's share of those constrained budgets rose to 31% in 2025, while martech, agency, and labor lines all declined. The trend signal is clear: CMOs are reallocating away from martech under budget pressure. The teams doing this well are auditing first and cutting deliberately. The teams doing it badly are cutting reactively at renewal time, often dropping the wrong tools because nobody had time to evaluate properly.
What does a stack audit actually produce?
Before running the audit, define what good output looks like. Three things.
A keep list. Tools the audit confirms are doing real work, with a current owner, with utilization above the threshold (we use 60%), and with a defensible answer to "what breaks if we turn this off." These tools renew without further debate.
A cut list. Tools the audit confirms are dead weight. Low utilization, no clear owner, no measurable impact on funnel performance, no migration cost beyond the disruption of the cancellation itself. These tools get sunset on the next contract end date, with calendar dates assigned now.
A fix list. Tools the audit shows are valuable but underused. The fix might be additional onboarding, a workflow redesign, an integration with another tool in the stack, or a downgrade to a smaller tier. Fix-list items get an owner, a 60-day deadline, and a follow-up audit. If the fix does not happen by the deadline, the tool moves to the cut list automatically.
The output is not a 40-page report. It is a one-page document with three columns and a sunset calendar. If the audit produces something longer, the audit was the wrong format.
Step 1: Inventory what you actually have
Most mid-market teams cannot produce an accurate stack inventory in under an hour. Try it and see. Pull the list from finance, then ask the team to add anything finance missed. The two lists will not match. The gap is your starting point.
For each tool, capture seven fields. Tool name. Vendor. Annual cost. Contract end date. Auto-renew terms. Current owner. What it does in one sentence.
Three observations almost always surface during this step. First, a meaningful number of tools have no owner. The original buyer is gone and nobody picked it up. Mark those for the cut list now. They are unlikely to recover during the audit.
Second, the same capability appears multiple times. Two email tools, three analytics tools, four social schedulers. Each came in for a specific use case that has since faded. The audit will probably collapse most of these into one tool per category.
Third, the cheap recurring SaaS adds up faster than anyone expects. Twelve $200-a-month tools cost $28,800 a year. That is a meaningful fraction of a headcount or a serious paid-media reallocation. The cheap tier is where stacks grow fastest, because nobody flags individual purchases as material, and it is where the cumulative waste is largest.
Step 2: Score utilization on each tool
For every tool on the inventory, answer four questions. These are the questions that produce honest utilization numbers, and they are simple enough that the team can answer them in a single working session.
How many people on the team actually log in to this monthly? If the answer is one or zero, the tool is functionally a personal tool, which is fine for some categories and a red flag for others. If the answer is "the marketing automation platform serves five people but only one logs in," the team is paying for a platform but using it as a single-seat tool. Either consolidate or downgrade.
What percentage of paid features are in active use? This is harder. Most tools do not report it. Estimate. If the team uses the platform for one of its eight modules, mark utilization at 12.5%. If the team uses two modules, 25%. The Gartner number above (33% average) is a useful benchmark. Anything below 25% is a red flag regardless of how loved the tool is.
What is cost per active user? Annual cost divided by 12, then divided by monthly active users. For most marketing tools at mid-market scale, the comparable benchmark is $50 to $200 per active user per month. Tools at $500 to $1,000 per active user per month are either enterprise tools serving a small team (probably the wrong tool) or consultancy-priced tools with a thin team behind them (also probably the wrong tool).
What breaks if you turn it off tomorrow? This is the most important question. If the answer is "nothing meaningful breaks for 30 days," the tool is unlikely to be worth what it costs. If the answer is "the website goes down" or "the email engine stops" or "we lose all our customer data," the tool stays regardless of utilization.
The four answers, taken together, produce a defensible utilization score. Use a 1 to 5 scale per tool. Anything 1 or 2 goes on the cut list. Anything 3 goes on the fix list. Anything 4 or 5 goes on the keep list.
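The four questions and the 1-to-5 routing can be sketched as a small scoring routine. The specific thresholds and the keep-regardless rule for critical tools come from the text; the scoring weights are illustrative assumptions, not fixed rules:

```python
def utilization_score(monthly_active_users: int,
                      feature_use_pct: float,      # 0-100, estimated
                      annual_cost: float,
                      breaks_something_critical: bool) -> int:
    """Map the four audit questions to a rough 1-5 score.
    Weights and thresholds are illustrative assumptions."""
    if breaks_something_critical:
        return 5  # "the email engine stops" -> keep regardless of utilization
    score = 1
    if monthly_active_users >= 2:
        score += 1  # more than a single-seat tool
    if feature_use_pct >= 25:
        score += 1  # below 25% is a red flag per the text
    monthly_cost_per_user = annual_cost / 12 / max(monthly_active_users, 1)
    if 50 <= monthly_cost_per_user <= 200:
        score += 1  # inside the benchmark band from the text
    return min(score, 5)

def route(score: int) -> str:
    """1-2 -> cut, 3 -> fix, 4-5 -> keep."""
    if score <= 2:
        return "cut"
    if score == 3:
        return "fix"
    return "keep"

# A $14,400/yr tool with 6 monthly users and 30% feature use:
s = utilization_score(6, 30.0, 14_400, False)
print(s, route(s))  # -> 4 keep
```

The value of scoring in one session, with the whole team, is that the estimates are argued out loud rather than produced by one person defending their own tools.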
Step 3: Map tools to outcomes, not features
The most common audit mistake is comparing tools on features instead of outcomes. Tool A has 47 features. Tool B has 38. Tool A wins, on paper. In practice, the team uses 6 features in either tool, and the 41 features only Tool A has are doing zero work in the funnel.
Replace the feature comparison with an outcome comparison. For each top-of-funnel, mid-funnel, and bottom-of-funnel objective the team is responsible for, list the tools currently contributing. Most stacks reveal a top-heavy pattern at this stage: 70% of the tools are working on top-of-funnel awareness and content production, 20% are mid-funnel nurture and CRM, and 10% are bottom-funnel conversion. The waste is almost always concentrated in that awareness layer, where multiple tools claim to drive awareness but cannot show incremental contribution.
This is also where the consolidation question gets concrete. If the awareness layer has six tools and the team can name three that are doing the work, the other three are candidates for consolidation regardless of their individual utilization scores. Consolidation in marketing tech is not about cost. It is about reducing the number of decisions the team has to make and the number of integrations they have to maintain.
The CMI 2024 benchmark study found 76% of B2B marketers have a dedicated content team, with 54% of those teams sized at 2 to 5 people. A 3-person content team cannot effectively use 12 different tools. The math does not work. Each additional tool is a learning curve, an integration cost, and an attention cost the team is paying with hours that should be going to content production.
Step 4: Re-baseline ROI on every tool over $25K a year
Tools below $25K a year can be evaluated quickly with the utilization framework above. Tools above $25K a year deserve a real ROI conversation. This is the layer most CMOs skip because the math is uncomfortable and the conversations are political.
For each tool over $25K, answer three questions.
What outcome is this tool contributing to, and is the contribution measurable? The marketing automation platform is contributing to email-driven pipeline. The CRM is contributing to lifecycle revenue. The analytics platform is contributing to decision quality. If the contribution is measurable, what is the dollar figure? If it is not measurable, is there a defensible qualitative argument? Tools that fail both questions go on the cut list immediately. The vendor-published ROI numbers, like the frequently cited 353% three-year marketing automation ROI from Forrester, are useful as category baselines but not as substitutes for measuring the specific tool inside the specific stack.
Could a smaller tier or a competing tool deliver 80% of the contribution at 50% of the cost? This is the question vendors hate, and it is the right question to ask. Most enterprise-tier marketing tools were sized for a larger team than the current one. Most "we negotiated a great deal" pricing was negotiated 18 to 36 months ago, when the team was bigger or the strategy was different. Re-quote the contract at current usage levels. Vendors who refuse to renegotiate are, in effect, betting that the switching cost will keep the renewal alive. Sometimes that bet is wrong.
What is the realistic switching cost, in hours and in disruption? This is the question that protects the team from cutting tools that are politically easy to cut but operationally important. CRM migrations are expensive. Marketing automation migrations are expensive. Email service provider migrations are moderately expensive. Most other tools are not. If the switching cost is under 80 hours of team time and the savings are over $30K a year, the math favors the cut almost every time.
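The 80-hour, $30K rule of thumb above is a break-even check: does the first-year saving exceed the one-time switching cost? The loaded hourly rate below is an illustrative assumption; substitute your own:

```python
def cut_favors_math(switching_hours: float,
                    annual_savings: float,
                    loaded_hourly_rate: float = 100.0) -> bool:
    """Return True if first-year savings exceed the one-time switching cost.
    The $100/hr loaded rate is an illustrative assumption, not from the text."""
    switching_cost = switching_hours * loaded_hourly_rate
    return annual_savings > switching_cost

# The text's threshold case: 80 hours of team time against $30K of savings.
print(cut_favors_math(80, 30_000))  # -> True: $8,000 of time vs $30,000 saved
```

At any plausible loaded rate, 80 hours of migration against $30K of recurring annual savings clears the bar in year one and again every year after, which is why the text says the math favors the cut almost every time.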
Per Gartner's 2024 CMO Spend Survey, 39% of CMOs plan to cut agency budgets, with the top stated actions being "eliminate unproductive agency relationships" and "streamline rosters." The agency vs. AI marketing math is the deeper version of that conversation. The exact same pattern applies to martech. Eliminate unproductive vendor relationships. Streamline the stack. The strategic logic is identical, but more CMOs have applied it to agencies than to tools, because agencies have humans on the other end of the cancellation conversation and tools do not.
Step 5: Build the consolidation case
Once the keep, cut, and fix lists are drafted, look across them for consolidation opportunities. Three patterns recur in mid-market stacks.
Content production tools. Many teams pay for a writing tool, a graphics tool, a video tool, a social scheduler, a repurposing tool, an SEO research tool, and an editorial calendar tool. AI-native platforms increasingly do most of those in one place, with voice infrastructure and channel-native output. The 2024 stack and the 2026 stack should not look the same in this category. If they do, the audit has not done its job.
Email and automation tools. Many teams have a marketing automation platform plus a separate email service provider plus a separate transactional email tool plus a separate newsletter tool. The first two often consolidate. Most marketing teams send fewer than 100,000 emails a month, which is well within the consolidation tier of the major platforms. Mailchimp's all-industry email benchmarks show a 35.63% open rate, 2.62% click rate, and 0.22% unsubscribe rate, with the caveat that Apple Mail Privacy Protection inflates open rates and clicks are now the more reliable signal. A consolidated email and automation stack with clean reporting on both metrics is more useful than four separate tools each tracking a slice.
Analytics and reporting tools. Many teams have an analytics platform, a dashboard tool, a heatmapping tool, a session replay tool, an A/B testing tool, and an attribution tool. The combined cost is often 4 to 5x what the team would pay for a single platform plus a smaller secondary. The "best-of-breed" argument is most often invoked here. In practice, the marginal feature gap is usually doing zero work in the funnel.
The consolidation move is not "buy the biggest platform that does everything." It is "consolidate where the marginal feature gap is not actually contributing." The audit produces the consolidation case as a side effect of the utilization data, not as a separate exercise.
Step 6: Set the calendar and run the cuts
The audit is worth nothing if the cuts do not happen. Most stack audits fail at this step because the cancellation work is more uncomfortable than the analysis was.
Set the cancellation calendar in writing, with a date for each tool on the cut list and a named owner for the cancellation. Calendar the renewal date. 60 days before each renewal, the owner sends the cancellation notice or renegotiates the contract on tighter terms. Auto-renew is the default failure mode. Removing it is the default fix.
For tools on the fix list, the owner has 60 days from the audit date to demonstrate measurable improvement. If the metric does not move, the tool moves to the cut list automatically. This is the discipline that makes the audit produce results. Without the automatic-cut clause, the fix list becomes a holding pen for tools that nobody wants to cancel and nobody is willing to fix.
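The automatic-cut clause is mechanical enough to state precisely. A minimal sketch of the verdict logic, with the 60-day window from the text:

```python
from datetime import date, timedelta

def fix_list_verdict(audit_date: date, metric_moved: bool,
                     today: date, window_days: int = 60) -> str:
    """Automatic-cut clause: past the window with no movement -> cut.
    window_days defaults to the 60-day deadline from the text."""
    deadline = audit_date + timedelta(days=window_days)
    if metric_moved:
        return "keep"
    return "cut" if today > deadline else "fix (pending)"

# A tool audited May 4 whose metric has not moved by July 10:
print(fix_list_verdict(date(2026, 5, 4), False, date(2026, 7, 10)))  # -> cut
```

The point of writing it this way is that the verdict requires no meeting and no debate. The only input anyone can argue about is whether the metric moved, which is exactly the argument the fix-list owner was hired into.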
For the keep list, schedule the next light audit for 90 days out and the next deep audit for 12 months out. Note the contract end dates on the marketing leadership calendar. The keep list is not permanent. It is current.
What does the team do with the savings?
This is the part most audits underdeliver on, and it is the part the CFO will care about most.
The savings should not just go back to the budget pool. They should be reallocated, with the reallocation announced as part of the audit output. Three categories tend to produce the highest returns at mid-market scale.
Content production capacity. Either headcount or AI-native platforms that compress production time. CMI's benchmark research shows only a third of B2B marketers have a scalable content model. The constraint for most mid-market teams is not strategy or measurement, it is throughput. Reallocating saved tooling spend to throughput is usually the highest-impact move. HubSpot's 2026 State of Marketing report found 83.5% of marketers say they are expected to produce more content, with 35.7% saying "much more." The content-production budget line is going to grow whether the CMO funds it deliberately or not. The question is whether the funding comes from reallocated stack savings or from cuts elsewhere.
Paid media in the channels with proven incrementality. Not "more paid media in general," which is the lazy reallocation. Specifically, more paid media in the channels the team has already validated through incrementality testing or geo holdouts. If the team has not validated, fund the validation work first. DemandScience's 2026 State of Performance Marketing Report found marketers waste roughly 25% of their budgets on activities that produce no results, with the worst-measured teams wasting 30%. The argument for funding validation before scaling spend is, in dollar terms, the largest argument in marketing.
First-party data infrastructure. Email lists, customer data platforms, post-purchase surveys, customer interviewing, voice-of-customer programs. The teams investing in first-party data over the last three years are the teams with the cleanest measurement and the highest content-production efficiency in 2026. The compounding is real.
What the savings should not fund: another martech tool that solves a problem the team has not formally diagnosed. The audit's whole point was to break that pattern. Spending the savings on another single-purpose tool unlearns the lesson.
A short, honest soft sell
FUEL is one of the platforms a stack audit might surface as a consolidation opportunity in the content production layer. We are AI-native, voice-aware, and built for mid-market teams that need to ship volume in their own voice across every channel they use. If the audit reveals three or four content tools each doing one slice of the work, FUEL probably replaces most of them at a lower combined cost.
We are not in the audit business and we are not the right tool for every stack. The honest answer is that some stacks should consolidate toward us, some should consolidate toward a different platform, and some should keep their best-of-breed setup because the team is large enough to use it. The audit is what tells the team which version they are.
Run the Foundation Report on your business. If the output surprises you, that is the point.
If you're an agency, generate a Foundation Report on a client you have worked with for years. If the output does not challenge your thinking, walk away. If it does, the team plans are priced for agencies ready to scale what works.
If a different paper in the series is more relevant to where the team is right now, the full list is at /white-papers.
Frequently asked questions
What's the goal of a marketing stack audit?
How often should a mid-market CMO run a stack audit?
What's the biggest waste category in a typical mid-market stack?
How do you measure stack utilization without a dedicated analytics platform?
What about consolidation versus best-of-breed?
How does AI change the stack audit?
Ready to put this into practice?
FUEL gives mid-market and SMB teams the AI-powered content engine to execute on what these papers describe.
See pricing