The Ahrefs + Google Ads MCP Server That Saved a Content Agency 52 Hours a Week
A content agency was spending 3 hours per client per week on Ahrefs audits. An MCP server turned those 3 hours into 20 minutes per account and unblocked 5 more AI use cases in the process.
Why 3 hours per client per week is a ceiling, not a cost
A content agency with 20 client accounts runs on repeatable deliverables. Every week, someone logs into Ahrefs, pulls ranking data for each account, cleans the export, compares it to the prior week, identifies the biggest movers, writes up what changed and why, then briefs the writer team on what to target next.
That process took about 3 hours per account per week for this agency. Multiply by 20 and you get 60 hours a week of delivery ops. The founder was doing most of it, because the founder was the one who knew how to read the data and translate it into briefs the writers could actually use. At 20 accounts, there was no room left in the week. Adding a 21st account meant adding founder time, and there was none available. The ceiling was not a headcount problem. It was a data-access problem wearing the costume of a headcount problem.
The first instinct was API scripts. Why we did not go that route.
The obvious first move is a script: hit the Ahrefs API, pull the data, feed it to Claude, get a draft audit. That script takes a day to build and it works. The problem is it solves exactly one problem.
When the next request arrives, for keyword gap analysis, you write another script. Then a competitor brief script. Then an internal linking audit script. Six months later you have eight scripts maintained separately. Each one re-solves the data access problem from scratch. Each one has its own authentication, its own error handling, its own output parsing. Touching the Ahrefs API credential means updating eight files.
The Model Context Protocol offers a different architecture. Instead of scripts that each reach directly into Ahrefs, you build one MCP server that exposes Ahrefs as a set of named tools. Claude calls those tools on demand, the same way a person clicks through the Ahrefs interface when they need data. The MCP server handles authentication once. Error handling lives in one place. Any Claude-backed workflow can call the same tools without modification. Scripts are throwaway. An MCP server is infrastructure.
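The one-server-many-tools idea can be sketched in a few lines. This is an illustrative skeleton, not the agency's actual code: tool names mirror the ones described later, but the registry, handler shape, and environment variable are assumptions.

```typescript
// Minimal sketch of "one server, many tools" (names are illustrative).
// Each data source is wrapped once; any workflow dispatches by tool name.

type ToolHandler = (params: Record<string, unknown>) => Promise<unknown>;

const tools = new Map<string, ToolHandler>();

// Authentication is configured once, here, instead of in every script.
const AHREFS_TOKEN = process.env.AHREFS_TOKEN ?? "dev-token";

function registerTool(name: string, handler: ToolHandler): void {
  tools.set(name, handler);
}

async function callTool(
  name: string,
  params: Record<string, unknown>,
): Promise<unknown> {
  const handler = tools.get(name);
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(params); // error handling and logging can live here, once
}

registerTool("get_backlink_profile", async ({ domain }) => {
  // Real version: an authenticated Ahrefs API call using AHREFS_TOKEN.
  return { domain, backlinks: [], authenticated: AHREFS_TOKEN.length > 0 };
});
```

Every new use case calls `callTool` with a name and parameters; none of them re-solve auth or error handling.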
What the MCP server actually exposed
For this agency, the server exposed four primary tools to Claude, built with the official TypeScript MCP SDK:
- get_backlink_profile - pulls domain-level backlink data for a given account from the Ahrefs API
- keyword_gap_analysis - compares the account's ranking keywords against one or more competitor domains
- competitor_paid_keywords - surfaces competitor Google Ads spend and paid keyword data via the Google Ads API
- client_data_lookup - queries the internal Meerkat article database to return what has already been written and published for each account
These are not complex tools individually. Each one is a few dozen lines wrapping an authenticated API call and returning structured JSON. The complexity is in the data model and the auth, not the code volume. The full server is a few hundred lines of TypeScript. What makes it valuable is that every workflow the agency runs can call any of these tools without writing new integration code.
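A single tool at this size looks roughly like the following. The endpoint path, query parameters, and response shape are illustrative placeholders, not the actual Ahrefs v3 API contract:

```typescript
// Sketch of one thin tool: build an authenticated request, return structured
// JSON. Endpoint and parameter names below are invented for illustration.

interface BacklinkRequest {
  url: string;
  headers: Record<string, string>;
}

function buildBacklinkRequest(domain: string, token: string): BacklinkRequest {
  const qs = new URLSearchParams({ target: domain, mode: "domain" });
  return {
    // Placeholder host; the real tool would target the Ahrefs API.
    url: `https://api.example-ahrefs.test/v3/backlinks?${qs.toString()}`,
    headers: { Authorization: `Bearer ${token}`, Accept: "application/json" },
  };
}

async function getBacklinkProfile(
  domain: string,
  token: string,
): Promise<unknown> {
  const req = buildBacklinkRequest(domain, token);
  const res = await fetch(req.url, { headers: req.headers });
  if (!res.ok) throw new Error(`Ahrefs request failed: ${res.status}`);
  return res.json(); // returned to Claude as structured JSON
}
```

The tool is a request builder plus an error check; the leverage is that this is the only place the Ahrefs credential and response handling exist.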
The audit pipeline that was the first customer
The weekly audit pipeline was the first use case the MCP server served. Here is what it does end to end.
- A scheduled job triggers for each active account. Claude receives a system prompt with the account context and a user message requesting the weekly audit.
- Claude calls get_backlink_profile for the prior 7 days and compares rankings against the prior period to identify gainers and losers above a threshold.
- It calls client_data_lookup to see what articles have been published in the last 30 days.
- Based on that context, Claude decides whether to call keyword_gap_analysis on top movers or competitor_paid_keywords on pages with paid traffic.
- It outputs a structured audit with a ranked list of issues, a brief for each priority keyword target, and a section on what the ranking changes indicate about the account's content velocity.
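The "gainers and losers above a threshold" step is plain data work the pipeline does before Claude reasons about the results. A sketch, with field names and the default threshold as assumptions:

```typescript
// Identify keywords that moved more than `threshold` positions week over week.
// Field names and the threshold default are illustrative.

interface RankSnapshot {
  keyword: string;
  position: number; // 1 = top ranking
}

interface Mover {
  keyword: string;
  delta: number; // positive = improved (moved toward position 1)
}

function findMovers(
  prior: RankSnapshot[],
  current: RankSnapshot[],
  threshold = 3,
): { gainers: Mover[]; losers: Mover[] } {
  const prev = new Map(prior.map((r) => [r.keyword, r.position]));
  const gainers: Mover[] = [];
  const losers: Mover[] = [];
  for (const row of current) {
    const before = prev.get(row.keyword);
    if (before === undefined) continue; // newly ranking keyword, handled elsewhere
    const delta = before - row.position; // a lower position number is a gain
    if (delta >= threshold) gainers.push({ keyword: row.keyword, delta });
    else if (delta <= -threshold) losers.push({ keyword: row.keyword, delta });
  }
  return { gainers, losers };
}
```

Only the movers, not the full keyword dump, go into the prompt, which keeps the variable portion of each audit call small.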
The founder reviews the Claude output in about 20 minutes per account. Most weeks the edits are minor. Total time per account went from 3 hours to roughly 20 minutes. Across 20 accounts that is 52 hours a week recovered. For the full case breakdown, see the SEO content engine case study.
What prompt caching did to the run cost
Each weekly audit shares a large block of system context: the audit format, the account profile, the writer brief template. That context is identical across runs for the same account. Anthropic's prompt caching stores the cached prefix tokens at a reduced rate so subsequent calls reuse them without paying full input token cost.
For this agency's audit pipeline, caching typically cuts Claude API spend by 60 to 80 percent. The system prompt and client context sit in the cache. Only the variable data, the actual ranking delta and article list, is billed at the full input rate. That is a large fraction of why the monthly run cost sits under $150 across 20 accounts at weekly cadence. The Ahrefs seat was already paid for. The Google Ads API access was already in place. The MCP server added minimal infrastructure cost on top of what the agency was already running.
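The arithmetic behind that 60 to 80 percent figure is easy to model. The pricing ratios below are assumptions to check against current Anthropic pricing: cache reads billed at roughly 10% of the base input rate, cache writes at roughly a 25% premium. The token counts are illustrative:

```typescript
// Back-of-envelope model of prompt caching savings. Ratios are assumptions:
// cache reads at ~10% of base input rate, cache writes at ~1.25x.

function auditInputCost(
  cachedTokens: number,   // shared system context (format, profile, template)
  variableTokens: number, // fresh ranking delta and article list
  baseRatePerMTok: number,
  cacheHit: boolean,
): number {
  const cachedRate = cacheHit
    ? baseRatePerMTok * 0.1   // subsequent runs read the cache
    : baseRatePerMTok * 1.25; // first run pays to write it
  return (cachedTokens * cachedRate + variableTokens * baseRatePerMTok) / 1_000_000;
}

// Example: 8k tokens of shared audit context, 2k tokens of fresh data.
const uncached = auditInputCost(8000, 2000, 3.0, false);
const cached = auditInputCost(8000, 2000, 3.0, true);

// Saving versus paying full input rate on all 10k tokens:
const fullRate = ((8000 + 2000) * 3.0) / 1_000_000;
const saving = 1 - cached / fullRate; // ~0.72 with these assumed numbers
```

With an 80/20 split between shared and variable tokens, the saving lands around 72%, squarely inside the 60 to 80 percent range the pipeline sees.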
The 5 use cases that shipped in days
Once the MCP server existed, every subsequent use case was an incremental build. The data access problem was already solved.
Keyword gap analysis. Claude compares the account's ranking keywords against two competitors using the existing Ahrefs tools. New prompt, same data layer. Shipped in two days.
Competitor brief generation. For accounts expanding into a new topic area, Claude pulls competitor content structure and keyword overlap, then writes a brief targeting the gap. Same tools, new prompt logic. Two days.
Internal linking audit. The Meerkat data already in the server shows all published articles per account. Claude identifies pages ranking for similar keywords and flags missing internal links. One new MCP tool was added to expose the Meerkat link graph. Three days.
Monthly performance summary. A longer-form synthesis combining ranking trends, Google Ads data, and article output into an executive summary format. No new tools required. One day.
New account onboarding audit. When a new client signs, Claude runs a deep pull of the domain, top 50 ranking pages, keyword gaps against three competitors, and Google Ads waste. Previously this took the founder half a day. With MCP it runs in about 15 minutes.
Five use cases. None required rebuilding the data layer. The pattern is consistent: the connection is solved once, and every subsequent use case draws from it.
What I would do differently
The biggest thing I would change is schema validation on the MCP tool outputs, applied from day one. The tools were returning JSON and Claude was parsing it correctly, but there was no runtime contract enforcing that the shape would not drift if an API response changed. Adding a Zod schema or equivalent at each tool boundary would have caught a few edge cases earlier and made the server easier to extend without breaking existing consumers.
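Zod is the natural fit for that contract; as a dependency-free sketch, the same boundary check looks like this. The field names are illustrative, not the server's actual response shape:

```typescript
// Runtime guard at the tool boundary: fail loudly when the upstream API's
// response shape drifts, instead of leaking a malformed payload downstream.
// Field names are illustrative.

interface BacklinkProfile {
  domain: string;
  referringDomains: number;
  backlinks: { url: string; anchor: string }[];
}

function assertBacklinkProfile(value: unknown): BacklinkProfile {
  const v = value as Partial<BacklinkProfile>;
  if (
    typeof v?.domain !== "string" ||
    typeof v?.referringDomains !== "number" ||
    !Array.isArray(v?.backlinks) ||
    v.backlinks.some(
      (b) => typeof b?.url !== "string" || typeof b?.anchor !== "string",
    )
  ) {
    throw new Error("get_backlink_profile: response shape drifted");
  }
  return v as BacklinkProfile;
}
```

The point is where the check lives: at each tool boundary, so every consumer of the tool inherits the guarantee.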
The second thing is audit logging. The server was built to serve Claude, and it worked. But for six months there was no structured record of which tools were called, with what parameters, and what the response looked like. Adding that early would have made the agency's reporting more defensible and would have surfaced the data to optimize prompt caching hit rates faster. Both of these are low-effort additions that have outsized value over the life of the server. Wire them in at the start, not after you notice you need them.
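Wired in at the start, audit logging is one wrapper applied at tool registration. A sketch, with the record shape and in-memory sink as assumptions (a real version would append to a file or table):

```typescript
// Structured audit logging wrapped around every tool call. One wrapper,
// applied at registration time, covers all tools. Record shape is illustrative.

type Handler = (params: Record<string, unknown>) => Promise<unknown>;

interface AuditRecord {
  tool: string;
  params: Record<string, unknown>;
  ok: boolean;
  ms: number;
  at: string;
}

const auditLog: AuditRecord[] = []; // real version: durable file or table

function withAudit(tool: string, handler: Handler): Handler {
  return async (params) => {
    const start = Date.now();
    try {
      const result = await handler(params);
      auditLog.push({
        tool, params, ok: true, ms: Date.now() - start,
        at: new Date().toISOString(),
      });
      return result;
    } catch (err) {
      auditLog.push({
        tool, params, ok: false, ms: Date.now() - start,
        at: new Date().toISOString(),
      });
      throw err; // record the failure, then let the caller see it
    }
  };
}
```

The same log doubles as the dataset for tuning cache hit rates: which prompts run, how often, and with what variable payload.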
Who this pattern applies to
The economics here are straightforward. If you are running 10 to 30 client accounts with a shared data layer, meaning everyone's accounts live in the same Ahrefs workspace, the same Google Ads manager, the same internal CRM, an MCP server scales with the account count in a way that isolated scripts do not. Adding a new account means one new row in a config table, not a new set of integrations.
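"One new row in a config table" can be made concrete. The field names below are illustrative, but the shape is the point: onboarding an account is a data change, not an integration project:

```typescript
// Account config that drives the weekly pipeline. Fields are illustrative.

interface AccountConfig {
  accountId: string;
  domain: string;
  competitors: string[];
  active: boolean;
}

const accounts: AccountConfig[] = [
  {
    accountId: "acme",
    domain: "acme.example",
    competitors: ["rival.example"],
    active: true,
  },
];

// Adding account 21 is one new row, not a new set of integrations:
accounts.push({
  accountId: "new-client",
  domain: "newclient.example",
  competitors: ["comp-a.example", "comp-b.example"],
  active: true,
});

function activeAccounts(configs: AccountConfig[]): AccountConfig[] {
  return configs.filter((c) => c.active);
}
```

The scheduled audit job iterates `activeAccounts(accounts)`; every tool call downstream picks up the domain and competitor list from the row.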
In-house marketing teams at companies with proprietary internal data are a similar fit. If your AI use cases need to combine a public tool like Ahrefs with an internal system that has no public MCP server, the only path is custom. And if you have three or more planned AI use cases that will share that data, the 2-3 week investment in the MCP layer pays back before the third use case ships. See when a custom MCP server is worth building for the full decision framework. The single-use-case situation is the exception where a script or direct API call is genuinely the right call. Three use cases or more, build the server.
Want to size the MCP layer for your data? Run the AI Operations X-Ray.
Frequently asked questions
- Why not just write API scripts instead of an MCP server?
- Scripts are throwaway. An MCP server turns your data sources into tools Claude can pick from across any use case. The reuse is where the payback is.
- What data sources fed into the server?
- Ahrefs for backlinks and keywords, Google Ads for paid-side competitor data, and internal client-project data. All exposed as Claude-callable tools through one MCP interface.
- How big is a typical agency-side MCP server?
- Small. A few hundred lines of TypeScript or Python. The complexity is in the auth and the data-model work, not the code volume.
- Did prompt caching matter here?
- Yes. Audit prompts share a large system context across every account. Prompt caching cut Claude API costs enough to keep the run cost under $150 a month across 20 accounts.
- Can a 2-person agency pull this off?
- Yes if you have 3+ planned AI use cases that share data. For a single use case the MCP layer is overkill. The payback is in the reuse.
Related reading
- Your First Custom MCP Server - When It Is Worth Building vs Using an Off-the-Shelf One
The official Anthropic MCP registry has servers for Slack, GitHub, Google Drive, and dozens more. Custom only makes sense in specific cases. Here are the 5 signals that custom is the right call.
- MCP vs REST APIs vs Function Calling - Which Abstraction to Reach For
Three abstractions expose data and actions to an LLM. They look interchangeable. They are not. Here is the decision tree.