TL;DR
- Nobody reads their GA4 dashboard on a Monday — not really. Raw analytics without context is noise. The problem is the artifact, not the data.
- I built an agent that knows my site, my ICP, my offer, and my voice. Every Monday it emails me a one-page take on the week's analytics. Not a chart. A decision.
- The real cost isn't the AI. It's the plumbing — the accounts, permissions, and tool-specific limits nobody warns you about. Demos skip all of it.
- n8n self-hosted on Docker won over Claude Routines — six nodes, glass-box debugging, direct Gmail send, no drafts step.
- I tested with DeepSeek because it was only my own analytics — no customer data. A startup routing real data can't use DeepSeek (GDPR, Chinese-hosted). Claude API is the only serious option for anything client-facing, and it also produces better output.
The problem with analytics
Nobody really reads their GA4 dashboard on a Monday. (GA4 = Google Analytics 4, the free dashboard almost every website runs. It tracks who visited, where they came from, what they clicked.) People glance at it, close the tab, promise to dig in "later." Not because they don't care — because they don't have time. They're juggling five other things. The data is there. The question is: what do you do with it?
Raw analytics without context is noise. Sessions up 47%? Good or bad? Search impressions down 79%? Crisis or correction? Two new contacts? Qualified or your mom? You don't know until someone who knows your ICP, your content, and your goals looks at the week and tells you what to do.
That "someone" is usually a €1,500/month fractional analyst, a €75K/year growth hire — or nobody. For most early-stage companies it's nobody, and the dashboard stays a graveyard.
I'm an AI GTM builder. I should be the exception. I wasn't. I had GA4 installed, HubSpot wired up, Search Console verified — and I still wasn't reading it weekly. So I built an agent to do it for me.
What I actually wanted
Not a dashboard. An analyst in my inbox.
An agent that has context on:
- Your site — the pages, the posts, the conventions you use
- Your ICP — who you sell to, their pains, the queries they'd actually search
- Your offer — what you sell, how you package it, your pricing logic
- Your content voice — the tone and vocabulary that signals you "get" the reader
Every Monday morning, this kind of agent reads the week's GA4 + Search Console + CRM data, filters it through everything above, and emails you a one-page take. Not a chart. Not a dashboard. A decision you can act on in 30 seconds.
What it actually sends
A one-page email, every Monday at 7:30 Berlin, with six sections:
- Headline — the week's story in one sentence. What's up, what's down, what matters.
- Numbers — sessions, users, pageviews, GSC impressions, avg position, meetings booked, new contacts. Each with week-over-week (WoW) % change.
- What changed — 2-4 specific bullets naming actual queries, pages, and sources. Not "traffic grew" but "direct traffic grew because X, search visibility dropped because Y."
- AEO opportunities — 3 queries your ICP would actually type. Not "best CRM software." Queries the agent picks because it has your ICP context embedded.
- Content suggestions — 3 follow-up ideas tied to real URLs already on your site. Grounded in your existing content, not hand-wavy "write about X."
- Fix-its — 1-3 concrete problems visible in the data. Referrer spam inflating numbers. "(not set)" landing pages. Conversion tracking gaps. The stuff a good analyst would flag immediately.
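Under the hood, those six sections are an output contract baked into the weekly prompt. A minimal sketch of what that contract can look like, written as a JavaScript constant; the wording and the name OUTPUT_SPEC are illustrative, not my production prompt:

```javascript
// Illustrative output spec appended to the ICP prompt (names hypothetical).
const OUTPUT_SPEC = `
Return a single HTML email with exactly six sections:
1. Headline: the week's story in one sentence.
2. Numbers: sessions, users, pageviews, GSC impressions, avg position,
   meetings booked, new contacts; each with week-over-week % change.
3. What changed: 2-4 bullets naming actual queries, pages, and sources.
4. AEO opportunities: 3 queries my ICP would actually type.
5. Content suggestions: 3 follow-up ideas tied to URLs already on my site.
6. Fix-its: 1-3 concrete problems visible in the data.
Return raw HTML only. No markdown fences.`;
```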
The first time I read one of these, I closed my GA4 tab. I haven't opened it since.
Now imagine this for your team
Plug this shape into any marketing or GTM team's stack and the Monday meeting changes entirely.
- No "has anyone looked at the numbers this week?"
- No scrambling to pull slides five minutes before the stand-up
- A concrete action list lands in everyone's inbox at 7am: what to write, what to fix, where to push
- Marketing shows up with an answer to "what did last week tell us?"
- GTM shows up with "here's where to focus this week" already scoped to three decisions
The agent becomes a teammate. Asynchronous, tireless, context-aware. It doesn't replace a real analyst — it replaces the absence of an analyst, which is what most early-stage companies actually have. A dashboard nobody reads isn't analytics. It's a graveyard with numbers on it.
How I built it — three wrong turns, one right answer
Three paths I considered:
- Claude Routines — Anthropic's new scheduled cloud agents, research preview. Shiny. Ran in their cloud. Seemed perfect.
- n8n self-hosted on Docker — my own container, my own rules. Node-based workflows. Proven but less sexy.
- Local cron on my laptop — cheapest, but needs the laptop on. Defeats the point.
I picked Claude Routines first. For the same reason anyone picks the new shiny thing.
Eleven new credentials before any code ran
I had to stand up eleven things I didn't have at breakfast:
- Google Cloud Platform project, billing account linked
- APIs enabled (Cloud Run, Analytics Data, Search Console)
- Service account with its own JSON key file
- GA4 property access for the service account (Viewer role)
- Search Console access for the service account (Full user, different UI, same account)
- Cloud Run function deployed with three environment variables (one of them the entire JSON key as a string)
- Claude cloud trigger, configured via Anthropic's API because the UI is still research preview
- GitHub public repo to act as a bridge
- GitHub Actions workflow with a repo secret
- n8n credential for DeepSeek API (and later Claude API)
- Two new n8n workflows
The plan was to do this fast; it still took over an hour. Every account, every credential, every permission had its own UI, auth model, and quirk. If you're a founder who has never touched GCP and is juggling meetings, triple that estimate.
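For reference, the Cloud Run function at the center of this is small. A minimal Node.js sketch, assuming the @google-cloud/functions-framework and @google-analytics/data packages; the env var names are placeholders, and Search Console follows the same pattern against its own API:

```javascript
// Minimal Cloud Run sketch (Node.js). Hypothetical env vars: GA4_PROPERTY_ID,
// and SA_KEY_JSON holding the entire service-account key as one string.
const functions = require('@google-cloud/functions-framework');
const { BetaAnalyticsDataClient } = require('@google-analytics/data');

const analytics = new BetaAnalyticsDataClient({
  credentials: JSON.parse(process.env.SA_KEY_JSON),
});

functions.http('weeklyData', async (req, res) => {
  // Last 7 days of core GA4 metrics; Viewer role on the property is enough.
  const [report] = await analytics.runReport({
    property: `properties/${process.env.GA4_PROPERTY_ID}`,
    dateRanges: [{ startDate: '7daysAgo', endDate: 'yesterday' }],
    dimensions: [{ name: 'sessionDefaultChannelGroup' }],
    metrics: [
      { name: 'sessions' },
      { name: 'totalUsers' },
      { name: 'screenPageViews' },
    ],
  });
  // Search Console and CRM numbers get merged into the same JSON response.
  res.json({ ga4: report.rows });
});
```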
The roadblocks
Claude's sandbox won't talk to my own server. Its egress proxy blocks custom domains. *.run.app isn't on the allowlist. The agent literally couldn't fetch from the Cloud Run function I'd just built. Documented limitation, by design, not a bug. Anthropic has explicitly chosen a Human-in-the-Loop (HITL) stance for cloud agents — sandboxes block arbitrary network access, and any action with real-world effect needs a human in the approval chain. The egress allowlist is that principle at the infrastructure level. Workaround: GitHub Actions as a bridge — fetches Cloud Run, sanitizes PII, commits to a repo Claude can reach.
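The bridge is a short GitHub Actions workflow. A sketch under assumptions: the cron, the secret name, the file name, and the jq one-liner are stand-ins for the real fetch-and-sanitize step:

```yaml
# .github/workflows/bridge.yml, hypothetical names throughout.
name: analytics-bridge
on:
  schedule:
    - cron: '0 5 * * 1'   # Monday 05:00 UTC, ahead of the 07:30 Berlin email
jobs:
  fetch-and-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Fetch from Cloud Run and scrub PII
        run: |
          curl -s "${{ secrets.CLOUD_RUN_URL }}" \
            | jq 'del(.contacts)' > weekly.json   # stand-in for the real scrub
      - name: Commit where Claude can reach it
        run: |
          git config user.name "analytics-bot"
          git config user.email "bot@users.noreply.github.com"
          git add weekly.json
          git commit -m "weekly analytics snapshot" || true
          git push
```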
Gmail MCP can't actually send. Only create_draft. Same HITL principle, one layer up — agents draft, humans approve the Send. Reasonable in theory, but it meant my "zero-touch" pipeline required one touch.
Four failed runs later, I had a draft in my drafts folder. Pipeline worked. Human approval still required.
The pivot: n8n does the whole thing
I rebuilt the pipeline in n8n. Same ICP prompt, different infrastructure. Told Claude Code to wire it up — it had full context from the previous hours of work. One prompt.
Six nodes:
- Schedule trigger (Monday 07:30 Berlin)
- HTTP GET → my Cloud Run URL (n8n has no allowlist — direct fetch works)
- Code node → build the LLM API payload with the ICP prompt inline (sketched after this list)
- HTTP POST → the LLM endpoint
- Code node → strip any markdown fences the model adds (also sketched below)
- Gmail node → send as HTML directly to my inbox
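Those two Code nodes are the only real code in the workflow. Sketches of each, assuming the Claude Messages API request and response shapes; the prompt constant and model id are placeholders:

```javascript
// Code node 3: build the LLM API body. Model id is a placeholder.
const week = $input.first().json;   // output of the HTTP GET node
const ICP_PROMPT = '...full site/ICP/offer/voice prompt inline...';
return [{
  json: {
    model: 'claude-sonnet-4-6',
    max_tokens: 2000,
    messages: [{
      role: 'user',
      content: `${ICP_PROMPT}\n\nThis week's data:\n${JSON.stringify(week)}`,
    }],
  },
}];
```

```javascript
// Code node 5: strip the markdown fences the model sometimes wraps around
// its HTML, assuming the Messages API response shape from the POST node.
const raw = $input.first().json.content[0].text;
const html = raw.replace(/^```(?:html)?\s*/i, '').replace(/\s*```\s*$/, '');
return [{ json: { html } }];
```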
Here's the part that still feels slightly surreal: I didn't touch the n8n UI once. Claude built the whole workflow through n8n's REST API from my terminal — generated the workflow JSON, POSTed it, activated it. Two curl commands, both issued by Claude. I just watched. Email arrived 30 seconds later. Properly formatted. No draft step. No click.
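For the curious, those two calls look roughly like this, assuming n8n's public REST API and an API key; the host, the key, and workflow.json are stand-ins:

```bash
# 1. Create the workflow from the JSON Claude generated
curl -X POST "http://localhost:5678/api/v1/workflows" \
  -H "X-N8N-API-KEY: $N8N_API_KEY" \
  -H "Content-Type: application/json" \
  -d @workflow.json

# 2. Activate it (the id comes back in the create response)
curl -X POST "http://localhost:5678/api/v1/workflows/<id>/activate" \
  -H "X-N8N-API-KEY: $N8N_API_KEY"
```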
Five minutes. Zero failures. Glass box — every node shows input and output.
The A/B test: DeepSeek vs Claude API
Cheapest way to validate a workflow: run it on DeepSeek ($0.001/run). That's how I tested.
One important caveat before anyone copies this: I was only piping my own site's Google Analytics through the model. No customer data, no PII, no deal info. That's the one context where DeepSeek makes sense for me — a personal, low-stakes, non-regulated project. For a startup routing customer data, HubSpot contacts, or anything GDPR-adjacent through a Chinese-hosted AI service, your DPO and legal team will have words. Not an option. DeepSeek is a personal-project tool, not a business one.
For anything client-facing or regulated, Claude API (either direct or via AWS Bedrock in Frankfurt for full EU residency) is the serious answer. Which is exactly why the production version of this pipeline runs on Claude Sonnet 4.6.
Then I ran both side by side. Same JSON, same ICP prompt, same HTML output format.
Claude Sonnet 4.6 won, clearly. Sharper voice. AEO queries that actually pass the "would my ICP type this into Google or ChatGPT?" test. Content suggestions grounded in specific blog URLs instead of hand-wavy "write about X." The email read like something I'd actually send.
Cost goes up ~20× swapping to Claude. Quality goes up more. For the one email a week that shapes how I spend the next seven days, easy trade.
Pattern stays model-agnostic: DeepSeek for "does the plumbing work?" Claude API for "does the output meet the bar?" The plumbing is where the engineering hours go; the model is a one-line swap.
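Concretely, the swap is one URL and one model field. A sketch assuming DeepSeek's OpenAI-compatible chat endpoint and the Claude Messages API; both model ids are placeholders:

```javascript
// The messages array is built exactly as in Code node 3.
const messages = [{ role: 'user', content: '...ICP prompt + weekly JSON...' }];

// Plumbing validation: DeepSeek's OpenAI-compatible endpoint.
const dev = {
  url: 'https://api.deepseek.com/chat/completions',
  headers: { Authorization: `Bearer ${process.env.DEEPSEEK_API_KEY}` },
  body: { model: 'deepseek-chat', messages },
};

// Production: Claude Messages API. Different auth header, same messages.
const prod = {
  url: 'https://api.anthropic.com/v1/messages',
  headers: {
    'x-api-key': process.env.ANTHROPIC_API_KEY,
    'anthropic-version': '2023-06-01',
  },
  body: { model: 'claude-sonnet-4-6', max_tokens: 2000, messages },
};
```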
What else you can build with this pattern
Once you have the shape — data source → LLM with your context → Gmail/Slack/Notion — every weekly report becomes automatable:
- Monday: GA4 + GSC + HubSpot digest (this one)
- Tuesday: competitor blog scan + one take per post in your voice
- Wednesday: HubSpot deal stage review with stuck-deal flags
- Thursday: LinkedIn post-performance + 3 new post angles
- Friday: newsletter subscriber growth + content gaps
Each one is roughly the same 6-node n8n workflow. Different API, different prompt. The hard part is done once.
The privacy gotcha I almost shipped past
Late in the build I noticed that the repo GitHub Actions committed to was public. I'd stripped PII, but the sanitized file still had aggregate traffic numbers, top GSC queries I'm ranking for, meetings booked, new contacts count, and referrer data including my dev server URL.
Not catastrophic — but enough for a competitor to infer pipeline velocity and SEO targets. I made the repo private. The Claude Routines path broke as a consequence (raw URLs on private repos need auth; WebFetch can't pass headers). Trade-off on purpose. n8n is the live pipeline now.
The bottom line
Three takeaways:
- Analytics without context is noise. The dashboard isn't broken — it was never the right artifact. A one-page take from an agent that knows your site + ICP + content is.
- Demo uses one tool. Production uses five. Expect bridges, sanitization, privacy checks. The plumbing is where you earn the automation.
- Pick the right model for the moment. DeepSeek for plumbing, Claude API for output that lands with a human.
The stack costs whatever Claude API tokens I burn — n8n self-hosted on Docker, Cloud Run in the free tier, GitHub Actions free, GCP/Gmail free. The output is the kind of analyst take I would have paid €1,500/month for a human to produce. It lands in my inbox every Monday, whether my laptop is open or not.
That's the shift. Analytics stopped being a tab I avoid and started being a channel that works for me.