How I Built an AI Agent System to Manage PPC Campaigns

By Skylar Martinez

ai-agents · ppc · google-ads · automation · langgraph

Last year I was managing PPC for multiple clients. Same story every week: export data, build reports, spot issues, make recommendations, repeat. Hours of work that felt like it should be automated.

So I built something to do it for me.

Not a simple script. Not a rules-based automation. An actual AI agent system — one that reasons about the data, catches problems I'd miss, and generates insights that actually matter.

I call it ScaleSearch. Here's how it works.

The Problem With Traditional PPC Automation

Google Ads has built-in automation. Automated bidding. Automated rules. Performance Max campaigns that supposedly "optimize themselves."

But here's what they don't do:

  • Connect the dots across your account. They optimize in silos.
  • Explain why something happened. You get numbers, not insights.
  • Catch subtle issues. They react to thresholds, not patterns.
  • Generate actual recommendations. They adjust bids, not strategy.

I needed something that could look at an account the way I would — holistically, contextually, and with actual reasoning.

Why AI Agents (Not Just AI)

There's a difference between using AI and building AI agents.

Using AI: You paste data into ChatGPT and ask for analysis. Works, but manual. You're still the orchestrator.

Building agents: You create systems that orchestrate themselves. They decide what to look at, what tools to use, and how to respond to what they find.

Agents can:

  • Loop until they have enough information
  • Call tools (APIs, databases, code execution)
  • Hand off to other agents with different specializations
  • Remember context across runs

That's what PPC management actually needs — not one-shot analysis, but an ongoing system that watches, learns, and acts.
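The loop-call-remember pattern above can be sketched in a few lines. This is purely illustrative: `decide_next_action` stands in for an LLM call, and none of these names come from ScaleSearch's actual code.

```python
# Minimal agent loop sketch. decide_next_action is a stub standing in for
# the model; in a real agent it would be an LLM deciding which tool to call.

def decide_next_action(context, tools):
    # Stub: fetch data once, then declare we have enough information.
    if len(context) == 1:
        return {"name": "fetch_metrics", "args": {"campaign": "Brand"}}
    return {"name": "finish", "answer": f"Analyzed {len(context) - 1} tool result(s)"}

def run_agent(question, tools, max_steps=5):
    """Loop: pick a tool, call it, remember the result, repeat until done."""
    context = [question]
    for _ in range(max_steps):
        action = decide_next_action(context, tools)
        if action["name"] == "finish":
            return action["answer"]
        result = tools[action["name"]](**action["args"])
        context.append(result)  # context carries across iterations
    return "Stopped: step budget exhausted"
```

The `max_steps` budget matters — it's the difference between an agent that terminates and one that wanders (more on that below).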

The ScaleSearch Architecture

Here's the high-level structure:

┌─────────────────────────────────────────────────────────────┐
│                   CLIENT DELIVERY DIRECTOR                   │
│         (Quality gate — critiques against standards)         │
└─────────────────────────────────────────────────────────────┘
                              ▲
                              │ feedback loop
                              ▼
┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐
│    DATA TEAM    │ → │  ANALYSIS TEAM  │ → │  ACCOUNT TEAM   │
│  Load & clean   │   │  Find insights  │   │ Format & verify │
└─────────────────┘   └─────────────────┘   └─────────────────┘
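The control flow of that diagram — three teams in sequence, with the Director sending work back until it passes — looks roughly like this. The real system wires this up in LangGraph; here it's plain Python with stub team functions, so the feedback loop is the only thing being shown.

```python
# Sketch of the pipeline's control flow. Team functions are deliberately
# trivial stubs; the point is the Director's revise-until-A-grade loop.

def data_team(raw):
    # Load & clean: drop rows with missing cost.
    return [row for row in raw if row.get("cost") is not None]

def analysis_team(rows, feedback):
    # Find insights; a revision pass (feedback present) digs deeper.
    detail = "detailed" if feedback else "draft"
    return f"{detail} insights on {len(rows)} row(s)"

def account_team(insights):
    # Format & verify for human consumption.
    return f"REPORT: {insights}"

def run_pipeline(raw_data, grade_report, max_revisions=3):
    feedback = None
    report = None
    for _ in range(max_revisions):
        clean = data_team(raw_data)
        insights = analysis_team(clean, feedback)
        report = account_team(insights)
        grade, feedback = grade_report(report)  # the Director's critique
        if grade == "A":
            return report
    return report  # best effort after max revisions
```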

Data Team

Job: Get the raw data into a usable format.

  • Pulls from Google Ads API (or CSV exports for now)
  • Normalizes schemas (because Google's data structures are... inconsistent)
  • Aggregates by the dimensions that matter (campaign, date, geo, etc.)

No analysis here. Just clean data.
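As a concrete sketch of what "normalize and aggregate" means here: Google Ads reports cost in micros, and export column names shift between report types. The field names below are illustrative, not the actual schema ScaleSearch uses.

```python
# Data Team sketch: map raw export rows onto one schema, then aggregate.

from collections import defaultdict

def normalize_row(row):
    """Map one raw export row onto a consistent schema."""
    return {
        "campaign": row["Campaign"].strip(),
        "date": row["Day"],
        "cost": float(row["Cost"]) / 1_000_000,  # micros -> currency units
        "conversions": float(row["Conversions"]),
    }

def aggregate(rows, key="campaign"):
    """Sum cost and conversions by the given dimension; derive CPA."""
    totals = defaultdict(lambda: {"cost": 0.0, "conversions": 0.0})
    for row in rows:
        bucket = totals[row[key]]
        bucket["cost"] += row["cost"]
        bucket["conversions"] += row["conversions"]
    for bucket in totals.values():
        conv = bucket["conversions"]
        bucket["cpa"] = bucket["cost"] / conv if conv else None
    return dict(totals)
```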

Analysis Team

Job: Find what matters in the data.

This is where the AI reasoning happens:

  • Trend analysis (what's changing and why)
  • Anomaly detection (what's unexpected)
  • Strategic recommendations (what to do about it)

The key insight: Ask specific questions, get specific answers.

Instead of "analyze this data," I prompt with:

  • "What campaigns have CPAs more than 20% above average? Why might that be?"
  • "What changed week-over-week that explains the conversion drop?"
  • "What's the single most impactful optimization we could make?"

Account Team

Job: Package insights for human consumption.

  • Formats everything into readable reports
  • Ensures recommendations have specific next steps
  • Verifies numbers are accurate before they go out
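The "verify numbers" step can be as simple as tracing every figure quoted in the report back to the source data. This check is my own sketch of the idea, not ScaleSearch's code:

```python
# Flag any number in the report text that doesn't match a source value.

import re

def unverified_numbers(report, source_values, tolerance=0.01):
    """Return numbers quoted in the report that trace to no source value."""
    quoted = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", report)]
    return [
        n for n in quoted
        if not any(abs(n - v) <= tolerance for v in source_values)
    ]
```

Anything this returns is a candidate hallucination, and the report gets bounced back before a client ever sees it.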

The Director (Quality Gate)

Job: Make sure everything is actually good.

This agent critiques the work against a quality standard:

  • Are insights specific enough?
  • Do recommendations have timelines?
  • Is the executive summary actually executive-level?

If it's not A-grade, it sends feedback and the teams iterate.

This feedback loop is crucial. The first draft is rarely great. The third draft, after targeted critique, usually is.
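A toy version of the Director's rubric makes the idea concrete. In the real system an LLM grades drafts against the quality standard; here the checks are crude heuristics of my own, just to show the shape of a gate that returns actionable feedback instead of a pass/fail:

```python
# Quality-gate sketch: grade a report against a rubric, return feedback.

import re

def critique(report):
    """Return (grade, issues). Grade is 'A' only if every check passes."""
    issues = []
    if not re.search(r"\d", report):
        issues.append("Insights lack specific numbers.")
    if not re.search(r"\bby\s+(?:\d{4}-\d{2}-\d{2}|next week|Friday)", report, re.I):
        issues.append("Recommendations have no timeline.")
    if len(report.split("\n", 1)[0].split()) > 40:
        issues.append("Executive summary is too long to be executive-level.")
    return ("A", []) if not issues else ("revise", issues)
```

The teams receive the `issues` list as feedback and revise — that's the loop that turns mediocre first drafts into solid third drafts.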

The Tech Stack

  • LangGraph: For agent orchestration and state management
  • Claude: For reasoning (Opus for complex analysis, Sonnet for simpler tasks)
  • Prefect: For scheduling and workflow management
  • Google Sheets: For output (clients live in spreadsheets)
  • Python: Because it's the glue language for everything

The total system is maybe 2,000 lines of Python. Not a massive codebase — just well-structured agents with clear responsibilities.

What I Learned Building This

1. Agents need constraints

Unconstrained agents wander. They'll explore tangents, over-analyze, and never finish.

The fix: Give them specific questions to answer and clear definitions of "done."

2. Quality gates change everything

Before adding the Director agent, output quality was inconsistent. Some runs were great, some were mediocre.

Adding a critic that sends work back for revision was the single biggest quality improvement. The system now self-corrects.

3. Start with the output, work backwards

I designed the final report format first — what insights would be most valuable? What format works for busy clients?

Then I built agents to produce exactly that. Output-driven design.

4. Human-in-the-loop is still essential

This system generates recommendations. It doesn't execute them automatically.

That's intentional. AI can analyze and suggest. Humans should approve and act. At least for now.

Results

ScaleSearch now runs daily for multiple accounts. What used to take me 3-4 hours per week per client now takes about 15 minutes of review.

More importantly, it catches things I'd miss. Patterns in the data that don't show up unless you're looking at the right angle at the right time.

Is it perfect? No. Sometimes the insights are surface-level. Sometimes it hallucinates a trend that isn't there.

But it's good enough to be genuinely useful — and getting better with every iteration.

What's Next

I'm working on:

  • Automated screenshots of actual dashboards for visual reporting
  • Slack integration for real-time alerts
  • Multi-account intelligence — patterns across clients, not just within them

The goal isn't to replace human PPC expertise. It's to augment it — handling the tedious analysis work so humans can focus on strategy and creativity.

Want to Build Something Like This?

I put together a blueprint that covers the architecture, starter code, and prompts I use. It's free — grab it below.

And if you want help implementing this for your accounts, let's talk.


Join The Signal — my weekly newsletter on AI, PPC, and building systems that scale.

Subscribe to The Signal →
