Thought Leadership

Case Study: How One Algo Rebuilt Google's $15M Nurture Engine While It Was Still Running

February 19, 2026

As a global nurture programme scaled, spreadsheets and manual QA slowed delivery. NurtureOps restored clarity, speed and control.

The Problem

A high-performing program held together by spreadsheets, email threads, and heroic effort.

The Solution

An Evolved Worker who taught herself to code and shipped the infrastructure Google's team actually needed.

The Impact

Faster builds, zero version errors, and the capacity to triple in scale without adding headcount.

The Situation

Liane Farrant was managing one of Google's highest-visibility marketing programs. 

It spanned twelve languages, four regions, and five product streams, which meant more than 600 email versions refreshed every quarter.

The program was performing well, with high engagement, low unsubscribes, and $15 million in pipeline already generated. Leadership loved it and other teams wanted in.

But none of them could see what the internal team felt. Operationally, the team was drowning.

"It was a mess of documents," Liane says. "Massive spreadsheets. Email threads all over the place. Chat pings. Different meetings with different people where I wasn't always in the room. I was missing critical information for the build."

The program was succeeding DESPITE the process, not because of it. 

And it was about to break.

The Problem: When Your Process Can't Scale With Your Success

Managing the program meant living inside a maze of spreadsheets. One massive sheet tracked asset links. Another managed QA. A third handled approvals. 

To send a single test email, a marketer had to open a spreadsheet with hundreds of rows, find the right version (English vs. German, product stream 3 or 4, and so on), and hope they got every detail right.

And that was the EASY part.

Every quarter, Liane and her project manager spent hours manually attaching copy doc links, chasing content managers across time zones, and appending UTMs with spreadsheet formulas that broke when someone edited the wrong cell. 

Meanwhile, critical decisions were happening in meetings Liane wasn't part of, or in chat threads that disappeared into the void. Context got lost, versions drifted, and changes happened silently.

"It was clunky," Liane says. "But it's what we had."

Then three new teams wanted to onboard: GWS, SMB, and Startup. Under the old model, that meant creating three more copies of the master spreadsheet, three more manual processes, and three more chances for things to break.

What this was costing

Time: Hours every week coordinating instead of building or optimizing.

Risk: With 600+ versions scattered across dozens of docs, one missed detail could send the wrong content to millions of people.

Speed: The QA process alone required a separate massive spreadsheet just to track status. Knowledge of how to run the process became increasingly siloed and difficult to transfer to new team members.

Scale friction: Onboarding a new team meant duplicating the entire infrastructure and hoping nothing broke.

As Liane put it: "If I hadn't built this, I'd be scrambling to manage all of this across the different teams. I would have to create probably three more spreadsheets just to keep it from getting even more overwhelming."

The program was succeeding despite the process, not because of it.

The Solution: One Algo Rebuilt the Infrastructure

Liane didn't wait for someone else to fix this. She didn't wait for Google to assign an engineering team or for a vendor to build a custom solution. She taught herself how to code alongside Gemini, built a web app, and shipped it in weeks.

The goal wasn't to add features. It was to create operational clarity at scale. 

Her design principles were straightforward: create one source of truth for content, build status, QA, and approvals. Control state changes through the system instead of relying on manual discipline. Make progress visible without extra meetings or status requests. And support scale, so adding more streams, more teams, and more versions doesn't multiply complexity.

The result was a role-based workflow app that replaced the spreadsheet chaos with structured execution.

How It Works

1. Role-Based Views (Everyone Sees What They Need)

Different users see different controls based on their role. Admins create program shells, set deadlines, and manage users. Marketers edit copy, submit for build, review finals, and request changes. Architects handle QA workflows, link management, and status control. Content contributors provide structured input for promo modules. 

Users only see the programs relevant to them, so there's no clutter and no confusion about what's theirs.

Liane can then assign workspace access on a per-program basis, so marketers working on one nurture don't see unrelated content clogging their dashboard.
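
As a rough illustration of the idea (not the app's actual code), role-based permissions and per-program visibility could be modelled like this; the permission keys and data shapes below are assumptions, not the app's real schema.

```typescript
// Illustrative sketch only: the role names come from the case study, but the
// permission keys and data shapes are assumptions, not the app's real schema.
type Role = "admin" | "marketer" | "architect" | "contributor";

interface Permissions {
  createPrograms: boolean;     // admins: program shells, deadlines, users
  editCopy: boolean;           // marketers: edit, submit for build, approve
  runQa: boolean;              // architects: QA workflows, links, status control
  submitPromoContent: boolean; // contributors: structured promo input
}

const PERMISSIONS: Record<Role, Permissions> = {
  admin:       { createPrograms: true,  editCopy: false, runQa: false, submitPromoContent: false },
  marketer:    { createPrograms: false, editCopy: true,  runQa: false, submitPromoContent: false },
  architect:   { createPrograms: false, editCopy: false, runQa: true,  submitPromoContent: false },
  contributor: { createPrograms: false, editCopy: false, runQa: false, submitPromoContent: true },
};

// Per-program workspace access: a user only ever sees the programs assigned to them.
function visiblePrograms(assignedProgramIds: Set<string>, allProgramIds: string[]): string[] {
  return allProgramIds.filter((id) => assignedProgramIds.has(id));
}
```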

2. Program Shells Auto-Generated

When Liane creates a new program, the system automatically generates a consistent structure: email assets with language variants, a link tracking sheet, a program brief, and a program log. 

This removes setup friction and prevents last-minute additions.

"Marketers don't have permission to create or delete emails," Liane explains. "We're keeping this smooth so no one's adding emails last minute."

3. Controlled Submission Workflow

Marketers edit copy directly in the app. When finished, they click Submit for Build. This locks the asset to prevent uncontrolled edits, signals builders clearly without chats or email chasing, and forces any further changes through structured change requests. 

After the build team completes QA and marks the email as "Ready for Review," the marketer gets an Approve button. If they approve, it's done. If not, they submit a structured change request. 

No more silent edits or version confusion. The system enforces the workflow so the individuals don't have to remember it.
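
The workflow reads naturally as a small state machine. The sketch below is one way it could be modelled; the state names mirror the case study, but the transition rules and the code itself are assumptions rather than the app's implementation.

```typescript
// A minimal sketch of the submission workflow as a state machine. The state
// names mirror the case study; the transitions and code are assumptions.
type AssetState =
  | "draft"
  | "submitted_for_build"
  | "ready_for_review"
  | "approved"
  | "change_requested";

const TRANSITIONS: Record<AssetState, AssetState[]> = {
  draft: ["submitted_for_build"],                      // marketer clicks Submit for Build
  submitted_for_build: ["ready_for_review"],           // build team completes QA
  ready_for_review: ["approved", "change_requested"],  // marketer approves or requests changes
  change_requested: ["ready_for_review"],              // change is applied, back to review
  approved: [],                                        // terminal: no silent edits afterwards
};

function transition(current: AssetState, next: AssetState): AssetState {
  if (!TRANSITIONS[current].includes(next)) {
    throw new Error(`Invalid transition: ${current} -> ${next}`);
  }
  return next;
}

// Example: once submitted, the asset is locked; any further change has to
// travel through a structured change request rather than a quiet edit.
let state: AssetState = "draft";
state = transition(state, "submitted_for_build");
state = transition(state, "ready_for_review");
state = transition(state, "approved");
```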

4. Automated Link & UTM Management

Previously, UTMs were appended via manual spreadsheet formulas that broke easily. 

Now, marketers hyperlink URLs naturally in their copy, the system extracts those links automatically, generates UTMs from program inputs like quarter and program name, and architects push everything to the tracking sheet with one click. Result: no broken links, no missed UTMs, no repetitive admin work.
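
As a rough sketch, link extraction and UTM appending could look something like the following. The parameter names follow standard UTM conventions; how the real app composes values from quarter and program name is an assumption.

```typescript
// Hypothetical sketch of link extraction and UTM appending. The parameter
// names follow standard UTM conventions; the value format is an assumption.
interface ProgramInputs {
  quarter: string;      // e.g. "Q3-2026"
  programName: string;  // e.g. "gws-nurture"
}

// Pull every hyperlinked URL out of a block of email copy.
function extractLinks(html: string): string[] {
  return Array.from(html.matchAll(/href="([^"]+)"/g), (m) => m[1]);
}

// Append UTM parameters derived from the program inputs.
function withUtms(url: string, inputs: ProgramInputs): string {
  const u = new URL(url);
  u.searchParams.set("utm_medium", "email");
  u.searchParams.set("utm_source", "nurture");
  u.searchParams.set("utm_campaign", `${inputs.programName}-${inputs.quarter}`);
  return u.toString();
}

// An architect's one-click push could then map this over every extracted link
// before writing the results to the tracking sheet.
const copy = `<a href="https://example.com/product">Learn more</a>`;
const tracked = extractLinks(copy).map((link) =>
  withUtms(link, { quarter: "Q3-2026", programName: "gws-nurture" })
);
```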

5. Promo Content Syncing

Promo teams and customer story teams work in separate modules with their own deadlines. When content is ready, architects hit Sync Promo and it populates across all relevant email versions automatically. 

This prevents copy-paste errors and reduces coordination overhead between teams that aren't working in lockstep.
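
Conceptually, the sync is a single pass over the affected versions. The sketch below assumes each version exposes named promo slots, which is illustrative rather than the app's real data model.

```typescript
// Illustrative only: assumes each email version exposes promo slots keyed by
// module name; the app's real data model isn't described in the case study.
interface EmailVersion {
  id: string;
  language: string;
  promoSlots: Record<string, string>; // module name -> approved content
}

// "Sync Promo" writes approved module content into every relevant version in
// one pass, instead of someone copy-pasting it across hundreds of documents.
function syncPromo(
  versions: EmailVersion[],
  moduleName: string,
  content: string
): EmailVersion[] {
  return versions.map((v) => ({
    ...v,
    promoSlots: { ...v.promoSlots, [moduleName]: content },
  }));
}
```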

6. Version Control + Editor Locking

Early on, a real problem appeared: two people editing simultaneously caused content overwrites. 

The fix combined a forced refresh when opening emails so the latest version always loads, an editing lock that blocks others with a clear "X is editing this email" message, and full version history that allows instant rollback.

While this sounds like a "nice feature", for the team executing the builds it was a key infrastructure upgrade that prevents data loss.
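
For the technically curious, the locking and rollback behaviour might be modelled roughly like this simplified, in-memory sketch; how the app actually persists versions and enforces the lock isn't covered in the case study.

```typescript
// Simplified in-memory sketch of locking and rollback; persistence and
// enforcement details are assumptions, not the app's actual implementation.
interface EmailRecord {
  content: string;
  lockedBy: string | null; // surfaces as "X is editing this email"
  history: string[];       // prior versions, for instant rollback
}

function openForEditing(record: EmailRecord, user: string): EmailRecord {
  if (record.lockedBy && record.lockedBy !== user) {
    throw new Error(`${record.lockedBy} is editing this email`);
  }
  // Force-refresh semantics: the caller always starts from the latest content.
  return { ...record, lockedBy: user };
}

function saveEdit(record: EmailRecord, user: string, newContent: string): EmailRecord {
  if (record.lockedBy !== user) {
    throw new Error("Asset is not locked by this editor");
  }
  return {
    content: newContent,
    lockedBy: null,                               // release the lock on save
    history: [...record.history, record.content], // keep the old version for rollback
  };
}

function rollback(record: EmailRecord): EmailRecord {
  const previous = record.history[record.history.length - 1];
  if (previous === undefined) return record; // nothing to roll back to
  return { ...record, content: previous, history: record.history.slice(0, -1) };
}
```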

7. Centralized Visibility

The app consolidates operational visibility into program logs showing status by email and version, global logs tracking all change requests and alerts, and asset management housing all uploads in one place. 

Instead of asking "where is this?" or "what's the status?", people just look. The system answers questions before they need to be asked.

The Impact

Operational wins

Build cycle time dropped significantly. What used to require days of coordination now happens in hours. 

Error reduction was immediate, with the locked editing feature alone preventing multiple overwrite incidents and version history providing instant recovery when mistakes did happen. 

Coordination overhead vanished. Team members no longer had to chase copy docs, approval confirmations, or add to the pile of "is this the latest version?" questions threading through comms channels. 

And onboarding capacity expanded. The team promptly launched three new programs without creating three new spreadsheet systems.

Program Growth

Increased operational capacity allowed the promising program to expand dramatically, without everything breaking. Audience grew from roughly 1 million to 7 million contacts. Multiple new teams onboarded with more in the pipeline. 

The program had already generated $15 million in pipeline, with projections to triple as new workstreams come online.

Qualitative Feedback

Leadership and project managers reported they "love it". Marketers said it's "so much easier to use than copy docs". Builders saw clear value in consolidated QA and workflow visibility. 

One team member, Zailyng (another Algo), actively helped improve the tool's architecture, providing feedback that led to immediate enhancements.

Perhaps most tellingly, Liane said:

"A lot of people are coming to me now to build apps for them." The success created demand for similar solutions across other workflows at Google.

What This Proves

This case study isn't about "an app". It's about what happens when you embed an Evolved Worker inside a team who can see operational drag and rebuild the system causing it.

Liane didn't just execute tasks inside the existing workflow. She identified that the workflow itself was the constraint, then rebuilt it from the ground up. She didn't wait for approval or resources. She learned how to code and shipped the solution the team actually needed.

Most marketing operations hit a ceiling not because of strategy, but because execution infrastructure can't keep pace. The spreadsheets multiply. The coordination compounds. The team slows down even as demand increases. Liane didn't optimize the spreadsheets; she replaced them with a system that controls how work flows.

The pattern: Identify the operational constraint. Build a structure that governs state changes instead of relying on discipline. Automate repetition. Make critical workflows visible. Scale by design, not by adding coordination layers.

This is what Algomarketing's Evolved Workers do. They don't just execute inside your workflows; they rebuild the workflows when the workflows are the problem. It's not done with consulting decks; it happens when you upgrade the production systems your teams use every day.

The measurable outcome: A program that was operationally fragile became operationally confident. Growth went from scary to manageable. And the team went from reacting to chaos to controlling it.

What's Next

The tool continues to evolve. Liane is building notification systems to reduce meeting overhead, velocity token generation to simplify dynamic content workflows, and API-driven metadata pulls to automate program documentation.

But the real story isn't about features. It's about the model. The real challenge for an engineering team scaling an application isn't writing the code; it's understanding the fundamentals, and the nuance, of the existing processes.

That disconnect in understanding is why so many projects fail. Because Liane built the app from her knowledge of the process and its nuances, the engineering team has the best possible start to keep scaling it from here.

This becomes possible when you deploy operators who can identify operational drag and eliminate it through scalable solutions. In this case, Google didn't just get "better execution"; they grew the program's capacity without it breaking and without needing to add headcount.

That's the difference between projects that scale and projects that stall. And that's what happens when you embed an Evolved Worker.

Author

Luke Crickmore
HEAD OF INNOVATION

Let’s talk about your next steps

Interested in how AI can elevate your team’s impact? Let’s discuss how we can help you achieve your goals.

Get in touch
