The backlash against AI in marketing might be the best thing that could happen to organizations using AI well. A claim like that will take some explaining because we're all hearing a lot of loud evidence against it. This year alone, we've seen several of the world's largest brands learn expensive lessons in public.
Coca-Cola recreated its beloved 30-year-old holiday ad using generative AI, and the internet responded with mockery. One YouTube comment with 30,000 likes called it "the most profitable commercial in Pepsi's history."
McDonald's Netherlands pulled an AI-generated Christmas campaign after viewers labeled it "AI slop" and said it "ruined Christmas spirits."
When Vogue ran a Guess ad featuring AI-generated models, public outrage followed. Readers threatened to cancel their subscriptions.
The headlines wrote themselves: consumers hate AI. The technology is not ready. Brands should stay away. But that diagnosis is wrong, and organizations that blindly accept it will miss what is actually happening.
What Consumers Are Actually Rejecting
The YouTube comments on these failed campaigns tell a more nuanced story than the headlines suggest.
Under the McDonald's ad, one comment with 3,500 likes reads: "Even if this wasn't made by AI, like... this vibe is actually horrible." Another, with nearly 1,000 likes: "Using AI to make an ad telling people to spend the holidays at McDonald's is something I'd expect to see in a novel about some kind of corporate dystopia." A third, more simply: "Gotta love a bad idea plus a bad execution."
Read those comments again. Not one of them complains about AI capability. Every single one complains about creative judgment.
The technology produced exactly what it was asked to produce. That is the point. The problem was never the tool. The problem was what the tool was asked to make.
Some skepticism is inevitable. Any major shift in how creative work gets produced triggers a "hang on, should we be doing this?" reaction. But novelty alone does not explain the intensity of the response, or its specificity.
These audiences are not rejecting AI in the abstract.
They are rejecting these executions, for reasons they articulate clearly.
This distinction matters because research on consumer perception of AI-generated content reveals something counterintuitive. When people do not know content is AI-generated, they often prefer it. Their objections only surface when they are told.
A 2024 study of 2,000 consumers in the US and UK presented participants with two articles, one written by ChatGPT and one by a human copywriter, without revealing which was which. When asked which they preferred, 56% chose the AI-generated version. But when the same participants were asked how they would feel reading content they suspected was AI-generated, 52% said they would feel less engaged ¹. The gap between what people say they want and what they actually respond to is significant.
Academic research confirms this pattern. A study published in the Journal of Business Research found that when consumers believe marketing content is AI-authored rather than human-authored, they judge it as less authentic and show weaker engagement, even when the content is otherwise identical ². Thirteen controlled experiments published in a 2025 study demonstrated that organizations that disclose AI usage are consistently trusted less than those that do not ³.
The label creates the bias, not the quality of the work itself.
Meanwhile, detection remains unreliable. Research from the Nuremberg Institute for Market Decisions, surveying 3,000 respondents across the US, UK, and Germany, found that only 25% of consumers even believe they can recognize AI-generated content ⁴. A separate study that actually tested detection found participants correctly identified AI copy only about half the time in side-by-side comparisons… essentially a coin flip ⁵.
Most people cannot tell, and they know it.
The implication is clear: the backlash is not a referendum on AI capability. It is a referendum on execution quality and perceived corporate intent.
The Real Line in the Sand
What consumers actually reject is cost-cutting disguised as innovation.
When AI output looks generic, uncanny, inconsistent, or unpolished, it creates a trust problem. People do not mind efficiency. They mind the obvious cheapening of something they care about. The negative reaction intensifies when brands apply AI to emotionally significant moments without the craft to match.
Coca-Cola's "Holidays Are Coming" is not just an advertisement. It is a cultural artifact that many people associate with the beginning of the Christmas season. It has run for 30 years. When the company recreated it with AI, audiences did not see innovation. They saw a beloved tradition handled carelessly. The visual inconsistencies, the uncanny motion, the slightly wrong proportions, all of it read as a company prioritizing cost savings over craft during a moment that was supposed to feel warm and human.
The same dynamic played out with McDonald's. A Christmas campaign, a time of year when brands are expected to create emotional connection, rendered in a style that felt hollow and automated. The message consumers received was not "we are innovating" but "we are cutting corners on the thing you care about."
We Have Been Here Before
In 1839, the French painter Paul Delaroche saw his first daguerreotype, an early photographic image captured on a silver-plated copper sheet. His response has echoed through nearly two centuries of technological anxiety: "From today, painting is dead" ⁶.
Painting did not die. But the fear made complete sense.
Within six years, by 1845, Parisians were buying 2,000 cameras and 3 million photographic plates per year. Portrait painters, who had been the primary means by which middle-class families preserved their likenesses, lost their core income stream almost overnight as those families turned to the cheaper new medium.
The displacement was real… and it was fast.
What happened next is instructive. Photography did not kill painting. It liberated painting to become something photography could not be. Freed from the obligation to reproduce reality accurately, painters moved toward Impressionism, abstraction, and modern art. Photography pushed painting toward what it could do that photography could not: interpretation, emotion, subjective vision.
The pattern repeats. Initial panic. Displacement fears. Quality concerns. Gradual integration. And ultimately, expansion, as the old discipline evolves to do what the new technology cannot.
The current backlash against AI in marketing sits on this same adoption curve. The shock factor reduces as the technology normalizes. The tooling improves. The practitioners get better at using it. And the standard shifts from "AI is unacceptable" to "AI is fine, as long as it is good."
We are watching that shift happen in real time.

Adoption Is Not the Question
While the backlash dominates headlines, adoption accelerates underneath.
McKinsey's 2025 State of AI survey reports that 78% of organizations now use AI in at least one business function, up from 55% the year prior. Regular use of generative AI in business operations climbed to 71%, more than doubling from 33% in 2023 ⁷. Gartner's survey of 418 marketing leaders found that 73% of marketing teams now use generative AI.
This is not a question of whether to use AI. The market already made that decision. The question is what happens next, and to whom.
The displacement is real, and acknowledging it honestly is essential for understanding the stakes. Challenger, Gray & Christmas, the outplacement firm that has tracked layoffs for decades, reports 48,414 US job losses attributed to AI in 2025 alone. Forrester analysis shows agency headcounts down 8% in 2025 ⁸.
The pattern within this displacement matters. Entry-level and execution-focused roles take the hardest hits. Creative direction, strategy, and judgment-intensive positions prove more resilient. An analysis of 180 million job postings found computer graphic artists down 33% year over year, with photographers and writers following similar trajectories. But creative directors, creative managers, and roles involving complex decision-making and client interaction decline at rates much closer to the overall market baseline ⁹.
The gap widens between organizations that use AI well and those that either avoid it entirely or deploy it carelessly. Both paths lead to competitive disadvantage. The opportunity sits in the middle: using AI to amplify quality rather than to replace it.
Stop Apologizing for Using AI
A new trend emerged in response to the backlash. Brands like Heineken, Polaroid, and Cadbury began positioning themselves explicitly as "human-made" ¹⁰. Apple now includes "This show was made by humans" in the closing credits of some productions. Aerie announced it will use neither AI-generated models nor digitally altered bodies in its campaigns. DC Comics declared it will not support AI-generated storytelling or artwork, "not now, not ever."
Individual creators do the same. Disclaimers like "no AI was used in the creation of this content" now appear commonly on social media posts, portfolio sites, and professional communications.
The impulse is understandable. It signals authenticity in a moment when audiences are suspicious. But it also reinforces the idea that using AI is inherently shameful, something to deny rather than acknowledge.
A better standard exists.
Is the thinking real? Is the output high quality? Is it useful, meaningful, distinctive? If someone uses AI to reach a great outcome faster, and it frees them to focus on deeper work, or even more personal time, that is not something to apologize for. It is something to take seriously as a new capability.
The skill that matters is not "avoiding AI." It is using AI well enough that the result still feels intentional, human, and high quality. That requires judgment about when AI adds value and when it subtracts it. It requires taste to recognize when output needs refinement. It requires understanding of context, audience, and brand to know what is right for this moment.
For marketers on the ground floor, copywriters and designers and content creators who watch this transformation unfold, the message is not that your role is disappearing. It is that your role is shifting. From production to direction. From execution to judgment. The skills that matter most become taste, context, brand intuition, knowing what is right for this audience at this moment. Those are harder to develop than prompt engineering. They are also harder to automate.
The professionals who thrive will not be those who refuse to touch AI.
They will be those who use it to do more of the work that actually matters, faster, while maintaining the quality that audiences can feel even when they cannot name it.
What This Means for Organizations
The practical implications are straightforward, even if executing on them requires discipline.
Do not use AI as a shortcut to lower standards.
People forgive new technology. They do not forgive laziness. The backlash cases all share a common element: audiences perceived that the brand chose efficiency over craft in a context where craft was expected. The technology was not the problem. The decision to deploy it without sufficient quality control was the problem.
Keep human direction in the loop. AI accelerates production. Strategy, taste, and judgment still require humans. McKinsey's research on AI high performers found they are much more likely than peers to have defined processes for when model outputs need human validation ¹¹. The organizations seeing the best results use AI to handle volume and speed while humans decide what should exist and whether it meets the bar.
Use AI where it improves the outcome, not where it dilutes it. That is the only calculation that matters. Not "can AI do this?" but "will AI doing this make the result better or worse for the person receiving it?" If the output is worse than what humans would produce, expect backlash. That is the current line. Organizations that treat AI as permission to lower the bar will learn the same lesson Coca-Cola and McDonald's learned, publicly and expensively.
Be transparent when transparency adds value. Brands do not need to hide AI usage. If AI enables speed, personalization, or accessibility that would not otherwise be possible, that is worth communicating. But the research on disclosure is nuanced: it hurts perception in emotional contexts and has little effect in functional ones. The choice should be contextual, not reflexive.
The Gift
Photography did not kill painting. It freed painting to become something photography could not be.
The backlash against AI in marketing draws a line, but it is not the line most people assume. The line is not "do not use AI." The line is "do not use AI badly."
That distinction is the gift, delivered through snarky YouTube comments and social media mockery.
The public is telling you exactly where the standard is. You do not have to guess. Through expensive, high-profile failures, brands with bigger budgets than yours have established the rule: AI is fine as long as it is good.
That clarity is valuable. It would have cost you money to learn it yourself.
The backlash also creates separation. Right now, most organizations fall into one of two camps. Some avoid AI entirely, afraid of the reputational risk, leaving efficiency and capability on the table. Others deploy it carelessly, chasing cost savings, and draw exactly the backlash they should have anticipated.
The middle path, using AI to amplify quality rather than replace it, remains surprisingly uncrowded. The backlash widens that gap, making the reward for getting it right greater and the penalty for getting it wrong more visible.
Public AI backlash means the bar is now higher than ever. Companies that continue to avoid AI, or worse, misuse it, will first draw ire and then find audiences tuning them out.
But the backlash raises the bar in a way that favors the disciplined. Every failed campaign makes audiences more sophisticated. Every "AI slop" comment trains consumers to spot lazy work. That rising standard punishes low-effort implementation, which means organizations willing to invest in craft face less competition from those cutting corners. The race to the bottom slows down.
The window to get this right stays open longer.
The question was never whether to use AI. It is whether you use it to elevate or to cut corners.
The audience can tell the difference in quality and intent, even when they cannot tell how you made it.
Endnotes
1. https://www.bynder.com/en/press-media/ai-vs-human-made-content-study/
2. https://www.sciencedirect.com/science/article/abs/pii/S0148296324004880
3. https://www.sciencedirect.com/science/article/pii/S0749597825000172
4. https://www.nim.org/en/publications/detail/transparency-without-trust
5. https://www.bynder.com/en/press-media/ai-vs-human-made-content-study/
6. https://www.pbs.org/wgbh/americanexperience/features/eastman-important-events-photography/
7. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
9. https://bloomberry.com/blog/i-analyzed-180m-jobs-to-see-what-jobs-ai-is-actually-replacing-today/
11. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai





