Advertisers Still Need Humans to Steer AI Budgets, Ethics
Serge Bulaev
AI is changing advertising fast, making many routine tasks easier and quicker. But people are still needed to control budgets, make sure things are fair, and keep ads safe and honest. While AI can recommend products or help with creative ideas, humans make the final calls and handle the tricky problems. The best results come when people and AI work together, with humans watching over the important parts. This mix helps ads run smoothly, stay trustworthy, and avoid mistakes.

While AI transforms advertising, advertisers still need humans to steer AI budgets, ethics, and data strategy for safe, effective campaigns. Despite soaring investment in AI tools, boardrooms are demanding clarity on its limitations. This article explores the real-world boundaries of AI in advertising, the progress made so far, and why gaps in human governance, not the algorithms themselves, are the critical factor holding back full automation.
Five Core Myths of AI in Advertising
AI excels at automating repetitive tasks like campaign builds and drafting copy, but it lacks the contextual understanding needed for strategic oversight. Humans remain essential for managing budgets, ensuring brand safety, validating data quality, and making the final ethical judgments that protect brand reputation and keep advertising accountable.
Recent analysis from Digiday highlighted several pervasive myths at the crossroads of technology, economics, and trust:
- AI replaces media buyers
- Technology alone fixes ad tech waste
- Better models solve everything
- Autonomy should come first
- Governance is a minor detail
In reality, automation may cut a two-hour campaign build to ten minutes, but human buyers are shifting their focus to strategy and brand safety. Challenges like hidden labor costs and inaccurate attribution persist because AI simply magnifies the quality of the data it receives. And while more advanced models are available, their rollout is often stalled by delays in updating contracts and liability rules.
Why Governance, Not Algorithms, Is the Real Bottleneck
A cross-industry task force is developing standards for AI, covering disclosure, audits, and mandatory human sign-offs (see "The ad industry's plan to define what counts as AI"). However, progress is slow as stakeholders grapple with renegotiating contracts, liability clauses, and performance metrics. This governance gap is the primary obstacle; a 2025 McKinsey survey found that while marketing sees high ROI from AI, 63% of leaders identify a "lack of clear ownership" as the top blocker to full-scale adoption, explaining why no fully autonomous ad trading agent exists outside of a lab.
Proven Use Cases: Where AI Delivers Value Today
Successful AI applications in advertising share common traits: they leverage rich first-party data, operate in low-risk environments, and produce measurable results. Prime examples include e-commerce product recommendations and dynamic creative optimization (DCO). For instance, Amazon's algorithms drive double-digit sales lifts, and similar models reduce churn for services like Netflix, as noted in Product School's report "15 AI Business Use Cases in 2026". Publishers also rely on AI for real-time invalid traffic (IVT) filtering, while brands use it to test creative concepts in synthetic environments, cutting research costs without compromising user privacy.
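To make the recommendation use case concrete, here is a toy item-to-item co-occurrence recommender in Python. It is a minimal sketch of the general technique, not Amazon's or Netflix's actual system; the baskets and item names are invented for illustration.

```python
# Toy item-to-item co-occurrence recommender.
# All data and names below are hypothetical.
from collections import Counter
from itertools import combinations

# Hypothetical purchase baskets (order histories).
baskets = [
    {"laptop", "mouse", "usb_hub"},
    {"laptop", "mouse"},
    {"mouse", "usb_hub"},
    {"laptop", "monitor"},
]

# Count how often each pair of items appears in the same basket.
co_counts: dict[str, Counter] = {}
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts.setdefault(a, Counter())[b] += 1
        co_counts.setdefault(b, Counter())[a] += 1

def recommend(item: str, k: int = 2) -> list[str]:
    """Return the items most often bought alongside `item`."""
    return [other for other, _ in co_counts.get(item, Counter()).most_common(k)]

print(recommend("laptop"))  # ['mouse', 'usb_hub']
```

Production systems replace these raw counts with similarity scores computed over millions of baskets, but the point the article makes still holds at any scale: the recommendations are only as good as the purchase data feeding them.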
The Human-in-the-Loop Model: A Blueprint for Success
The most effective AI implementations follow a "human-in-the-loop" model, embedding human oversight at critical stages:
- Data teams scrub measurement pipelines before any model training.
- Legal teams insert AI clauses that specify audit rights and liability caps.
- Media buyers supervise automated bidding against brand-safety checklists.
- Creative leads approve final assets that GenAI drafts.
This framework enables brands to adopt AI incrementally, managing expectations and mitigating risks. The outcome is a powerful hybrid approach where AI automates routine tasks and humans provide essential strategic context, ensuring efficiency without sacrificing accountability.
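As a concrete illustration of the third checkpoint, the Python sketch below shows an approval gate that auto-clears routine bids and escalates exceptions to a human buyer. The names, thresholds, and blocklist are hypothetical, standing in for whatever brand-safety checklist a team actually maintains.

```python
# Minimal human-in-the-loop bid gate; all names and limits are illustrative.
from dataclasses import dataclass

@dataclass
class BidProposal:
    campaign_id: str
    placement_domain: str
    bid_usd: float

# Assumed brand-safety blocklist and auto-approval budget cap.
BLOCKED_DOMAINS = {"example-unsafe-news.com"}
MAX_AUTO_BID_USD = 5.00

def route_bid(proposal: BidProposal) -> str:
    """Auto-approve routine bids; escalate exceptions to a human buyer."""
    if proposal.placement_domain in BLOCKED_DOMAINS:
        return "escalate: brand-safety flag"
    if proposal.bid_usd > MAX_AUTO_BID_USD:
        return "escalate: exceeds auto-bid cap"
    return "auto-approve"

if __name__ == "__main__":
    routine = BidProposal("cmp-001", "trusted-publisher.com", 1.20)
    risky = BidProposal("cmp-001", "example-unsafe-news.com", 0.90)
    print(route_bid(routine))  # auto-approve
    print(route_bid(risky))    # escalate: brand-safety flag
```

The design choice worth noting: only bids that pass every check skip escalation, so "human-in-the-loop" here means buyers review the exceptions rather than rubber-stamping every bid.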
Will AI replace media buyers in 2025?
No.
AI compresses a two-hour campaign build into ten error-free minutes, but the human moves from keyboard-clicker to supervisor who signs off on every bid and boundary. Across 2025's live deployments, zero autonomous trading agents are scaling without a buyer in the loop; instead, teams use AI for grunt work like normalization and reporting while people retain final budget and brand-safety decisions.
Where is the industry drawing a hard "no-AI" line?
High-stakes areas - budget allocation, creative approval, ethical reviews, and fraud validation - remain human-only. Even the most bullish holding groups quietly keep causal measurement, clean-data audits, and liability contracts off-limits to models, citing governance gaps for which no vendor has yet offered indemnification.
Does AI clean up ad-tech's measurement mess?
It magnifies it. When broken attribution or polluted supply paths feed an LLM, the output is "garbage in, scaled doctrine out," as one CES exec warned. Tests in 2025 show models repeating hidden fees and viewability errors at speed; scrubbing data first is now a precondition, not a bonus step, for any brand testing agentic buying.
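What "scrubbing data first" can look like in practice: a minimal pre-training filter that rejects measurement rows an agent would otherwise learn from and amplify. The field names and thresholds below are assumptions for illustration, not any vendor's schema.

```python
# Illustrative pre-training scrub of measurement rows.
# Field names and limits are hypothetical assumptions.
REQUIRED_FIELDS = {"impression_id", "spend_usd", "viewability_pct"}

def is_clean(row: dict) -> bool:
    """Reject rows a model would otherwise amplify: missing fields,
    negative spend, or impossible viewability values."""
    if not REQUIRED_FIELDS <= row.keys():
        return False
    if row["spend_usd"] < 0:
        return False
    return 0.0 <= row["viewability_pct"] <= 100.0

raw_rows = [
    {"impression_id": "a1", "spend_usd": 0.004, "viewability_pct": 71.0},
    {"impression_id": "a2", "spend_usd": -0.01, "viewability_pct": 55.0},   # hidden-fee artifact
    {"impression_id": "a3", "spend_usd": 0.002, "viewability_pct": 180.0},  # broken measurement
]
clean_rows = [r for r in raw_rows if is_clean(r)]
print(f"{len(clean_rows)} of {len(raw_rows)} rows pass the scrub")  # 1 of 3
```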
What is the safest on-ramp for AI in 2025?
Start with ideation and repeatable production (copy variants, background images, bid sheets) while keeping humans on the strategy, sign-off and exception-handling rungs. Pilot programs that added a 64% production-speed bump kept head-count flat and error rates down precisely because senior staff still eyeball every live asset.
How big is the trust gap between hype and reality?
Agentic AI spend is forecast to rocket from $7.55 billion in 2025 to $199 billion by 2034, yet two-thirds of early adopters told McKinsey they extract value only when humans interpret results. The takeaway for 2025 budgets: pilot fast, but fund oversight, audit trails and training before you fund models.