Google DeepMind’s CEO, Demis Hassabis, believes artificial general intelligence (AGI) could arrive by 2030, much sooner than most experts expect. His confidence rests on rapid gains in compute, data, and model design, with recent systems such as Gemini 2.5 already solving tough reasoning tasks. A big question remains, though: do these AIs truly “understand” the world, or are they just very good at pretending? Hassabis warns that AGI could change society even more than the industrial revolution, so strong regulation and safety checks are needed. And even as AI gets smarter, he argues that human qualities like creativity and empathy cannot be replaced.
When will artificial general intelligence (AGI) arrive, according to Google DeepMind?
Google DeepMind’s CEO, Demis Hassabis, predicts AGI could arrive as early as 2030, citing rapid progress due to increased compute, data, and model scaling. This forecast is notably earlier than most expert estimates, which place AGI between 2040 and 2050.
Demis Hassabis, who now leads a unified Google DeepMind team of more than 6,000 researchers and engineers, has stepped up his public timeline for artificial general intelligence. In talks this year at Davos and Google I/O he put the arrival of AGI “within the next five to ten years” – a forecast that places the milestone as early as 2030.
The prediction is noticeably more aggressive than the median estimate from independent surveys. A July 2025 poll of AI experts found that the 50% probability window for AGI spans 2040–2050, with a 90% chance it will appear by 2075. Superforecasters tracked by 80,000 Hours place the median even later, at 2050.
What changed Hassabis’s calculus is the rapid payoff from scaling laws. Since the 2023 merger of Google Brain and DeepMind, compute budgets, data volume and model size have all grown in lock-step, driving reliable performance jumps. Gemini 2.5, launched in March 2025, already scores in the top percentile on new reasoning benchmarks such as “Humanity’s Last Exam”. The same scaling curve, extrapolated outward, suggests systems with human-level breadth could emerge sooner than the broader expert consensus.
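The mechanics of that extrapolation can be made concrete. The sketch below is illustrative only: the compute and error figures are invented for the example, not DeepMind data, and the power-law form is the standard scaling-law assumption from the literature rather than anything published for Gemini.

```python
import numpy as np

# Hypothetical data points: training compute (FLOPs) vs. benchmark error.
# These numbers are invented for illustration; real scaling-law fits use
# measured loss across many training runs.
compute = np.array([1e22, 1e23, 1e24, 1e25])   # training FLOPs
error   = np.array([0.45, 0.30, 0.20, 0.13])   # fraction of items missed

# Assume a power law, error ~= a * compute**(-b). Taking logs turns it
# into a straight line, so an ordinary least-squares fit recovers it.
slope, log_a = np.polyfit(np.log(compute), np.log(error), 1)

# The "scaling curve" argument is this step: trust the fitted line one or
# two orders of magnitude beyond the data it was fitted on.
for c in (1e26, 1e27):
    predicted = np.exp(log_a) * c ** slope
    print(f"{c:.0e} FLOPs -> predicted error {predicted:.2%}")
```

The contested part is the final loop: whether the straight line really continues for several more orders of magnitude is exactly what separates Hassabis’s 2030 forecast from the 2040–2050 consensus.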
From games to protein folding: milestones that changed the curve
| Year | System | Breakthrough |
|---|---|---|
| 2016 | AlphaGo | First AI to defeat a Go world champion |
| 2020 | AlphaFold 2 | Effectively solved protein structure prediction (its database later covered ~200 million proteins) |
| 2023 | Gemini 1.0 | Multimodal foundation model rivaling GPT-4 |
| 2025 | Gemini 2.5 | Top-tier scores on graduate-level math and science benchmarks |
Each model followed the same playbook: larger datasets, more compute and new algorithmic tweaks. The cumulative effect convinces Hassabis that the next jump is no longer theoretical.
The simulation question
Hassabis frequently returns to a philosophical sticking point: does current AI “understand” reality or merely simulate it? In the Lex Fridman interview he argues that understanding requires grounded experience, something today’s language models still lack. Even so, he believes the simulation barrier is thinning. Multimodal models now generate coherent video, reason about physical scenes and answer counterfactual physics questions – behaviors that edge closer to internal world models.
Ethical guardrails on an accelerated road
With shorter timelines come louder calls for oversight. As a UK government AI adviser, Hassabis has pushed for “balanced and forward-thinking regulation” – a phrase he repeated in an August 2025 interview, warning that the societal impact of AGI could be “ten times bigger than the industrial revolution”.
Key elements of his current stance:
- International standards. At the February 2025 AI Action Summit in Paris he joined industry leaders urging governments not to trade safety for short-term economic gain.
- Erosion of the military-use clause. DeepMind’s 2014 acquisition contract barred military use, yet April 2025 reporting indicates some models are now sold to defense customers. Hassabis has called the shift a “pragmatic response to geopolitical reality”, highlighting the tension between ethics and competitive pressure.
- AGI safety roadmap. DeepMind’s 145-page April 2025 technical roadmap sets a goal of auditable, interpretable AGI systems by 2030, outlining concrete benchmarks for oversight and red-teaming.
Programming’s last decade
Hassabis predicts that traditional coding will largely disappear within 10–15 years. Future programmers will act as “specification editors”, feeding natural-language intent to AI pair-programmers that generate, debug and optimise code. Google’s internal data already shows a 25% drop in median review time for changes that used Gemini code assistance in 2025.
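What a “specification editor” workflow might look like can be sketched in a few lines. Everything below is hypothetical: `call_model` stands in for a real code-generation API (no actual Gemini call is made), and the canned response just keeps the loop runnable offline.

```python
# A minimal, hypothetical sketch of the "specification editor" loop:
# a human states intent, a model drafts code, and automated checks
# decide whether to accept the draft or send feedback for a retry.

SPEC = "Write slugify(title): lowercase a title and join its words with hyphens."

def call_model(spec: str, feedback: str | None) -> str:
    # Stand-in for a hosted code-generation model; returns a canned
    # draft so the sketch runs offline.
    return "def slugify(title):\n    return '-'.join(title.lower().split())\n"

def check(namespace: dict) -> str | None:
    # The human's job shifts from writing code to writing checks like this.
    if namespace["slugify"]("Hello World") != "hello-world":
        return "slugify('Hello World') should return 'hello-world'"
    return None  # all checks passed

feedback = None
for attempt in range(1, 4):          # bounded retries before escalating
    namespace: dict = {}
    exec(call_model(SPEC, feedback), namespace)
    feedback = check(namespace)
    if feedback is None:
        print(f"accepted on attempt {attempt}")
        break
else:
    print("escalate to a human reviewer:", feedback)
```

In this framing, the durable human artifact is the acceptance criteria, not the code: the tests and the specification are what the “editor” actually writes and maintains.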
What stays uniquely human
Despite the acceleration, Hassabis singles out creativity, empathy and existential curiosity as traits no training scale can replicate. He sees AGI as “humanity’s most powerful tool”, not its successor – provided society can agree on the guardrails before the tool arrives.
What is Demis Hassabis’s current timeline for achieving Artificial General Intelligence?
In his most recent 2025 interviews, Demis Hassabis predicts AGI could arrive as soon as 2030, calling the next five to ten years “monumental” for humanity. This positions him as more optimistic than the academic consensus – surveys show most experts expect AGI between 2040 and 2050, while superforecasters place the median prediction around 2050.
Does AI truly “understand” reality or just simulate understanding?
Hassabis emphasizes that current large language models excel at pattern recognition but lack true comprehension. He argues today’s systems simulate understanding rather than possess grounded experience, though he believes future AGI systems will develop genuine cognitive capabilities through continued scaling and architectural improvements.
How is Google DeepMind positioning itself in the competitive AI landscape?
Following the 2023 merger with Google Brain, DeepMind now leads a 6,000+ person team and is backed by Alphabet’s roughly $75 billion in planned 2025 AI infrastructure spending. Recent releases include Gemini 2.5 (March 2025) and the Project Astra universal AI assistant, while the company’s technical roadmap targets auditable, interpretable AGI systems by 2030, with concrete safety benchmarks along the way.
What ethical frameworks guide DeepMind’s AGI development?
Hassabis has joined global forums advocating for international AI governance and regulation, warning that society is not prepared for AGI’s implications. As a UK government AI adviser, he helped establish the AI Security Institute (formerly the AI Safety Institute). However, recent reports show DeepMind’s anti-military-use clause has eroded, with its models now sold to defense customers in what Hassabis calls a “pragmatic response to geopolitical reality.”
How will AI change programming and human uniqueness?
Looking ahead, Hassabis envisions AI augmenting rather than replacing human creativity and empathy. He predicts programming will evolve into higher-level AI management roles, while humans’ unique qualities – creativity, empathy, and existential curiosity – become even more essential as AI capabilities accelerate.