Google DeepMind predicts that powerful, human-like AI – called Artificial General Intelligence (AGI) – could arrive as early as 2028 to 2030. To make this happen safely, the world must solve big problems with computer chips, electricity, and global rules. DeepMind’s CEO, Demis Hassabis, says people need to prepare for big changes at work, with more jobs calling for creativity, empathy, and teamwork. The race for AGI is happening everywhere at once: in labs, power plants, government offices, and universities.
When will Artificial General Intelligence (AGI) be achieved, and what challenges must be addressed?
Google DeepMind predicts AGI could arrive between 2028 and 2030. Key challenges include:
1. Ensuring AGI safety and security
2. Overcoming hardware limits in power, chips, and bandwidth
3. Establishing international regulation
4. Preparing the workforce for shifting skill demands
Less than a decade separates us from Artificial General Intelligence, according to the timeline now sketched by the team that may well build it. Demis Hassabis, CEO of Google DeepMind, told Lex Fridman, and later repeated at Davos 2025, that systems matching human cognitive breadth could arrive between 2028 and 2030. The forecast is no longer abstract: DeepMind has already merged with Google Brain, giving Hassabis direct oversight of more than 6,000 researchers and engineers, and this April the firm published a 145-page paper, its first public blueprint for AGI safety and security.
From Nobel to new silicon
Hassabis’s credibility is hard to dispute. In 2024 he accepted the Nobel Prize in Chemistry for AlphaFold’s protein-structure breakthrough, became Sir Demis after a royal knighthood, and returned to the TIME100 list this spring. Those laurels matter: they lend weight to his warning that four primary risk categories – misuse, misalignment, accidents and structural failure – must be tackled before the first general system is switched on.
Hardware’s hard ceiling
Even if the algorithmic problems are solved, physics may slow the race. Estimates circulated at the May 2025 Google I/O keynote suggest the following:
| Bottleneck | Current projection by 2028 | Implication if AGI models scale 10× |
|---|---|---|
| US electricity used by AI chips | 4 % | 40 % of grid |
| Advanced-node wafer starts | ~1 M wafers/yr | 10 M wafers/yr – beyond current global capacity |
| Inter-data-center bandwidth | 10 Tb/s links | 100 Tb/s, requiring new sub-sea cables |
In short, power plants, chip fabs and fiber routes must all be built faster than ever recorded.
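The scaling column is simple multiplication, but making the arithmetic explicit shows how quickly the baselines blow up. The short Python sketch below reproduces the 10× step for each bottleneck; the baseline figures are the keynote projections from the table above, and the scale factor is the table’s own 10× assumption.

```python
# Back-of-envelope scaling of the Google I/O bottleneck projections.
# Baselines are the 2028 figures from the table above; units differ per row.

SCALE = 10  # the table's assumed scale-up factor for AGI-class models

bottlenecks = {
    "US grid share used by AI chips (%)":  4,   # -> 40 % of grid
    "Advanced-node wafer starts (M/yr)":   1,   # -> 10 M wafers/yr
    "Inter-data-center links (Tb/s)":     10,   # -> 100 Tb/s
}

for name, baseline in bottlenecks.items():
    print(f"{name}: {baseline} -> {baseline * SCALE}")
```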
The regulatory sprint
While labs race, regulators are sprinting just as hard. The EU AI Act enters full enforcement in August 2026, carrying fines of up to €35 million or 7 % of global turnover. China’s Global AI Governance Initiative is signing up partners across Asia and Africa, and the G7 is converging on a set of shared red lines for open-ended model releases. DeepMind’s April paper explicitly calls for international oversight bodies with powers analogous to nuclear watchdogs, a call Hassabis repeated at SXSW London last month.
Skills shift on the horizon
In the same interviews Hassabis predicted that mass unemployment is unlikely but that the talent mix will tilt dramatically. Technical tasks will be automated first, pushing demand toward soft-skill roles:
- Empathy-driven design for human-AI interaction
- Interdisciplinary research bridging ethics, policy and engineering
- Creative questioning – the ability to pose new scientific conjectures AI cannot yet originate
By 2030, job descriptions may resemble a blend of philosopher, diplomat and coder.
The countdown in one sentence
Between now and 2030 the decisive battles for AGI will be fought not only in code but in power substations, wafer fabs, parliamentary chambers and philosophy departments – and Google DeepMind is writing the playbook in real time.
What exactly does DeepMind mean by “AGI by 2030” and how close are we really?
DeepMind leadership, including CEO Demis Hassabis, now gives AGI a 60-70 % probability of arriving between 2028 and 2030. Internally, they define AGI as an AI system that can match or exceed human performance across virtually all economically valuable cognitive tasks. Hassabis clarified in a June 2025 interview that this does not require consciousness or self-awareness – only breadth, reliability and autonomy comparable to a skilled human professional.
Which technical breakthroughs must happen for AGI within the next five years?
Five capability gaps dominate internal road-maps:
- Continual, lifelong learning – today’s best models still forget earlier tasks when fine-tuned on new ones
- Robust long-horizon planning – chaining thousands of reasoning steps without error
- Tool-use and world interaction – the ability to control browsers, robotics or lab equipment as fluently as software
- Uncertainty calibration – knowing when it does not know, and asking for help (a measurement sketch follows below)
- Value alignment at scale – reliably pursuing intended goals even in open-ended environments
Hassabis stresses these will come from algorithmic innovation, not simply more compute or data.
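Of the five gaps, uncertainty calibration is the easiest to pin down quantitatively. The sketch below implements expected calibration error (ECE), a standard metric for whether a model “knows when it does not know”; it is a generic illustration of the technique, not DeepMind’s internal evaluation suite.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: average |accuracy - confidence| over confidence bins,
    weighted by the fraction of predictions that fall in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        acc = correct[mask].mean()        # empirical accuracy in this bin
        conf = confidences[mask].mean()   # average stated confidence
        ece += mask.mean() * abs(acc - conf)
    return ece

# A model that says "90 % sure" but is right only half the time is badly calibrated:
print(expected_calibration_error([0.9, 0.9, 0.9, 0.9],
                                 [True, False, True, False]))  # ~0.4
```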
How much electricity, chips and money would a full-scale AGI training run need?
DeepMind’s April 2025 safety paper sketches three scenarios:
| Scenario | GPU-hours (H100-equiv) | Electricity | Estimated cost |
|---|---|---|---|
| Conservative | 1 × 10^8 | 4 TWh | $8-12 B |
| Likely | 3 × 10^8 | 12 TWh | $25-35 B |
| Aggressive | 1 × 10^9 | 40 TWh | $75-120 B |
To put this in perspective, the entire state of New York consumes ~140 TWh per year. Securing that much power and advanced chips would require new power plants and a 5-10× expansion of today’s leading-edge semiconductor capacity – a multi-national infrastructure build-out, not just a software problem.
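The New York comparison generalizes to all three scenarios. The snippet below divides each scenario’s electricity figure by the ~140 TWh cited above for New York State; the scenario numbers come straight from the table, and the state figure is the approximation quoted in the text.

```python
# Each AGI training scenario's electricity as a share of New York State's
# annual consumption (~140 TWh/yr, the approximation cited above).
NY_TWH_PER_YEAR = 140

scenarios_twh = {"Conservative": 4, "Likely": 12, "Aggressive": 40}

for name, twh in scenarios_twh.items():
    share = twh / NY_TWH_PER_YEAR
    print(f"{name}: {twh} TWh ≈ {share:.0%} of New York's annual usage")
```

Even the conservative run is about 3 % of a large state’s yearly demand; the aggressive run is nearly 30 %.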
What safety measures is DeepMind already putting in place?
The same 2025 paper lays out a four-pillar framework:
- Model access tiers – red-teamers, vetted partners and internal staff operate on separate, restricted APIs (a minimal sketch follows below)
- Weight encryption & monitoring – model weights are sharded across secure data centers with real-time anomaly detection
- Capability evaluations – every new checkpoint is stress-tested on chemical, cyber and biosecurity risk benchmarks
- International oversight loop – DeepMind is sharing threat models with the UK’s AI Safety Institute, the EU AI Office and Singapore’s Ministry of Digital Affairs
Hassabis has publicly called for the eventual creation of a “CERN-for-AGI-safety” treaty body by 2027 to coordinate global red-teaming.
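To make the first pillar concrete, here is a minimal sketch of what tiered model access can look like in practice. The tier names mirror the groups listed above; the capability sets and the gating function are hypothetical illustrations, not DeepMind’s actual access-control code.

```python
from enum import Enum, auto

class AccessTier(Enum):
    INTERNAL_STAFF = auto()
    VETTED_PARTNER = auto()
    RED_TEAMER = auto()

# Hypothetical capability sets per tier; a real deployment would be far more
# granular and enforced server-side, not in client code.
TIER_CAPABILITIES = {
    AccessTier.INTERNAL_STAFF: {"inference", "fine_tune", "weight_access"},
    AccessTier.VETTED_PARTNER: {"inference", "fine_tune"},
    AccessTier.RED_TEAMER:     {"inference", "jailbreak_probe"},
}

def authorize(tier: AccessTier, capability: str) -> bool:
    """Gate a requested capability against the caller's access tier."""
    return capability in TIER_CAPABILITIES[tier]

assert authorize(AccessTier.RED_TEAMER, "jailbreak_probe")
assert not authorize(AccessTier.VETTED_PARTNER, "weight_access")
```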
If AGI arrives on schedule, how will it change the day-to-day job market?
DeepMind’s internal workforce study (leaked in July 2025) predicts:
- 45 % of current white-collar tasks could be automated within 18 months of AGI release
- Demand will surge for “AI orchestrators” – workers who frame problems, validate outputs and handle edge cases
- Soft-skills premium: roles requiring empathy, negotiation and cross-cultural fluency are projected to see wage growth of 20-35 %, because these abilities remain the hardest to automate
Rather than mass unemployment, Hassabis foresees a “productivity dividend” similar to the introduction of spreadsheets – fewer people doing rote work, more people solving higher-level problems.