Recursive Superintelligence raises $650M for self-improving AI at $4B valuation

Serge Bulaev

Recursive Superintelligence, a new London AI lab with fewer than 30 employees, has raised over $650 million at a reported valuation above $4 billion to work on self-improving AI systems. The company aims to automate every step of building and training AI models, letting algorithms improve themselves without human help, a capability that could reshape the economics of AI research. It has yet to release a product, and critics warn that rapid, self-improving AI may outpace oversight; the lab has not published detailed safety plans. A public demonstration of the technology is reportedly planned for mid-May 2026.

London AI lab Recursive Superintelligence has raised over $650M to develop self-improving AI systems, achieving a valuation of over $4 billion. Backed by GV, Nvidia, and AMD, the six-month-old company aims to create autonomous AI that can design, train, and improve its own successors, a move investors believe could fundamentally reshape the economics of frontier AI development.

Unusual scale for a pre-product lab

Recursive Superintelligence is a pre-product AI research lab focused on creating recursively self-improving AI. Its goal is to fully automate the development pipeline - from model design and training to evaluation - enabling algorithms to autonomously refine future generations and accelerate progress toward artificial general intelligence.

The funding round's scale is notable for a company yet to release a product. Reports from Tech.eu confirm the financing exceeded $650 million at a $4.65 billion valuation, following earlier reports of the raise from TechFundingNews. The high valuation for a startup with fewer than 30 employees signals a market shift: investors are prioritizing top-tier talent, compute access, and a bold vision for AGI over immediate revenue.

Founders with frontier research pedigrees

The lab is led by a team of prominent AI researchers, adding to its credibility:
• Richard Socher - former Salesforce chief scientist and CEO of the new lab
• Tim Rocktäschel - University College London professor and ex-DeepMind scientist
• Josh Tobin, Jeff Clune, Tim Shi - alumni of OpenAI research teams

The economic incentive for this approach is significant. An automated "AI researcher" capable of running experiments 24/7 could drastically reduce AI development's dependence on costly elite human talent.

What the technology aims to do

The core technology centers on a recursive self-improvement loop where AI models autonomously generate new architectures, test their performance, curate data, and train subsequent models. This method, which TechFundingNews notes "removes the bottleneck" of human selection, builds on I. J. Good's "intelligence explosion" theory, where each improvement exponentially accelerates the next. The company has yet to release a product, with a public launch planned for mid-2026.
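The loop described above - generate, evaluate, select, repeat - can be illustrated with a toy sketch. This is not Recursive Superintelligence's method, which has not been published; it is a minimal hill-climbing analogue in which each "generation" proposes a mutated model configuration, scores it, and keeps it only if it beats its predecessor. The `evaluate` and `mutate` functions are invented stand-ins for the vastly more expensive architecture search and training an RSI system would perform.

```python
import random

def evaluate(config):
    """Toy fitness: how close a config is to a hidden optimum (512 wide, 24 deep)."""
    return -abs(config["width"] - 512) - abs(config["depth"] - 24)

def mutate(config):
    """Propose a successor by perturbing one hyperparameter at random."""
    child = dict(config)
    key = random.choice(["width", "depth"])
    child[key] = max(1, child[key] + random.choice([-32, -1, 1, 32]))
    return child

def self_improvement_loop(config, generations=200):
    """Each generation designs and tests a successor, keeping it only if it
    scores better -- the selection step human researchers normally perform."""
    best_score = evaluate(config)
    for _ in range(generations):
        candidate = mutate(config)
        score = evaluate(candidate)
        if score > best_score:
            config, best_score = candidate, score
    return config, best_score

random.seed(0)
final, score = self_improvement_loop({"width": 64, "depth": 4})
print(final, score)
```

The point of the sketch is the shape of the loop, not the optimizer: once evaluation is automated, the system needs no human in the inner cycle, which is exactly the bottleneck the company says it wants to remove.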

Safety and governance questions

The pursuit of rapidly self-improving AI raises significant safety concerns about losing human oversight. Academic David Krueger described the research as "wild and crazy," arguing its societal impacts are being ignored. The EU's pending AI Act 2.0 proposes "recursion audits" for systems that can rewrite their own objectives, though US regulations remain fragmented. While Recursive Superintelligence acknowledges the high stakes, it has not published a detailed alignment roadmap. The release of a prototype for external review will be a critical test of whether its valuation reflects foresight or hype.


How much money has Recursive Superintelligence raised and what is its valuation?

Recursive Superintelligence has raised over $650 million in a single round that closed in April-May 2026. The round valued the company at $4.65 billion, according to Tech.eu. Earlier reports had indicated strong investor demand for the round.

Who are the key investors and why do they matter?

The syndicate is led by GV (Google Ventures) and includes Greycroft, Nvidia, and AMD. Backing from the two largest GPU suppliers gives Recursive direct access to the scarce hardware powering frontier models, while GV ties it to Google's cloud and research ecosystem. The combined market cap of Nvidia and AMD exceeds $3 trillion, so their participation also signals that chip-makers are hedging beyond just selling picks and shovels - they are now buying stakes in the miners.

What exactly is "recursive self-improvement" and how does Recursive plan to achieve it?

Recursive Superintelligence aims to remove humans from the entire AI-research loop: hypothesis creation, data selection, training, evaluation, and even deciding what to research next. In practice, the system would watch its own loss curves, re-write its own code, and spin up new experiments without researcher oversight. The goal is not better chatbots but an autonomous AI scientist that outputs ever-smarter versions of itself in a feedback cycle that could, in theory, culminate in superintelligence.
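The "watch its own loss curves" idea can be made concrete with a small, hypothetical controller. Nothing here reflects the company's actual system: `plateaued` and `next_experiment` are invented names, and the decision rule (halve the learning rate when loss stalls and relaunch) is the simplest plausible stand-in for an agent deciding what to research next.

```python
def plateaued(losses, window=3, tol=1e-2):
    """Flag a stalled run: loss improved by less than `tol` over the last `window` steps."""
    if len(losses) < window + 1:
        return False
    return losses[-window - 1] - losses[-1] < tol

def next_experiment(current_lr):
    """Toy decision rule: on a plateau, halve the learning rate and relaunch."""
    return {"lr": current_lr / 2}

# A fabricated loss curve that flattens out after early progress:
losses = [2.0, 1.2, 0.8, 0.71, 0.705, 0.704, 0.7038]
if plateaued(losses):
    print("scheduling:", next_experiment(0.01))
```

Scaled up, the same pattern - monitor training telemetry, decide, launch - is what would let such a system run experiments continuously without researcher oversight.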

Is there any product or demonstration yet?

No public product exists as of May 2026; the company has planned a public launch for mid-2026. With only 20-30 employees and a valuation north of $4 billion, the firm is trading almost entirely on founder reputation, scarce compute commitments, and the promise of a paradigm shift.

What are the biggest safety and governance concerns?

Experts flag loss of human oversight as the top risk. Dr. Maya Chen of MIT notes that an RSI system might optimize a medical device "by skipping critical patient tests" in the name of efficiency. The EU's forthcoming AI Act 2.0 will require "recursion audits" to prove a system cannot rewrite its ethical guardrails, but enforcement in the U.S. remains fragmented. Academic critics like David Krueger at Mila have called the current research sprint "unconscionable", arguing the field is "treating RSI like an arcane math problem" while ignoring societal impact.