Databricks CEO Warns of AI Bubble, "Vibe Coding" Risks

Serge Bulaev

Databricks CEO Ali Ghodsi warns of a big bubble in AI, with many startups heavily hyped but not making money. He criticizes "vibe coding," in which programmers trust AI to write code from vague prompts, saying it leads to weak and unreliable systems. Ghodsi urges companies to focus on real results, careful reviews, and smart spending instead of chasing trends. He believes only teams that are disciplined and care about their customers will truly succeed in the AI race. The message is clear: don't be fooled by hype; build things that actually work and matter.

Databricks CEO Ali Ghodsi is sounding the alarm on a potential AI bubble, criticizing the "insane" valuations of pre-revenue startups and the risks of hype-driven "vibe coding." His warning lands as investors question whether market fundamentals can support current prices, highlighting a growing tension between massive capital investment in generative AI and a widespread failure to deliver profitable results. This concern is echoed by analysts, with reports indicating that 95% of enterprise generative AI pilots fail to reach production, even as AI infrastructure spending is projected to surpass $570 billion by 2026.

Why Ghodsi Pushes Back on Hype

Ghodsi compares the current investment frenzy to the dot-com bubble, substituting today's GPUs for yesterday's fiber optics. He argues for substance over speculation, pointing to his own company's performance as a benchmark. Despite raising over $4 billion at a $134 billion valuation, Databricks backs up its numbers with a $4.8 billion revenue run rate and positive cash flow.

Ghodsi's core argument is a call for discipline in the face of widespread hype. He contends that while AI investment soars, many startups lack viable revenue models. The CEO advocates for a return to fundamentals, where tangible results, rigorous engineering, and measurable customer value outweigh speculative trends.

"Vibe Coding" Draws Fire from Executives

Ghodsi reserves his strongest criticism for "vibe coding," a practice where developers use vague, conversational prompts to generate code with large language models and accept the output with minimal review. While proponents champion its speed, Ghodsi aligns with critics who see it as a dangerous shortcut. The practice has been likened to letting an "autonomous intern" code via trial and error, and Ghodsi argues it produces fragile, unreliable systems. Instead, he urges teams to integrate LLM assistance with disciplined practices like rigorous code reviews, automated testing, and robust data governance.

Market Signals: Cycle or Cliff?

While parallels to the dot-com era exist, analysts note a key difference: today's AI leaders are profitable. Nvidia, for example, trades at a 24-26x forward P/E ratio, a fraction of Cisco's 472x peak in 2000. However, significant concentration risk remains, with six hyperscalers accounting for 85% of Nvidia's revenue. A slowdown in their 2026 capital expenditures could destabilize GPU demand and threaten countless AI startup models. Ghodsi's warning coincides with growing skepticism, as figures like Michael Burry draw parallels between current "circular" AI revenue and past accounting tricks, and hedge funds begin shorting high-multiple AI stocks. Bulls, however, maintain that AI spending is a fundamental infrastructure play, not a speculative bubble.

Actionable Steps for Disciplined AI Teams

To navigate the hype, Ghodsi suggests teams adopt a disciplined approach focused on tangible value:

  • Prioritize KPIs: Track real-world key performance indicators before scaling any prototype.
  • Reinforce Quality: Augment LLM tooling with traditional code reviews and comprehensive security audits.
  • Model for Profitability: Rigorously model infrastructure costs against projected revenue at every stage.
  • Monitor Market Guidance: Closely watch the quarterly capex guidance from hyperscalers.
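The first two steps above can be sketched as a simple merge gate: before any change, human- or LLM-authored, is allowed to ship, it must carry tests, a human sign-off, and a stated KPI. This is a minimal illustration, not Databricks tooling; the `ChangeSet` fields and `merge_gate` checks are hypothetical names chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeSet:
    """A proposed change, whether human- or LLM-authored (hypothetical model)."""
    files: list = field(default_factory=list)
    has_tests: bool = False
    reviewed_by: list = field(default_factory=list)
    kpi: str = ""  # e.g. "checkout latency p95" — the business metric targeted

def merge_gate(change: ChangeSet) -> list:
    """Return a list of blocking issues; an empty list means the change may merge.

    Encodes the discipline Ghodsi urges: LLM output is a first draft that
    must clear the same bar as any other code.
    """
    issues = []
    if not change.has_tests:
        issues.append("no automated tests accompany the change")
    if not change.reviewed_by:
        issues.append("no human reviewer signed off")
    if not change.kpi:
        issues.append("no KPI states what value the change targets")
    return issues

# A raw "vibe coded" change fails all three checks:
raw = ChangeSet(files=["app.py"])
assert len(merge_gate(raw)) == 3

# The same change, disciplined, passes:
ready = ChangeSet(files=["app.py"], has_tests=True,
                  reviewed_by=["alice"], kpi="support-ticket deflection rate")
assert merge_gate(ready) == []
```

In practice a gate like this would run as a CI step, but the point stands regardless of tooling: the check is mechanical, so "the vibes feel right" never becomes a merge criterion.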

Ghodsi believes the winners will pair technical rigor with an obsession for customer impact, resisting the temptation to ship features because "the vibes feel right."


What exactly is "vibe coding" and why does the Databricks CEO call it risky?

"Vibe coding" lets developers describe an app in plain English, then paste in whatever the LLM spits out without reading it line by line. Ali Ghodsi argues this "gut-feel" style produces brittle, insecure code that collapses once real users or auditors show up. In short, it trades long-term quality for a quick dopamine hit.

How frothy is the AI market compared with the dot-com bubble?

Today's leaders are profitable: NVIDIA trades at 24-26× forward earnings while the Magnificent Seven average 28×, and Big Tech is funding $400 B of 2025 capex with $200 B+ of free cash flow. By contrast, Cisco peaked at 472× earnings in 2000, and most dot-com darlings had no earnings at all. Still, 95% of enterprise gen-AI pilots fail to move the EBIT needle, so the gap between spend and payoff is widening.

What financial proof does Databricks offer that "high-utility AI" can pay off?

The company just closed a $4 B+ Series L at a $134 B valuation (≈28× its 2025 revenue run rate) after hitting $4.8 B in annualized revenue, 55% YoY growth, and $1 B in annualized revenue from AI products alone, all while keeping gross margins near 80% and staying free-cash-flow positive. The numbers underpin Ghodsi's mantra: use cases must show dollar impact, not demo sparkle.

Which market signals should watchers monitor for a 2026 correction?

  • Hyperscaler capex guidance - Microsoft, Amazon, Google, and Meta supply ~60% of NVIDIA's revenue; any pause would ripple fast
  • Enterprise EBIT disclosures - if the 5% pilot-success rate doesn't climb, board-level patience may snap
  • Cash-flow trends - Big Tech's ability to self-fund $400 B of annual AI infrastructure without new debt is critical; watch for compression in free-cash-flow margins

How can teams keep AI projects from becoming "zero-revenue" experiments?

  1. Tie every prototype to a revenue or cost-savings KPI before the first sprint
  2. Budget for human review cycles - treat AI output as a first draft, not a release candidate
  3. Instrument observability hooks early so you can prove (or disprove) value with real user data
  4. Re-use governed data layers (e.g., Databricks' Lakebase or Delta Lake) instead of one-off vector dumps that can't pass audit
  5. Kill fast: if the pilot doesn't show EBIT lift within two quarters, redeploy talent to the next ranked use case
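Steps 1 and 5 above can be combined into a minimal sketch of a pilot tracker: every prototype is chartered against one KPI, and the tracker flags it for shutdown if two quarters pass with no lift. The `Pilot` class and its field names are hypothetical, chosen only to illustrate the discipline.

```python
from dataclasses import dataclass, field

@dataclass
class Pilot:
    """Tracks one AI pilot against the KPI it was chartered to move (illustrative)."""
    name: str
    kpi: str                # e.g. "cost per invoice processed"
    baseline: float         # KPI value before the pilot (lower is better here)
    quarterly_readings: list = field(default_factory=list)

    def record_quarter(self, value: float) -> None:
        """Step 3: log a real measured KPI value each quarter."""
        self.quarterly_readings.append(value)

    def lift(self) -> float:
        """Improvement of the latest reading over baseline (cost KPI: lower is better)."""
        if not self.quarterly_readings:
            return 0.0
        return self.baseline - self.quarterly_readings[-1]

    def should_kill(self) -> bool:
        """Step 5: kill fast if two quarters pass with no measurable lift."""
        return len(self.quarterly_readings) >= 2 and self.lift() <= 0

pilot = Pilot(name="invoice-copilot", kpi="cost per invoice", baseline=4.20)
pilot.record_quarter(4.25)   # Q1: slightly worse than baseline
pilot.record_quarter(4.30)   # Q2: still no lift
assert pilot.should_kill()   # redeploy talent to the next ranked use case
```

The design choice worth noting is that the kill decision is computed from recorded readings, not argued in a meeting: if the observability hooks from step 3 feed real data in, step 5 follows automatically.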
Written by

Serge Bulaev

Founder & CEO of Creative Content Crafts and creator of Co.Actor — an AI tool that helps employees grow their personal brand and their companies too.