Generative AI expands the cyber attack surface faster than defenses can evolve, a stark reality detailed by Wiz Co-founder Yinon Costica. He warns that large language models (LLMs) are amplifying the speed, scale, and creativity of cyber threats. Critical new risks like prompt injection, data leakage, and “vibe coding” – shipping unaudited AI-generated code – are creating vulnerabilities that outpace the response capabilities of most security teams.
This warning comes as Google finalizes its landmark $32 billion acquisition of Wiz, a strategic purchase designed to close critical multi-cloud security gaps in its Google Cloud Platform, according to a Public Comps analysis. This deal highlights a significant market trend toward premium, agentless, cloud-agnostic security solutions that protect data across diverse environments like AWS, Azure, and on-premise servers.
Why attackers move faster in 2025
Attackers are accelerating their operations by using generative AI to automate and scale previously complex tasks. This technology democratizes advanced capabilities, allowing malicious actors to create sophisticated phishing campaigns, clone voices for social engineering, and generate polymorphic malware that evades traditional detection methods with unprecedented speed and efficiency.
The growing gap between attackers and defenders is confirmed by experts from CrowdStrike and the World Economic Forum, who note that generative AI democratizes once-elite cybercrime tools, according to a Cube Research report. Tools like WormGPT produce flawless phishing emails in any language, deepfake audio generators clone executive voices in minutes, and AI-driven malware mutates faster than scanners can update. This forces defenders to grapple with alert fatigue and complex data governance challenges.
Costica highlights three immediate consequences for CISOs:
- Increased Credential Theft: Hyper-personalized phishing attacks significantly boost click-through rates, feeding credentials directly to ransomware operators.
- Evasive Malware: Polymorphic malware that constantly changes its signature bypasses traditional defenses, creating a need for advanced, behavior-driven detection.
- Unvetted Production Code: The “shadow AI” trend sees development teams pushing unaudited, AI-generated code into production, introducing unknown risks.
Guardrails that scale
According to Wiz customer surveys, top security leaders rely on Zero Trust architecture, robust multi-factor authentication (MFA), and continuous security posture management as their most effective defenses. Reinforcing this, new Zero Trust AI trends research indicates that verifying every access request can reduce compromises from phishing by as much as 34%.
To build scalable defenses, Costica’s team advises four key safeguards for both cloud-native and hybrid environments:
- Establish AI-Specific SDLC: Integrate AI-focused security into the development lifecycle. This includes red-teaming LLM prompts, testing for injection vulnerabilities, and mandating human review of AI-generated code.
- Enforce Strict Data Governance: Implement automated policies to strip sensitive data before it is used for model training or inference. Ensure log immutability with legal holds.
- Deploy Behavioral Analytics: Use tools that can detect anomalies in sentiment or audio patterns to identify and block threats like deepfake-based wire fraud.
- Map the Blast Radius: Utilize graph-based risk analysis to understand the potential impact of a breach across identities, containers, and AI pipelines. The Wiz CNAPP blueprint remains cloud-agnostic, simplifying multi-cloud security management post-acquisition.
The acquisition ripple effect
The acquisition is sending shockwaves through the cloud market. Competitors are reacting, with AWS losing a key marketplace partner and Microsoft Azure shifting focus to its native Defender for Cloud suite. Industry analysts project Google Cloud could capture an additional 2-3% of enterprise market share by 2027 if it successfully maintains Wiz’s multi-cloud neutrality. With the Department of Justice approving the merger, Google can now deeply integrate Wiz’s Cloud Security Posture Management (CSPM) engine with its Mandiant threat intelligence and Chronicle security analytics.
For customers, the primary benefit is a unified platform that correlates vulnerabilities, identity risks, and AI supply chain threats in one view. Early adopters are already reporting significant efficiency gains, including reduced onboarding times and a 20% decrease in false positive alerts.
Cybersecurity Threats in the Age of Generative AI: A Deep Dive with Wiz Co-founder Yinon Costica – main takeaways
Costica’s central argument is clear: as generative AI dramatically lowers the cost and complexity of cyberattacks, defenders must respond by automating security context and response actions. While the integrated Google-Wiz platform offers a powerful solution, the ultimate responsibility remains with organizations to curate safe training data, maintain rigorous human oversight, and thoroughly test all AI-powered features before deployment.
What exactly is “vibe coding” and why does it worry CISOs?
“Vibe coding” is the practice of letting an AI write or modify source code based on a casual prompt such as “make this faster” without the developer reading every resulting line. The risk is that subtle back-doors, hard-coded secrets or logic flaws can slip into production because the human never truly reviewed the diff. Yinon Costica warns that this habit is already showing up in breach-post-mortems: attackers only need one hidden SQL-injection or secret key to pivot inside a cloud estate.
How is generative AI shifting the attacker-defender balance?
Costica’s core message is that AI shortens the attacker’s innovation cycle while lengthening the defender’s to-do list. Offensive teams can now spin up polymorphic malware, deep-fake CFO voices, or synthetic GitHub profiles in minutes, whereas enterprise blue teams must still patch, test, log, and re-certify every change. Research shared at Fal.Con 2025 quantifies the gap: attackers need one vulnerability; defenders must close every exposure across 5,000-plus vendor tools, creating an asymmetry that gets worse as agentic AI moves at machine speed.
Which new AI threats moved from “proof-of-concept” to “in the wild” during 2025?
- Deep-fake finance calls – At least two Fortune 500 firms in 2025 sent wire transfers after AI-cloned “CFO” voice calls, the largest loss reaching $25.6 million.
- WormGPT-generated phishing – Dark-web subscriptions sell prompt packs that craft context-aware spear-phish in 40 languages, raising click-through rates from 8 % to 35 %.
- Polymorphic malware that mutates its own hash every run – Russia’s Forest Blizzard group and others now couple reinforcement-learning modules to exploit kits, cutting average dwell time inside a network to under six days.
What guardrails are proving effective against prompt-injection and data-leakage?
Costica’s checklist, echoed by G7 expert guidance released this year, starts with treating every LLM endpoint as an untrusted input source:
- Separate user context from system context through hardened prompt templates.
- Enforce strict “need-to-know” token limits so the model cannot surface training data.
- Add semantic output filters that redact PII, secrets, or internal repo names before the answer reaches the chat window.
- Maintain an AI-SPM (AI Security Posture Management) inventory; Wiz customers, for example, scan Vertex AI pipelines nightly to flag over-privileged service accounts or open storage buckets that could be reached via model-generated code.
How will Google’s $32 billion Wiz acquisition change the security options for multi-cloud users?
When the deal closes (expected 2026), Google Cloud will embed Wiz’s graph-based risk engine inside Chronicle, Mandiant, and Security Command Center, giving customers a single console that scores issues across AWS, Azure, Oracle, and GCP. Early adopters already see 30 % faster mean-time-to-remediate because findings include upstream fix advice inherited from Wiz’s Dazz acquisition. The catch: analysts warn the roadmap may prioritize “GCP-first” features, so organizations that rely heavily on rival clouds should negotiate contract language that guarantees continued agent-less scanning parity for at least three years.
















