Runway CEO Cristóbal Valenzuela is tackling the AI ‘sameness problem’ by focusing on user literacy rather than the technology itself. While many critics blame model architecture for the wave of generic content, Valenzuela argues it stems from low-effort prompts, and that creators must move beyond basic instructions to achieve unique results.
Why the camera metaphor matters
In a recent Big Technology interview, Valenzuela compared Gen-2 to a high-end camera: the tool is powerful, but a masterpiece requires artistic vision and iteration. A single click, he notes, won’t produce cinematic genius. To achieve originality, users must storyboard, gather references, and refine outputs repeatedly. To that end, Runway encourages creators to fill the model’s context window with assets like style guides and reference clips. Internal data supports the approach: projects with at least five references score 42% higher on distinctiveness in viewer tests.
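To make that workflow concrete, here is a minimal sketch of what packing the context window with references might look like in code. Everything in it is hypothetical: the GenerationRequest class, its field names, and the file paths are illustrative placeholders rather than Runway’s actual API or data model.

```python
from dataclasses import dataclass, field


@dataclass
class GenerationRequest:
    """Bundles a prompt with the reference assets that give the model context."""
    prompt: str
    style_guide: str = ""
    reference_clips: list[str] = field(default_factory=list)
    storyboard_notes: list[str] = field(default_factory=list)

    def to_context(self) -> str:
        # Concatenate every asset into one context block so the model sees
        # the creator's intent, not just a one-line prompt.
        parts = [f"PROMPT: {self.prompt}"]
        if self.style_guide:
            parts.append(f"STYLE GUIDE: {self.style_guide}")
        for i, clip in enumerate(self.reference_clips, start=1):
            parts.append(f"REFERENCE CLIP {i}: {clip}")
        for note in self.storyboard_notes:
            parts.append(f"STORYBOARD NOTE: {note}")
        return "\n".join(parts)


# Hypothetical usage with five references, the threshold cited above.
request = GenerationRequest(
    prompt="A rain-soaked neon street at dusk, slow dolly shot",
    style_guide="Muted palette, 35mm grain, handheld framing",
    reference_clips=[
        "refs/neon_street_01.mp4",
        "refs/rain_reflections.mp4",
        "refs/dolly_pacing_test.mp4",
        "refs/color_board.png",
        "refs/lighting_test.mp4",
    ],
    storyboard_notes=["Open on a puddle reflection", "Cut to a medium shot at 00:04"],
)
print(request.to_context())
```

The structure matters more than the names: the request carries a style guide, multiple reference clips, and storyboard notes alongside the prompt, rather than a single one-line instruction.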
Valenzuela’s product roadmap is built on three core principles:
• Full-Stack Control: By building its video models in-house, Runway ensures uninhibited experimentation, a strategy detailed in a Sacra briefing.
• Collaborative Iteration: New workspace features empower multiple editors to branch and merge versions, cutting project turnaround times by up to 35%.
• AI Literacy: Runway’s learning hub offers dedicated prompt design courses for both studio teams and solo creators to close the skill gap.
Case studies that break the mold
Success stories validate this iterative approach. BuzzFeed boosted conversion rates by 45% after training models on its unique voice and having writers refine the output. The New York Times saw a 17% rise in headline clicks using a similar AI-human loop. In both cases, iteration beat the single prompt.
The principle holds true beyond media. Airbnb improved bookings by 3.75% by using AI to refine guest communications, while Microsoft’s MedScribe achieved a 99% approval rate on medical letters through successive AI-human revisions. This data shows that rich context and repeated passes enhance both originality and accuracy.
The cultural stakes
Analysts warn that “creativity echo chambers” emerge when creators rely on the same templates. This trend has tangible consequences: YouTube reports a 23% retention drop for channels using one-shot AI thumbnails. Conversely, brands that resist sameness see organic reach 340% higher than that of template-driven competitors.
Valenzuela believes the solution is a practical workflow: expand the context window, integrate human judgment, and iterate until the output feels truly authored. As generative tools become ubiquitous in production, this method may determine which creators thrive and which fade into mediocrity.
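As a purely illustrative sketch of that workflow, the loop below shows the context expanding with each pass and a human reviewer deciding when the output feels authored. The generate and review callables are stand-ins for whatever model call and review step a team actually uses, not any real API.

```python
from typing import Callable


def iterate_until_authored(
    generate: Callable[[str, list[str]], str],  # stand-in for a generative model call
    review: Callable[[str], tuple[bool, str]],  # human judgment: (approved?, notes)
    base_context: str,
    max_passes: int = 5,
) -> str:
    """Expand the context, apply human judgment, and iterate until approved."""
    feedback: list[str] = []
    draft = ""
    for _ in range(max_passes):
        # Each pass sees the original context plus every prior critique.
        draft = generate(base_context, feedback)
        approved, notes = review(draft)  # a person, not a metric, makes the call
        if approved:
            return draft
        feedback.append(notes)  # fold the critique into the next pass
    return draft  # best effort if nothing was approved within the budget
```

The design choice worth noting is that approval comes from human judgment rather than an automated score, which is the distinction Valenzuela draws between one-shot prompting and authored work.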
















