Google launches Search Live globally, powered by Gemini 3.1 Flash Live

Serge Bulaev

Google is releasing Search Live worldwide on March 26, 2026, letting people talk to Google Search and point their phone cameras at the world to get instant answers. Powered by Gemini 3.1 Flash Live, the feature works in over 200 countries and more than 90 languages, helping users troubleshoot problems, identify objects, and translate text just by speaking or showing it to the camera. Search Live delivers quick spoken replies, even in noisy places, and displays helpful links on screen, letting people get things done hands-free, without typing, right in the Google app.

Google is launching Search Live globally, a major update powered by Gemini 3.1 Flash Live that introduces real-time, conversational AI search. Starting March 26, 2026, users in over 200 countries can talk to Google Search while using their phone's camera to get instant, hands-free answers about the world around them.

This expansion brings interactive troubleshooting, object recognition, and live translation directly into the Google app for Android and iOS. To begin, users tap the new "Live" icon, grant permissions, and receive spoken replies with supplemental on-screen links.

What Is Google Search Live and How Does It Work?

Google Search Live is a new feature that lets you interact with Search using your voice and camera simultaneously. Powered by the Gemini 3.1 Flash Live model, you can point your phone at an object, ask a question, and receive an immediate spoken answer with relevant web links.

The underlying model, Gemini 3.1 Flash Live, processes text, voice, and video with exceptionally low latency. It supports a 128,000-token context window, enabling it to follow extended conversations. The visual search component is key: a user can point their camera at a product and ask, "Is this gluten-free?" to get an instant audio response supplemented by on-screen web pages. The model is also designed to filter out background noise, so it can understand follow-up questions even in busy environments.
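Concretely, a camera-plus-question turn like "Is this gluten-free?" maps onto the multimodal request shape the public Gemini API already uses: a user turn whose parts mix text and inline image data. A minimal Python sketch of that shape (the JPEG bytes are a placeholder for a real camera frame):

```python
import base64

# Placeholder bytes standing in for a JPEG frame captured from the camera.
frame_jpeg = b"\xff\xd8\xff\xe0fake-camera-frame"

# One user turn combining the spoken question with the camera frame,
# following the Gemini API's "parts" convention for multimodal input.
turn = {
    "role": "user",
    "parts": [
        {"text": "Is this gluten-free?"},
        {
            "inline_data": {
                "mime_type": "image/jpeg",
                "data": base64.b64encode(frame_jpeg).decode("ascii"),
            }
        },
    ],
}
```

In the live, streaming case the app would send frames and audio continuously rather than one turn at a time, but the underlying payload idea, text and image parts in a single user turn, is the same.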

Global Reach and Market Impact

A key feature highlighted in Google's official announcement is the model's ability to understand over 90 languages without manual switching. This broad linguistic support allows Search Live to launch immediately in regions with previously limited voice assistant capabilities, like parts of Southeast Asia and Sub-Saharan Africa. Industry analysts, including those at Intellectia.ai, predict this frictionless experience will drive higher user engagement and longer session times, potentially boosting ad click-through rates.

Core Features of Search Live

  • Live Multimodal Input: Utilizes camera and microphone for real-time object recognition and problem-solving.
  • Instant Voice Answers: Provides spoken responses grounded in Google Search results.
  • Extended Conversation Memory: Tracks context for twice as long as its predecessor, Gemini 2.5 Flash Native Audio.
  • Adaptive Interaction: Automatically adjusts its tone and response length based on detected user sentiment.
  • Developer Access: A preview is available via the gemini-3.1-flash-live API.

Developer Integration via Google AI Studio

Developers can integrate the same multimodal search capabilities into third-party applications using the Gemini 3.1 Flash Live preview in Google AI Studio. The model supports function calling, which lets it trigger external APIs or query databases in response to user requests such as "Order a replacement part for this device." To maintain low latency and user privacy, the live API does not support batch processing or remote code execution.
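As a sketch of what such an integration might look like: the example below declares a hypothetical `order_replacement_part` function in the JSON-schema style that Gemini function calling uses, and attaches it to a live-session config. The function name, its parameters, and the exact `LIVE_CONFIG` shape are illustrative assumptions, not the shipped API surface:

```python
# Hypothetical tool declaration for the spoken request
# "Order a replacement part for this device."
# Gemini function calling describes tools with JSON-schema-style
# parameter definitions; this particular function is an assumption.
ORDER_PART_TOOL = {
    "function_declarations": [
        {
            "name": "order_replacement_part",
            "description": "Order a replacement part for a device recognized on camera.",
            "parameters": {
                "type": "object",
                "properties": {
                    "device_model": {"type": "string"},
                    "part_name": {"type": "string"},
                    "quantity": {"type": "integer"},
                },
                "required": ["device_model", "part_name"],
            },
        }
    ]
}

# Illustrative live-session config: audio replies, with the tool attached
# so the model can emit a structured function call instead of answering.
LIVE_CONFIG = {
    "response_modalities": ["AUDIO"],
    "tools": [ORDER_PART_TOOL],
}
```

When the model judges that a request matches the declaration, it returns a function call (name plus arguments) for the app to execute against its own ordering backend, and the result is fed back into the conversation.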

Performance Benchmarks and Accessibility

The model's ability to handle complex, multi-step tasks is validated by a 90.8% score on the ComplexFuncBench Audio benchmark. Early user feedback reported by TechCrunch indicates a significant reduction in conversational pauses compared to last year's beta. For improved accessibility, the system also mirrors the user's speaking pace. The global launch covers over 200 countries and territories, excluding markets where AI Mode is not available, positioning Search Live as an integrated assistant for daily tasks.

How do I use Google Search Live?

Open the Google app and tap the new "Live" icon located below the search bar. After granting mic and camera access, you can point your phone at any object, from product labels to street signs, to get an instant spoken answer and relevant on-screen links.

What languages and countries are supported?

At launch, Search Live is available in over 90 languages across more than 200 countries and territories where Google's AI Mode is active. The feature allows seamless language switching within a single conversation.

How does it handle noisy environments?

Gemini 3.1 Flash Live is engineered to filter out ambient noise like traffic and crowds. It tracks the user's pitch and pace to understand intent and features doubled conversation memory to reduce the need for repetition.

Is there an API for developers?

Yes, a preview API (gemini-3.1-flash-live-preview) is available in Google AI Studio. It provides developers with the same low-latency vision and search-grounded capabilities found in the consumer feature.

What is the expected impact on Google's business?

Analysts quoted by Intellectia.ai anticipate that keeping users engaged with voice and camera interactions within the app will lead to longer sessions and higher ad click-through rates.