Industry News | 6/21/2025

Google Introduces Gemini AI Features for Conversational Search

Google has launched two new AI-driven features, Audio Overviews and Search Live, enhancing its search capabilities to create a more interactive user experience. These features, powered by the Gemini AI model, aim to transform traditional search into a conversational assistant, allowing users to receive spoken summaries and engage in voice-driven dialogues with the search engine.

Google has unveiled two innovative features designed to enhance its search capabilities: Audio Overviews and Search Live. These features, which are currently available to U.S. users through the experimental Search Labs platform, represent a significant shift towards a more conversational and interactive search experience.

Audio Overviews

The Audio Overviews feature allows users to receive spoken summaries of search results. When searching for specific topics, users can opt to generate an audio overview, which is particularly useful in situations where reading is inconvenient, such as while multitasking or exercising. The audio summaries are generated by Google's Gemini AI and are presented in a conversational format, providing a quick overview of the topic. The audio player is integrated into the search results page, offering controls for playback and links to the original web pages for further exploration. However, this feature has raised concerns regarding its potential impact on website traffic, as users may find answers without needing to visit the source.

Search Live

The second feature, Search Live, enables real-time, voice-driven conversational search within the Google app for iOS and Android. By tapping a new "Live" icon, users can initiate a spoken dialogue with the search engine, asking questions and receiving audio responses generated by a customized version of Gemini. This feature supports natural back-and-forth conversations, maintains context, and allows users to switch between voice and text input. A transcript of the conversation is also available, enhancing the user experience, especially for those on the go.

Multimodal Search Experience

These new features are part of Google's broader strategy to create a multimodal search experience, which was emphasized at its recent I/O conference. The company aims to integrate various forms of interaction, including visual search capabilities through Google Lens. This multimodal approach allows users to ask complex questions about images and receive detailed answers, leveraging Gemini's ability to process both visual and textual information.

Future Developments

Looking ahead, Google plans to enhance Search Live by incorporating real-time camera functionality, aligning with its Project Astra initiative. This project aims to develop a universal AI agent capable of understanding and responding to users' environments through sight and sound.

In conclusion, the introduction of Audio Overviews and Search Live marks a pivotal evolution in Google Search, transforming it from a traditional search engine into a more interactive and conversational assistant. While these features are currently limited to U.S. users who opt in through Search Labs, they offer a glimpse of a future in which search is more intuitive and more deeply integrated with artificial intelligence. The rollout has not been without challenges, however, including concerns about the accuracy of AI-generated information and the potential impact on website traffic.