
Google is taking Nano Banana, its AI-powered image generation and editing tool, beyond the Gemini app. The company has begun integrating it into Google Search through Lens and into NotebookLM, and has confirmed that it will come to Google Photos soon.
Based on the Gemini 2.5 Flash Image model, the tool turns text instructions into visual results that maintain character and style consistency. According to the company, users have already generated more than 5 billion images, which is driving its rollout to products that many people use every day, initially in the United States with planned expansion to more languages and regions.
What is Nano Banana and how does it work?
In essence, Nano Banana interprets natural language prompts to create images from scratch or apply precise changes to a photo. You can change backgrounds, adjust colors, remove objects or people, restore old images, zoom out, modify expressions, and merge multiple shots into a single cohesive scene.
One of its key strengths is visual consistency: it preserves facial features and object arrangement across multiple edits, which is especially useful for keeping a character or style intact across several variations. The more detailed the request, the more faithful the result.
To strengthen traceability, Google adds SynthID (a visible watermark plus a digital signal in the metadata) to generated or edited images, helping to identify content produced with this technology without affecting its perceived quality.
Search and Google Lens: Create and edit from your mobile
In the Google app, Lens debuts a "Create" mode. From there, you can take a photo or choose one from your gallery and type instructions so the AI can apply the changes instantly. On some devices, a "Nano Banana Create" button appears next to the search and translation options for quick access to these features.
The experience guides you with example prompts such as "Turn me into a puppet" or "Put me on a street in Europe," and lets you switch between the front and rear cameras before sending the prompt. After capturing, the image is added to the AI Mode text box so you can describe the transformation you want to see.
A practical case: if someone wants to try on an accessory without physically putting it on, it is enough to take a photo of themselves and another of the item; the AI can combine both and show how it would look. The entire workflow is concentrated in Lens, so there is no need to leave the app for quick editing tasks.
This integration is being activated first in English, on both Android and iOS, and Google says it will progressively expand to more markets and languages.
NotebookLM: Video styles and summaries with visual support
In NotebookLM, Nano Banana works in the background to enrich the Video Overviews with contextually generated images from user-added sources. The tool includes six creative styles and allows you to adjust the video format.
- Styles: watercolor, anime, papercraft, whiteboard, retro print and heritage.
- Formats: a more detailed video (“Explainer”) and a short one (“Brief”).
The goal is for the visual explanations to be more useful and context-appropriate, with illustrations that are not limited to generic stock images but instead reflect the actual content of the documents uploaded to the platform.
Google Photos: What's Coming
Google has announced that Nano Banana will arrive in Photos in the coming weeks. Although no details have been provided, the idea is that users will be able to edit and enhance images directly from their library, combine shots, or improve portraits without leaving the app.
Price, plans and availability
For the general public, Nano Banana can be used for free from the Gemini app on mobile and web. Simply upload a photo and enter the desired instructions to quickly generate or edit content.
For professional use, access is provided via Google AI Studio and Vertex AI with usage-based billing: $30 per million output tokens, a rate that Google's cost examples roughly equate to $0.039 per generated image. Some plans, such as Google AI Pro, include high daily edit quotas.
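As a sanity check on those figures, here is a minimal sketch of the per-image arithmetic. The ~1,290-output-tokens-per-image figure is an assumption drawn from Google's published Gemini API pricing examples; actual billing may differ by image size and plan.

```python
# Hypothetical cost estimate for image output billed per token.
# Assumptions (not guarantees): $30 per million output tokens,
# and roughly 1,290 output tokens billed per generated image.

PRICE_PER_MILLION_TOKENS = 30.00   # USD, output tokens
TOKENS_PER_IMAGE = 1290            # assumed tokens billed per image

def cost_per_image(images: int = 1) -> float:
    """Estimated USD cost to generate `images` images."""
    return images * TOKENS_PER_IMAGE * PRICE_PER_MILLION_TOKENS / 1_000_000

print(round(cost_per_image(1), 4))    # ~0.0387, i.e. roughly $0.039 per image
print(round(cost_per_image(100), 2))  # ~3.87 for a batch of 100
```

Under these assumptions, a batch of 100 generated images would cost just under four dollars, which matches the order of magnitude of Google's own cost examples.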
Expanded availability has begun in the United States, and the company says the rollout will extend to more countries and languages. Meanwhile, Lens's Create mode will continue to gradually gain capabilities and improvements.
Practical uses, limits and safety
In addition to creating from scratch, the AI shines at complex edits such as restoring old photos, changing the atmosphere of a scene, or maintaining a person's identity across multiple transformations. It can also merge images and adjust lighting and perspective to achieve a consistent result.
As with all generative AI, prompts may be misinterpreted and results may be imperfect. Accuracy improves with detailed instructions and successive iterations; in fact, the system remembers the image state so it can apply changes in succession. The adoption of SynthID and enhanced metadata aims to mitigate risks of misuse and make generated content easier to identify.
The combination of text-guided editing, direct integration into popular products, and traceability positions Nano Banana to become a relevant part of the Google ecosystem. Between Search, Lens, NotebookLM, and its future arrival in Photos, the line between capturing and creating grows shorter, and a fast, reliable image-generation workflow is within everyone's reach.
