Google confirms Gemini-powered AI smart glasses for 2026 in two versions
Google will launch Gemini-powered smart glasses in 2026 in two versions: an audio-camera model and one with an in-lens display, both running Android XR and delivering real-time Gemini prompts.
Google has officially confirmed plans to launch its first AI-powered smart glasses in 2026, arriving in two versions. Both will integrate Google Gemini and are designed as an assistant you can use on the move, without constantly checking your phone. The framing suggests a bet on ambient help over yet another screen to manage.
The first version follows a straightforward, practical formula: built-in speakers, microphones, and a camera so you can talk to Gemini and receive real-time prompts. The camera lets you take photos and then ask the AI about your surroundings—for example, to identify an object, suggest which way to turn, or read what’s on a sign. It’s a measured starting point that favors everyday usefulness.
The second version is notably more advanced. In addition to the same AI features, it adds a display inside the lens. That screen can surface helpful prompts such as turn-by-turn navigation and real-time translation captions, effectively turning the glasses into a tiny, glanceable display that delivers information exactly when it’s needed.
Both models will connect to a smartphone, with most data processing handled on the phone. They will run on Android XR, Google’s platform for wearable devices. Samsung is involved in development, while Warby Parker and Gentle Monster are responsible for design. Google emphasizes style, lightness, and all-day comfort so the device doesn’t come off as a gadget for its own sake—an approach that points to technology meant to blend in rather than demand attention.