I tried Google and Samsung’s Android XR headset and glasses: Gemini AI can now see my life
Android XR is coming in 2025, arriving first on Samsung's mixed reality headset, with glasses to follow. I've tried a bunch of demos, and the always-on AI that can see what I see is amazing, and full of unanswered questions.
Google has announced Android XR, Samsung has a mixed reality headset built for it, and glasses are in the works. This is a lot to digest, and I just got out of a demo of this thing, so let me talk about it.

Google has been promising a partnership with Samsung and Qualcomm for a year now. When we talk about this future of mixed reality, we already have Apple in the landscape, we already have Meta in the landscape, and we have other players as well. Now Google is back in the landscape; for the most part, they had been out of VR and AR. Android XR is a new platform. It's going to work with phones, it's going to work with other things, and it's Gemini-powered, so Gemini AI will run through it, not only on mixed reality headsets but, Google says, on AI glasses, AR glasses, and all kinds of other emerging wearables for your face.

What I got to try, because it's only for developers right now, is a headset made by Samsung, which is a mixed reality headset, and also a pair of display glasses tied to what Google calls Project Astra, which has been in the works for some time and will soon begin field testing. Now, bear with me, because I'm going to talk a lot and we don't have any photos or videos of my demo; they weren't allowed. In my experience, that's pretty standard for early VR and AR.

First of all, Project Moohan, the current name of the mixed reality headset that Samsung will release next year, looks a lot like the Vision Pro, or like the Meta Quest Pro, if you remember that headset. It's a visor-style headset that sits on your head, with lenses here that can block light if you want and a dial that tightens in the back. It has a nice sharp screen, and it has hand and eye tracking. That's not what's exciting; much of that is known. When I tried the demos in this headset, 2D apps appeared in an Android-like interface. I was able to launch familiar Google apps, move around using my hands, and bring out different panes. But I could also bring in Gemini to be my companion throughout the journey.

Now, you already have things like Siri running on the Apple Vision Pro, and Meta has some voice AI features of its own. But this is something that can also see what you're doing, and what makes it fascinating is that I can actually point at something and say, "Hey, what's that?" or "Tell me more about this," move my hand there, and Gemini will tell me. I discovered I could search the web with it. I could ask it a question about something, or ask about where I live, and it pulled up a map. (There's a rough sketch of what that kind of multimodal call looks like below.)

Google also has impressive apps for this. At this point, YouTube and Maps were the most prominent; Apple doesn't even have an immersive Maps app for Vision Pro. Maps let me bring up a large 3D environment and zoom into my home, impressively. But I could also point at things in the landscape and ask about them, and Gemini could tell me what they were, and that combination became really interesting. I felt like I was starting to get lost in the apps and scrolling around.

Gemini can not only recognize what you are doing, it can be your memory. When I was done with a bunch of demos, I asked Gemini to recap what I had been doing, and it told me all the different things I had done. It also had tricks I hadn't seen before, like generating live subtitles for a YouTube video as it played. Google has a few other tricks up its sleeve with Android XR.
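Those "point and ask" moments are, at their core, multimodal queries: a camera frame plus a spoken question. Here's a minimal sketch of that kind of call using the public google-generativeai Python SDK. To be clear, this is not the Android XR integration, which isn't public; the headset's camera feed and pointing gestures are stood in for by a single image file and a text prompt, and the model name and file names are just placeholders.

```python
# Hypothetical sketch: the kind of multimodal "what am I looking at?"
# query Gemini supports today via the public google-generativeai SDK.
# NOT the actual Android XR plumbing -- a captured frame stands in for
# what the headset sees, and text stands in for the spoken question.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

model = genai.GenerativeModel("gemini-1.5-flash")

# A frame from the wearer's point of view (stand-in for the headset's
# passthrough camera).
frame = Image.open("living_room_frame.jpg")

# In the demo, the question was paired with a pointing gesture; here the
# prompt carries that context instead.
response = model.generate_content(
    [frame, "What is the framed artwork on the wall in this image? "
            "Give a short, conversational answer."]
)
print(response.text)
```

The interesting part on the headset isn't the call itself, it's that this loop runs continuously against whatever you're looking at, rather than one photo at a time.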
One of those tricks is automatically converting 2D photos to 3D, which is something Apple now also does on Vision Pro, but Android XR does it for videos too. I watched a video on the headset that had been pre-converted, Google said, but I also watched a YouTube clip that was converted. I didn't get to see the conversion process happen live, but it was pretty wild to see that the ability will be there. So that's all the mixed reality stuff that's going to be on this developer headset, which is meant for people to start getting a feel for the platform and to build a starting point for everything else.

But then there are the glasses. I put on a pair that looks like Meta's Ray-Bans: kind of chunky, but pretty normal-looking, and wireless. There was one display on these glasses, a small area etched with waveguides and a micro-LED projector that could give me a heads-up display. What could I do with it? A whole bunch of things. I could activate Gemini with a button on the side and put it into an always-on mode. It identified the wall art in a staged living room, and I could ask about items around me; I had questions about the books, and I was able to point things out and have it tell me about them.

Then there's translation. It could translate things, and I could ask for the result back in another language. What I find fascinating is that it also does live interactive translation. Someone in the room came up and spoke to me in Spanish, and immediately, captions of what they were saying appeared in English as they spoke.

All of this assumes you'll have an always-on Gemini involved in your life. With conversational responses, that might seem very intrusive, but on the glasses you can pause it with a simple touch on the side and basically suspend it. It was very different from something like Meta's Ray-Bans, which assume the AI stays off until you ask for it; here, it assumes Gemini is on until you want to pause it. The same applies in VR and mixed reality. Am I going to turn on Gemini and then suddenly use it all the time, in a way that could really affect how I experience the apps? It's fascinating for things like, say, playing a game where I want a tutorial or have questions about something. It sort of breaks the fourth wall of VR and AR and could change the way apps are developed.

But let's take a step back. This is all for developers right now, and Android XR will be released in fuller form next year, according to Google. Samsung will announce and go into more detail about the mixed reality headset then, and it could be very expensive; again, it's expected to be something like a Vision Pro, and many people may not necessarily want to buy it, but it will be there. The other thing that is really interesting is where all the glasses are going, and that could come a year or more after, starting with these accessory-style glasses. There are already other partners in the works, like Xreal; we just saw some glasses they made. Google will have many different partners making AI and AR glasses that work with phones. Then the ball will be in Apple's court: when is Apple bringing generative AI to Vision Pro, and when will Apple start allowing iPhone connectivity for Vision products? Because right now, that's not happening.
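That live-caption translation flow is worth unpacking: transcribed speech goes in, a short English caption comes out, over and over as the other person talks. Here's a minimal sketch of just the translation step, again with the public Gemini SDK; the real glasses presumably run speech recognition and translation on a streaming, low-latency, possibly on-device path, none of which is public, so the function below is purely illustrative.

```python
# Hypothetical sketch of the live-caption translation step described
# above: one transcribed Spanish utterance in, one English caption out.
# The streaming speech-recognition front end is omitted; this only
# illustrates the translation call with the public Gemini SDK.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")

def caption_in_english(spoken_text: str) -> str:
    """Translate one utterance into a short English caption."""
    response = model.generate_content(
        "Translate this into natural English, as a one-line caption: "
        + spoken_text
    )
    return response.text.strip()

# Example: one utterance from the Spanish speaker in the demo.
print(caption_in_english("¿Qué te parece el nuevo visor de realidad mixta?"))
```

The hard part the glasses have to solve isn't the translation, it's doing this fast enough, utterance by utterance, that the captions keep pace with a live conversation.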
But I think that's really key to what's coming next, and it looks like it's going to make the idea of everyday glasses feel a lot more immediate. We don't know about battery life yet, though, so that's one wild card here that could be quite significant. It's a lot to digest, and I'm still digesting it in my head. I'm sure you have questions; leave them below, and I'll follow up as we learn more. But I was just glad to get the demo, and I'm asking a lot more questions about what AI will be in VR and AR than I ever have before. Thanks for watching.