Samsung is the first Android XR partner to announce an MR headset, under the catchy codename "Project Moohan." The device is set for a public debut in 2025, and I've had the chance to experience an early prototype first-hand.
A quick note up front: Samsung and Google are keeping mum on many key specs, including resolution, weight, field-of-view, and pricing. Strict demo rules meant no photos or videos, so all we have is the official release image.
Picture Project Moohan as a blend of Quest and Vision Pro. It's striking how much these devices echo each other's capabilities and designs. From color palette to button layout to the calibration process, you can see how elements of other leading headsets have been cleverly incorporated.
On the software front, imagine tasking someone with merging Horizon OS and VisionOS; if they came back with Android XR, it would be pretty spot on.
The likeness of Project Moohan and Android XR to other major headset platforms is astonishing. But let's be clear: this is not to claim any sort of design theft. Tech firms continuously borrow, refine, and perfect each other's best ideas, striving to adopt what works and drop what doesn't. If Android XR and Project Moohan successfully embody the best aspects of their peers without inheriting their shortcomings, it's a win for both developers and users.
And yes—many positive attributes seem to be there.
Now, let's talk about the hardware of Samsung's Project Moohan. At first glance, it's a visually appealing device, capturing a 'goggles' aesthetic similar to Vision Pro's. But unlike the Vision Pro's sometimes uncomfortable soft strap, Samsung opts for a rigid strap with a convenient tightening dial, closely echoing the ergonomics of Quest Pro. This design leaves your peripheral vision open, which is advantageous for AR experiences; and as with Quest Pro, those looking for isolation can add magnetic snap-on blinders for a more immersive session.
It’s fascinating how much of the layout and button details draw parallels with Vision Pro, though Project Moohan skips an external display for showing the user’s eyes. The ‘EyeSight’ feature of Vision Pro might have its critics, but I see it as an asset, something I’d wish Moohan included. After all, it can be awkward not seeing the eyes of someone who, thanks to Vision Pro, can see you.
Samsung is still cagey with tech specs, emphasizing that it’s in the prototype phase. What we do know is that it’s powered by a Snapdragon XR2+ Gen 2 processor, a beefier chip than those in Quest 3 and Quest 3S.
During my time with it, I discovered some intriguing details. The headset uses pancake lenses with eye-tracking for automatic IPD adjustment. The field-of-view seems narrower than Quest 3's or Vision Pro's, though I didn't get to try different forehead pad configurations, which could bring the eyes closer to the lenses for a wider field-of-view.
From my test, the field-of-view felt a bit restricted, yet still immersive, despite noticeable brightness dips at the screen edges. This might improve if the lenses were closer to my eyes, but currently, it seems Meta’s Quest 3 leads lens-wise, followed by Vision Pro, with Project Moohan trailing slightly.
Though Samsung confirmed Project Moohan would feature its own controllers, I’ve yet to see or test them. It’s unclear if they’ll ship with the headset or be available separately.
The experience relied heavily on hand and eye-tracking inputs. Surprisingly, it combines features familiar from both Horizon OS and VisionOS. Project Moohan allows raycast cursor use similar to Horizon OS or eye+pinch inputs reminiscent of VisionOS. A unique addition is its downward-facing cameras that can detect pinches even when hands rest in your lap.
When I got to try it out, the first impression was the striking clarity with which my hands were rendered. The passthrough camera quality seemed sharper than Quest 3’s, with less motion blur than Vision Pro’s, though my tests were in optimal lighting. Interestingly, my hands appeared notably clearer, hinting the passthrough cameras might be focused roughly at arm’s length.
Shifting to Android XR, it immediately channels Horizon OS and VisionOS. The home screen mirrors Vision Pro’s with app icons over a clear backdrop. Selecting an app involves a simple look and pinch, popping open floating panels. The system windows lean closer to Horizon OS’s opaqueness, with easy repositioning by reaching out to an invisible frame.
Android XR supports immersive activities too. I previewed an immersive Google Maps reminiscent of Google Earth VR, allowing globe exploration and 3D renderings of major locations, enhanced now with interior volumetric captures.
In contrast to Street View’s monoscopic images, these volumetric captures offer real-time, interactive exploration. Google hinted at existing site photography contributing to this but didn’t clarify whether a new scan was necessary. Though not as sharp as photogrammetry, it wasn’t bad, and Google anticipates sharper captures with time.
Google Photos is optimized for Android XR, allowing 2D to 3D conversions of images and videos. The brief glimpse I had showcased impressive results, akin to Vision Pro’s similar function.
YouTube too, is getting an upgrade. Beyond the typical flat viewing, it supports the platform’s vast 180, 360, and 3D content. While quality varies, it’s nice to see older content not left behind, and I anticipate their library’s expansion to accommodate future headsets.
What caught my eye was a 2D YouTube video reimagined in 3D for the headset, with conversion quality similar to Google Photos'. It remains unclear whether this is a creator-enabled feature or an automatic process run by YouTube, but details will surely unfold.
In terms of AI, Android XR and Project Moohan shine. Google’s AI agent, Gemini, specifically the ‘Project Astra’ version, stands out. Right from the home screen, it can be summoned. It hears what you say and sees what you see, both in reality and virtually, continuously. This gives it a more intuitive and conversational edge over competing headset AIs.
Sure, Vision Pro has Siri, but Siri's interaction remains rather transactional, limited to task execution rather than fluid conversation.
Quest introduces a Meta AI agent that perceives the real world but lacks awareness of virtual content, which leads to a jarring gap. Future updates might change this, but presently, it analyzes the world through snapshots initiated by user inquiries.
Conversely, Gemini receives more of a low-framerate video feed from both realms, eliminating any awkward breaks to focus on objects during queries.
Gemini’s advantage lies in its contextual memory. Google says it retains conversational details for about ten minutes, allowing references to past dialogues and sights.
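That kind of time-limited memory can be pictured as a rolling buffer of timestamped events, where anything older than the retention window is discarded. Below is a minimal conceptual sketch of such a buffer; the class name, the ten-minute default, and the event format are my own illustration, not Google's actual implementation.

```python
from collections import deque


class RollingContext:
    """Conceptual sketch of a time-windowed conversational memory.

    Events (utterances, observed scenes) older than `window_s` seconds
    are dropped, mimicking the roughly ten-minute retention described
    for Gemini. Purely illustrative; not Google's implementation.
    """

    def __init__(self, window_s=600):
        self.window_s = window_s
        self.events = deque()  # (timestamp, event) pairs, oldest first

    def add(self, event, now):
        """Record a new event and evict anything outside the window."""
        self.events.append((now, event))
        self._evict(now)

    def recall(self, now):
        """Return all events still inside the retention window."""
        self._evict(now)
        return [event for _, event in self.events]

    def _evict(self, now):
        # Drop events older than the window, oldest first.
        while self.events and now - self.events[0][0] > self.window_s:
            self.events.popleft()


ctx = RollingContext(window_s=600)
ctx.add("saw a Spanish sign", now=0)
ctx.add("translated the sign to English", now=300)
print(ctx.recall(now=400))  # both events still in the window
print(ctx.recall(now=700))  # the first event has aged out
```

The point of the sketch is simply that "remembering for about ten minutes" implies eviction by age rather than by count, which is why older sights drop out of the agent's answers even mid-conversation.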
A common demo of its abilities is asking it to identify objects around the room. I attempted a few trickier questions, but Gemini deftly dodged the pitfalls.
To test its translation ability, I asked it to convert a Spanish sign to English, which it did speedily. Then, cheekily, I asked for a French translation of a sign that was already in French; it accurately read the sign back in French, with the appropriate accent. Later questions about the signs it had seen confirmed its impressive recall and context tracking.
Gemini goes beyond just answering questions; it can manage headset functions too. A command like "take me to the Eiffel Tower" launched the immersive 3D map view, and further curious questions about the landmark were handled seamlessly.
Gemini could also locate specific YouTube content, like a suggested ground-level video of the Eiffel Tower, matching the location in question.
Hypothetically, Gemini should handle routine assistant tasks (texts, emails, reminders) while expanding with XR-specific capacities.
As of now, Gemini on Android XR feels like the most cohesive AI on a headset, surpassing current offerings like Meta’s Ray-Ban smartglasses. But with Apple and Meta investing in similar evolutions, it’s anyone’s guess how long Google will hold this advantage.
Gemini and Project Moohan together excel in enhancing spatial productivity but hint towards a greater potential in everyday wearables like smartglasses, which is another topic for another time.