The Metaverse was never about Meta
Research continues to advance the ecosystem
The Metaverse was never about Meta. But Zuckerberg was the only one with the audacity to put billions of dollars into product development and to reposition the company’s brand around a more forward-thinking vision. (Let’s face it: Facebook is so early 2000s; I check it daily to see what a few friends are up to, but mostly it’s a big yawn in my day.)
The dream, which some call a nightmare, of the metaverse has a foundational restriction: hardware, specifically optics.
These optics are not a metaphor, like the optics of whether data centers are good or bad for society. These optics must exist in a physical device worn on a person’s face at an affordable price. Many scoff that people will never wear augmented reality glasses, much less VR goggles. Those who scoff seem to forget that more than 70% of adults in the US wear sunglasses, and billions across the globe wear eyeglasses for corrective vision. I’ve worn glasses since the fifth grade. If eyewear is useful, you’ll wear it.
When the value and affordability reach a mass market, we will be in a different era of computing. Zuckerberg is right that another platform will replace the smartphone. (Maybe it won’t be eyewear, but our devices will get smaller and more portable. Or maybe neural implants. I’m not even ready for that last one.)
Unlike software products, the materials and engineering of this reality-adjusting eyewear, whether glasses or VR goggles, determine when the products come to market. The tech is maturing, but the price point is not yet there.
Meanwhile, as Zuckerberg was starting his move toward the metaverse, OpenAI had the audacity to pour massive amounts of money into a pursuit that reached consumers as software via their existing web browsers. I know Sam Altman claims that they didn’t have a product in mind when starting OpenAI. But he’s an entrepreneur. Surely he knew where OpenAI was going.
Maybe Zuckerberg should have made the bet on AI instead of optical hardware. But, as we’re seeing, AI is becoming a commodity. And we’ve also seen that a premium headset finds an audience: Apple Vision Pro.
The audience for Apple Vision Pro is small, but many who have the Apple device report excellent experiences, whereas many who have the Quest have it piled in the back of a closet. The Quest was intended as a gateway to VR, never as the epitome of the technology.
A market report published on March 13, 2026 by treeview, a spatial computing company, estimates that more than 475,000 units of the Apple Vision Pro have been sold.
My advice to anyone following the tech market has always been to follow the reports coming out of the research labs. Every day I get a feed from scholar-inbox with a dozen papers detailing advances in the ecosystem that will encompass our digital lives. All the work on 3D and world models is directly applicable to the products we will experience through smart glasses and headsets, even if these articles never mention virtual reality.
A few examples from just today, April 3, 2026:
Lifting Unlabeled Internet-level Data for 3D Scene Understanding (arXiv link)
Reflection Generation for Composite Image Using Diffusion Model (arXiv link)
Omni123: Exploring 3D Native Foundation Models with Limited 3D Data by Unifying Text to 2D and 3D Generation (arXiv link)
LivingWorld: Interactive 4D World Generation with Environmental Dynamics (arXiv link)
Anime-Ready: Controllable 3D Anime Character Generation with Body-Aligned Component-Wise Garment Modeling (arXiv link)
ActionParty: Multi-Subject Action Binding in Generative Video Games (arXiv link)
Director: Instance-aware Gaussian Splatting for Dynamic Scene Modeling and Understanding (arXiv link)
Large-scale Codec Avatars: The Unreasonable Effectiveness of Large-scale Avatar Pretraining (arXiv link) Note: this one comes from the Codec Avatars Lab at Meta.
Generative World Renderer (arXiv link)
And that’s just from one day.
I don’t want to be misleading. Many of those papers were submitted to conferences and are only now being published on arXiv. But it’s still an astonishing amount of research being produced.
For those of you keeping count and concerned about US vs. China dominance in AI: six of those nine research papers were produced by Chinese institutions. The one by Meta was the only US representative in the lot. There was also one from Korea and another from a joint Canada/UK effort. One of the China-produced papers was a collaboration with a Japanese studio. This sample is not scientific, just a random set from one day. The whole US/China discussion is extensive and, I think, misleading.
What’s most difficult to track: when and which parts of research will come to market as products.
Let’s set aside the word metaverse, which came from a barely readable novel. With this newsletter, I admittedly hopped onto a train that seemingly went nowhere. To extend the railroad analogy: the train carrying the metaverse relied on tracks not yet laid. It’s not the end of the line for extended reality, and that concept, extended reality, is a mode more encompassing than the metaverse.
I’ll be renaming this newsletter. It won’t be called extended reality is open. My next post will reveal the name. I’ll be using a term that has stuck with me now for nearly 25 years.
I will continue to write about what I’ve always written about: how we tell and experience stories, whether fictional or factual, in digital media. And that term, digital media, is so vague that I rarely use it anymore. (It has a late 1990s feel.) Digital media is software. But we only experience software through our devices.
Software is the easier part, especially software based on open standards and open source. The future is open for us to make an endless hybrid of experiences.