Text is not for everyone. This is why I decided to start accompanying my posts with audio narrations. As a non-native speaker, it feels rather awkward to record my own voice, but I'd rather swallow an entire Cybertruck than have an AI-generated Gwyneth Paltrow speak my thoughts out loud. So, with that said, apologies and happy listening.
Over the weekend, I caught up on one of my favorite podcasts, Pivot. If you’re interested in tech and politics and found Elon Musk’s recent appearance at Trump’s rally kind of unsettling, it’s worth a listen. In the episode, hosts Kara Swisher and Scott Galloway talked about Meta’s new Orion AR glasses. Mark Zuckerberg’s main message? These glasses are designed to make taking pictures and recording videos easier, and in the future, they’ll pair with a new wristband that can display augmented reality (AR) information around you—potentially replacing the need for a phone altogether.
The concept of smart glasses is not really new. Back in 2011 we had Google Glass, which was discontinued in 2015, re-branded as enterprise-only, and then killed for good in 2023. Then came Snap’s Spectacles, Amazon Echo Frames, Meta’s collaboration with Ray-Ban, and most recently the Apple Vision Pro which, granted, may be marketed as a headset rather than glasses, but function-wise it aligns pretty well with the aforementioned products.
The reasons why this family of products has not yet managed to dominate the market have been extensively discussed. First off, they’re expensive—pretty straightforward and likely a problem that will sort itself out within the next decade. What we will dive into in this post, however, is the fact that, second, they often look bizarre. And third, and perhaps most importantly, they lack a clear and compelling reason for existing. So if the form factor is off and there’s no real product-market fit (meaning no genuine need they address), why are we still talking about them?
Most people seem to agree: glasses, so far, have been awkward. Even the so-called sleek designs still look somewhat unattractive, making users self-conscious and less likely to wear them in public. This struggle with form factor has been a persistent barrier, as nobody wants to walk around looking like they’re wearing a sci-fi prop instead of an inconspicuous accessory.
Yet whenever the discussion shifts to matters of design, I am reminded of Jaron Lanier’s piece in The New Yorker, written in response to the launch of Apple Vision Pro back in 2023. In this essay Lanier discusses how, when he was making VR headsets, he was going for campy and bold looks. He writes:
“You wouldn’t want to disguise a motorcycle as a bicycle.”
This quote is illuminating. The concept of “disguising” the wearable technology that ultimately functions as an extension of your identity has fluctuated through the years. Wearables in general have gone from bulky to slim and back again, as if the technology is constantly renegotiating the space it wants to take up in our lives. Remember the Walkman? A chunky brick that we carried around like it was no big deal. Then we had the Discman, the iPod, and before we knew it, music devices were small enough to lose in your pocket.
However, the trend is neither universal nor linear: we still have a large number of people sporting oversized headphones again (think the Apple AirPods Max). As of 2022, global headphone shipments reached 553 million units, and in the US the average buyer is expected to purchase a new pair every two years. The mere existence of the Cybertruck, a car that looks like someone hit “export” at 10% rendering, proves that this design mentality extends far beyond the realm of pocket-sized gadgets. Is it comfort? Style? Or are we just swinging between hiding and flaunting our tech as a way of signaling our relationship with it and the future it points to?
Let’s switch perspectives for a moment. Instead of viewing the Orion glasses as an attempt to cram an entire computer onto your face, we could see them as an effort to expand the capabilities of an everyday object such as your glasses. This line of thinking points to what David Rose calls "enchanted objects"—everyday items infused with technology to subtly augment their functionality. Rose’s go-to examples are a round table that visualizes conversation dynamics in real time, and an umbrella that lights up to signal an upcoming storm, reminding you to take it with you on your way out.
These examples are charming but also incredibly short-sighted. For one, I’d like to argue that, in our attempt to amplify the functionalities of everyday objects such as a table or a pair of glasses, we inevitably turn human skills and responsibilities into outsourced digital tasks. This, I argue, is because these functionalities need to be borrowed from the same context they will exist in in order to be somewhat useful and relevant from the get-go. Ensuring that everyone has a voice in a conversation is a core part of our social development, not something to delegate to a flashy object that lights up when someone’s being left out. By lazily transferring our responsibility to a charmingly designed digital tool, we’re not enhancing ourselves but rather diminishing our ability to be fully engaged social beings.
Rose’s other example, the umbrella, is not AI-infused, but it does stand as the progenitor of an entire family of objects that are soon to be exactly that. This is why, when I look at Rose’s umbrella, I’m reminded of William Gibson’s famous quote: “The future is already here—it’s just not evenly distributed.” This isn’t just because AI-powered umbrellas and fridges and glasses aren’t available to every community or country, but because the environmental impact of these enchanted objects is already being felt by very specific regions around the world. As someone living in a city that, exactly one year ago, experienced the most devastating storm in the country’s history, a storm which later went on to kill over 5,900 people in a much less fortunate country than mine, I can confidently say that a glowing umbrella would have been completely and utterly useless during that catastrophe. This kind of devastation, we now know, is exacerbated by climate change—ironically, the same climate change fueled by technology that demands enormous amounts of energy and water to keep servers cool and the “enchantment” of AI running.
As I briefly explored in last week’s post, much like AI products, the current narrative and functionalities of glasses can be argued to be, well… completely unnecessary (at least to the average person; I’m sure a heart surgeon could very much use them for something more than spying on random strangers on the street). But the real issue isn’t just that these glasses don’t respond to any pressing need. What’s fascinating is that we’ve known this since 2013, yet somehow, teams around the world are still obsessively trying to push them into the mainstream. So what’s going on?
Let me circle back to the discussion between Swisher and Galloway on Pivot. While talking about spatial computing, Galloway mentions how his son tried out the Meta x Ray-Ban glasses to take a few pictures while skiing, but beyond that, he didn’t really use them for much else. Despite this, Swisher argues that we’ve hit a dead end with smartphones and that heads-up displays are destined to be the next big breakthrough. But this eagerness to push for a paradigm shift makes me wonder: is this whole narrative being forced?
Lanier’s article begins with the phrase “Apple’s Vision Pro headset suggests one possible future—but there are others”. This last part, the possibility of alternative futures, seems to have taken a back seat in our collective imagination these days. On the contrary, it almost feels like we’ve already decided in advance that, much like AI, smart glasses are the next paradigm, and what we are witnessing right now is engineers around the world scrambling to make this prediction come true. In other words, we’re trying to hammer a square peg into a round hole, pushing forward simply because it must work.
Speaking to architecture students at the Bartlett, Tobias Revell once argued that there is a looming inevitability behind any discussion around AI and ML—and I’d now like to add AR glasses to the mix as well. This is evident in the way all of the discourse, as he points out, is largely about limiting harms. To be fair, artificial intelligence is creating very real problems at a rapid pace right now, and teams led by people like Amba Kak are doing an excellent job of bringing our attention to them.
But this does not take away from the fact that things like AGI and smart glasses feel eerily like a sci-fi prophecy waiting to be fulfilled. We’ve all seen the movies—the augmented reality overlays, the immersive data projected into thin air. It’s as if we’re not exploring what the future could be, but rather running towards a future that was already scripted in the 1980s. It does not help either that the people with the power to realize these fictions are actively embracing sci-fi elements as blueprints, like Elon Musk’s fascination with Isaac Asimov’s “I, Robot” or Douglas Adams’s “The Hitchhiker’s Guide to the Galaxy”.
And this, I think, is the crux of the issue:
Design has played a major role in this. Take Tobias Revell’s example of Minority Report: remember that iconic scene where Tom Cruise manipulates floating screens with his hands, only to have one glitch mid-air, forcing him to awkwardly drag it back? That tiny flaw—the deliberate design decision to include a bug—actually makes the technology feel more real.
Similarly, Black Mirror has mastered this art of making the outlandish feel tangible. In an interview, designer Joel Collins explained that the believability of its dystopian tech hinges on visual familiarity. The show’s sets blend high-tech with domestic elements—warm wood tones and everyday furnishings—making each episode’s bizarre technologies feel eerily plausible. Take the episode “The Entire History of You”, where the concentric rings of the memory-recording device were inspired by the natural patterns found in the cross-section of a tree trunk. These subtle design choices ground fantastical ideas in something we already recognize, anchoring us in the familiar before letting us arrive at the extreme.
So, where does this all leave us?
In our quest to innovate, we may be losing sight of other possible futures that might be bolder, brighter, and perhaps more fitting to our collective psyche. The relentless push to materialize a vision from Minority Report or Blade Runner risks turning technology into something performative—enchanted objects that dazzle and distract, but ultimately fail to deliver any real value. Instead of responding to genuine human needs, we’re building things to match a fictional future, amplifying products at the expense of our own sense of purpose and responsibility in the here and now.
Maybe the answer will not come from Apple Vision Pro 34 but instead from one of Simone Giertz’s useless robots; mechanical rubber hands that will slap us relentlessly until we figure out what that one thing is that we are truly missing in our miserable existence, and that technology—and technology only—can offer us.
Until then, I’ll keep my ordinary, non-augmented glasses right where they belong: on my face, not forcing a prophecy, but simply doing their best by allowing me to see my own heavily distorted version of the world.
And maybe, just maybe, that’s enough for now.