the interface is not dying.
on altman, musk & co, the questionable ux of "invisible" design and why fascist tech leaders' spiritual belief in technology will be their downfall
The thing about stories is that they are made up, and when you make things up, they don’t have to reflect reality. When you try and put these stories, copied and pasted, into reality, disaster strikes.
For at least 30 years now, about as long as it has existed, the end of the graphical user interface has been prophesied. This generally stays in tech-nerd circles, but every now and again it toddles its way into the mainstream cultural consciousness.
In the last couple of months, Altman, Musk and Zuckerberg have all hailed their shiny new LLMs, and the trinkets they’ve cooked up to go with them, as the end of the traditional user interface. People are starting to listen. Maybe this time it is different; Jakob Nielsen himself even said so. Threats of an Altman-designed, screen-free AI gadget are everywhere you look.
buttons, buttons, lovely buttons
Currently, most buttons describe a step, or a sequence of steps. With the help of AI, some claim, we could see buttons that only state the outcome of the task, making things like arduous Photoshop edits a lot quicker.
This can be thought of as “removing interface bureaucracy,” and it’s based on an idea of the computer interface that has been around since the 60s, called “Do What I Mean” (DWIM):
DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user’s request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious. - Larry Masinter, 1981
This could lead to physical devices beyond the smartphone, because in order to operate correctly, this kind of computing would need access to maximum context. Technology is catching up to this long-standing DWIM vision, but leaping from there to predictions of an “interface-less” future is a bit of a stretch.
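To make the DWIM idea concrete, here is a minimal sketch in Python. Everything in it is my own invention for illustration (the command list, the thresholds, the crude string-similarity scoring); it is not any vendor’s actual logic. The system guesses which known command a free-form request means, acts when the guess seems “obvious,” asks when it’s merely plausible, and gives up when nothing is close:

```python
# A toy "Do What I Mean" dispatcher. The commands, thresholds and scoring
# are all illustrative assumptions, not a real product's implementation.
import difflib

COMMANDS = [
    "remove background",
    "crop to subject",
    "export as png",
]

ACT_THRESHOLD = 0.8  # the guess looks "obvious": just do it
ASK_THRESHOLD = 0.4  # the guess is plausible: expose the seam and ask

def interpret(request: str) -> str:
    """Map a free-form request onto a known command, DWIM-style."""
    scores = {
        cmd: difflib.SequenceMatcher(None, request.lower(), cmd).ratio()
        for cmd in COMMANDS
    }
    best, score = max(scores.items(), key=lambda item: item[1])
    if score >= ACT_THRESHOLD:
        return f"doing '{best}' (similarity {score:.2f})"
    if score >= ASK_THRESHOLD:
        return f"did you mean '{best}'? (similarity {score:.2f})"
    return f"no idea what you meant (best guess '{best}', {score:.2f})"

for request in ["remove the backgruond", "crop it to the subject", "eleven"]:
    print(f"{request!r} -> {interpret(request)}")
```

The sketch also shows why “maximum context” matters so much to this vision: the only thing separating “doing” from “no idea” is how much the system knows about what you probably meant.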
we don’t actually live in an episode of star trek
Their vision of what the future of technology beyond the interface looks like is a lot like what I discussed in my sci-fi series, which you can find on my Substack profile.
In essence, they think of this technology exactly as it appears in those sci-fi series. My series explores how such adaptations serve as propaganda for Big Tech, but they also genuinely inspire the likes of Altman and Musk, who think themselves visionaries creating a new world, where we all abide by whatever their hyperfixation sci-fi tale tells them the future should be.
I cannot over-emphasise how ridiculous this is, in so many ways. They are, firstly, not all-powerful. As much as they may like to, they do not actually control every single human being on the planet. They are not unstoppable, and these visions of technology hidden behind every corner are not something the masses will simply accept.
[Embedded video: the Burnistoun voice-activated lift sketch]
The infamous Burnistoun sketch of two Glaswegians attempting to operate a voice-activated lift describes exactly what the reality of these “interfaces” would be.
This is almost a play-by-play of every interaction I have ever had with one of these infernal voice-activated devices. My accent is pretty standardised professionally, but that disappears rapidly the second someone (or something) starts to agitate me.
This is the real world, not an episode of Star Trek. Altman and Musk’s fantastical visions of future interfaces aren’t based in real, everyday experience at all; they’re based in fantasy. We’re never all going to have perfectly attuned smart devices that anticipate what we want so well that we don’t have to use an interface. My accent is one demonstration of this, but there are many.
Neurodivergence and other disabilities pose questions of effective interaction. Different cultures have different norms around all sorts of things that would undermine this ridiculous vision. There is the ungodly amount of energy it would presumably use. Who are the workers creating and maintaining all this automation? All of these things create a perfect cocktail of disaster, whereby we find ourselves living not in a Star Trek episode, but in a patronising voice-activated lift.
This is where the tyranny of the persona comes to mind.
seamless vs seamful design
This whole project rests on the one golden rule of technology design:
“Good design is invisible.”
Like a great many ideological decisions in the field of technology, this is presented to young designers as an absolute, uncontested rule. That, however, is not the case. Seamless design has been critiqued for as long as it has been around, particularly by designers and computer scientists who consider themselves Marxists.
Seamless design as a pursuit, and DWIM and invisible interfaces by extension, are tools of abstraction. They abstract data, they abstract labour, and they serve only as a new form of gatekeeping by the powerful, still full of failure points.
The alternative is what is known as “seamful design,” where the limits and mistakes of a system are clearly displayed to users rather than hidden. Based on the idea that people interacting with technology have a right to know what that technology is and how it operates, seamful design holds that “good design is invisible” is antithetical to an equal society built on democratic technologies. You cannot, after all, build democratic technologies if the technologies are all invisible.
Interestingly, evidence suggests that designers can very quickly adapt to this approach to design, particularly where AI is concerned.
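As a rough illustration of the difference (the Suggestion fields and the confidence number below are made up for the example): a seamless system silently applies the model’s guess, while a seamful one shows the guess, how confident the system is, what it was based on, and a way to refuse it.

```python
# Seamless vs seamful handling of the same model output. All fields and
# values here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str        # what the system wants to do
    confidence: float  # how sure it is, 0.0 to 1.0
    based_on: str      # the context the guess came from

def apply_seamlessly(s: Suggestion) -> None:
    # "Good design is invisible": the action just happens. When the guess
    # is wrong, the user never learns why, or even that a guess was made.
    print(f"[done] {s.action}")

def apply_seamfully(s: Suggestion) -> None:
    # The seams stay visible: the guess, its confidence, its provenance,
    # and the user's power to reject it are all part of the interface.
    print(f"[proposed] {s.action}")
    print(f"  confidence {s.confidence:.0%}, based on {s.based_on}")
    print("  [accept] [edit] [reject]")

guess = Suggestion("replace 'floor ten' with 'floor eleven'", 0.55, "voice transcript")
apply_seamlessly(guess)  # wrong nearly half the time, invisibly
apply_seamfully(guess)   # wrong just as often, but accountably
```

Neither version makes the underlying model any better at accents; the seamful one just stops pretending it is.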
not just “ethical concerns”
When the conversation about these technologies frames transparency and privacy merely as “ethical concerns,” it obscures reality. These aren’t just ethical questions - they are also reasons why this technology won’t work the way its champions think it will.
A device that cannot clearly explain itself cannot reliably work in the real world. As designers point out, transparency is critical not just for trust but for usability; opacity leads directly to frustration, error, and rejection. This is why “frictionless” and “invisible” design often backfire: without clear cues or choices, users are left guessing whether the system is working, trapped by its misinterpretations with no recourse but to abandon it.
In Star Trek, the writers can script away these problems. But in lived reality, lack of transparency is a design failure, not just an ethical one. And as reliance on hidden context and massive data pipes grows, the system’s core weaknesses only intensify, leaving users more helpless and alienated.
For a truly “intelligent” system, transparency isn’t just an ethical question. It’s the difference between something that powers the future, and something that vanishes the minute anyone with a non-standard accent, need, or expectation tries to actually use it.
Most interface-less AI devices rely on cloud processing, constant data streaming and vast amounts of background computation. This increases energy usage and places new burdens on workers and infrastructure. Big Tech leaders abstract away these problems, but the Paradox of Automation persists.
As systems become more automated, the amount of labour required to power them only increases, creating abstracted labour systems. The very word “automation” is misleading, because it implies that these systems work on their own. They don’t; the work is just hidden. And when it’s hidden, it’s easier to ignore the exploitation. Taxi drivers can unionise - but the disparate groups of workers required to create and maintain a self-driving taxi? That is a more challenging task.
These questions of labour are central for us to ask ourselves as leftists - but they also point to a technical issue. You can’t magic away the sheer amount of human effort and energy required to make these “automated” systems run, and that creates a very long list of things that could go catastrophically wrong and bring the whole thing crashing down. Designers working with invisible systems are forced into blindfolded labour, without the transparency to iterate, debug, or advocate for real user needs, which inevitably leads to bad design that doesn’t work for real users.
They see a future powered by perfectly run interface-less devices. This is nothing more than delusion.
This is because Altman, Musk and their ilk don’t actually understand people. Their devaluing of the humanities will be their downfall. Software might script away reality, but lived complexity always returns: neurodivergent users locked out by narrow defaults, support workers tangled in invisible manual overrides, and designers condemned to tinker in the dark.
Who knew? In order to get people to buy your tech, and to get your tech to work for people, you have to understand people, which means not systematically destroying the study of people and those who study it. Crazy stuff.