ui that <think> with us, not like us
i came across this old manifesto by David Gelernter - the guy who co-invented lifestreams, a beautiful time-based document system. i was reading his piece from 2000 called "the second coming" and honestly? it hit different now that we're living in the age of ai interfaces.
Gelernter was already seeing the cracks in our digital world:
Computing will be transformed. It's not just that our problems are big, they are big and obvious. It's not just that the solutions are simple, they are simple and right under our noses. It's not just that hardware is more advanced than software; the last big operating-systems breakthrough was the Macintosh, sixteen years ago, and today's hottest item is Linux, which is a version of Unix, which was new in 1976. Users react to the hard truth that commercial software applications tend to be badly-designed, badly-made, incomprehensible and obsolete by blaming themselves.
He dedicates an entire section of the essay to the problems he sees with the file-and-folder organizational model, which is analogous to the current chat-based interaction model:
27. Modern computing is based on an analogy between computers and file cabinets that is fundamentally wrong and affects nearly every move we make. (We store "files" on disks, write "records," organize files into "folders" --- file-cabinet language.) Computers are fundamentally unlike file cabinets because they can take action.
[...]
30. If you have three pet dogs, give them names. If you have 10,000 head of cattle, don't bother. Nowadays the idea of giving a name to every file on your computer is ridiculous.
the chat trap
i totally agree with these points. these are things i've been thinking about for a while now, especially every time i see another ai product launch with yet another chat interface 😬
our standard policy on ai interfaces has far-reaching consequences... it doesn't merely force us to communicate in linear text, it also imposes strong limits on our handling of complex ideas - the ones that exist in multiple dimensions. a complex query can't exist as a multi-dimensional exploration... it can't branch into parallel investigations, evolve dynamically, or be manipulated spatially... it has no form, so it must be squeezed into sequential messages that follow chat conventions.
Gelernter then goes on to describe (at a very high level) the organizational model that we should be using on computers:
36. File cabinets and human minds are information-storage systems. We could model computerized information-storage on the mind instead of the file cabinet if we wanted to.
37. Elements stored in a mind do not have names and are not organized into folders; are retrieved not by name or folder but by contents. (Hear a voice, think of a face: you've retrieved a memory that contains the voice as one component.) You can see everything in your memory from the standpoint of past, present and future. Using a file cabinet, you classify information when you put it in; minds classify information when it is taken out. (Yesterday afternoon at four you stood with Natasha on Fifth Avenue in the rain --- as you might recall when you are thinking about "Fifth Avenue," "rain," "Natasha" or many other things. But you attached no such labels to the memory when you acquired it. The classification happened retrospectively.)
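point 37 reads like a description of embedding-based retrieval: nothing gets a name or a folder on the way in, classification happens on the way out, from whatever cue you start with ("rain", "Natasha", "Fifth Avenue"). a minimal sketch of that shape - embed() below is a stand-in for whatever embedding model you like, not a real api:

```typescript
// minimal sketch: retrieval by content instead of by name
// embed() is a hypothetical stand-in for any text-embedding model
declare function embed(text: string): Promise<number[]>;

type Memory = { text: string; vector: number[] };
const memories: Memory[] = [];

// storing: no name, no folder - just the raw experience and its vector
async function remember(text: string): Promise<void> {
  memories.push({ text, vector: await embed(text) });
}

// recalling: classification happens now, driven by whatever cue you start from
async function recall(cue: string, k = 3): Promise<string[]> {
  const q = await embed(cue);
  return memories
    .map(m => ({ text: m.text, score: cosine(q, m.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(m => m.text);
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  return dot / (Math.hypot(...a) * Math.hypot(...b));
}
```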
what if we let ai be ai?
but here's where it gets interesting. the other day i was thinking about how real creativity happens - not the clean, linear kind we pretend exists in chat interfaces, but the messy human kind. last month, i wrote about randomness as context... sharing how my best ideas come from weird connections, like seeing someone play bubble shooter on the bus and suddenly knowing how to design an app interface.
ai has all this knowledge, but does it have those random "oh shit" moments? can it make those weird leaps that come from living a messy human life?
modeling interfaces after conversation is rarely the best approach. our best tools don't mimic human limitations - calculators don't count on fingers, and airplanes don't flap. language itself is a tool, but we don't limit all human expression to words. if ai interfaces were truly modeled on how we think and create, we'd be manipulating objects in space, sketching ideas, and having the ai respond across multiple dimensions simultaneously.
to me, forcing ai into chat windows is just as much of a limitation as forcing computers into file cabinets. let's build interfaces for how our minds actually work, not just how we talk. the best tools amplify our natural abilities. my brain doesn't think in formatted text, but it excels at pattern recognition, spatial reasoning, and associative thinking.
imagine these interaction patterns:
[parallel streams] (deep research does it now)
||||||||||||||||
||||||||||||||||
||||||||||||||||
────────────────
multiple processes
flowing together
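this one is already easy to sketch at the api level, even if the ui hasn't caught up: fire several lines of investigation at once, then flow them back together. ask() here is a hypothetical wrapper around whatever model you're calling, not a real api:

```typescript
// hypothetical ask(): one prompt in, one model response out
declare function ask(prompt: string): Promise<string>;

// run several investigations of the same question at once...
async function explore(question: string, angles: string[]): Promise<string> {
  const streams = await Promise.all(
    angles.map(angle => ask(`${question}\n\nexplore this only from the angle of: ${angle}`))
  );
  // ...then flow the streams back together into one synthesis
  return ask(`synthesize these parallel explorations:\n\n${streams.join('\n---\n')}`);
}

// e.g. explore('how should ai interfaces work?', ['spatial', 'gestural', 'associative'])
```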
┌──────────────────────────┐
│ spatial canvas           │
│                          │
│   ┌─────┐       ┌─────┐  │
│   │idea │ ←───→ │ ai  │  │
│   └──┬──┘       └──┬──┘  │
│      │             │     │
│      └─ synthesis ─┘     │
└──────────────────────────┘
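data-wise, a spatial canvas doesn't need much: nodes with positions, edges with meaning. the point is that placement and connection carry information the way word order does in a chat. a rough shape for it - all the names here are made up for illustration, not from any real product:

```typescript
// a node is anything you can place: a human idea, an ai response, a synthesis
interface CanvasNode {
  id: string;
  kind: 'idea' | 'ai' | 'synthesis';
  content: string;
  x: number; // position is part of the meaning, not decoration
  y: number;
}

// edges say how things relate, not just that one came after another
interface CanvasEdge {
  from: string;
  to: string;
  relation: 'responds-to' | 'synthesizes' | 'associates';
}

interface Canvas {
  nodes: CanvasNode[];
  edges: CanvasEdge[];
}

// the little diagram above, as data
const canvas: Canvas = {
  nodes: [
    { id: 'idea-1', kind: 'idea', content: 'messy human hunch', x: 100, y: 100 },
    { id: 'ai-1', kind: 'ai', content: 'model response', x: 400, y: 100 },
    { id: 'syn-1', kind: 'synthesis', content: '', x: 250, y: 260 },
  ],
  edges: [
    { from: 'idea-1', to: 'ai-1', relation: 'responds-to' },
    { from: 'idea-1', to: 'syn-1', relation: 'synthesizes' },
    { from: 'ai-1', to: 'syn-1', relation: 'synthesizes' },
  ],
};
```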
[thought constellation]
      *
     / \
    /   \
   *-----*
  / \   / \
 *   * *   *
branching ideas
that connect
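a constellation is just a thought that keeps branching - a tree, plus cross-links, instead of a single thread. a small sketch of the branching part, again with a hypothetical ask():

```typescript
// hypothetical ask(): one prompt in, one model response out
declare function ask(prompt: string): Promise<string>;

interface Thought {
  content: string;
  children: Thought[];
}

// branch an idea a few ways, then branch each branch again
async function constellate(seed: string, depth = 2, fanout = 2): Promise<Thought> {
  if (depth === 0) return { content: seed, children: [] };
  const branches = await Promise.all(
    Array.from({ length: fanout }, (_, i) =>
      ask(`branch ${i + 1}: take this idea somewhere the other branches wouldn't:\n${seed}`)
    )
  );
  const children = await Promise.all(
    branches.map(branch => constellate(branch, depth - 1, fanout))
  );
  return { content: seed, children };
}
// the "that connect" part - cross-links between branches - would be edges layered on top of this tree
```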
┌──────────────────────────┐
│ gesture-based            │
│                          │
│   select      connect    │
│   split       group      │
│   expand      compress   │
└──────────────────────────┘
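the gestures, in turn, are just verbs over that same node-and-edge graph - each one a small transformation instead of another message. the gesture names and shapes below are illustrative, not any real gesture api:

```typescript
// minimal graph shape (same idea as the canvas sketch above)
interface Graph {
  nodes: { id: string; content: string }[];
  edges: { from: string; to: string }[];
}

type Gesture =
  | { kind: 'select'; id: string }
  | { kind: 'connect'; from: string; to: string }
  | { kind: 'split'; id: string }
  | { kind: 'group'; ids: string[] }
  | { kind: 'expand'; id: string }
  | { kind: 'compress'; ids: string[] };

// every gesture is a pure graph transformation
function apply(graph: Graph, gesture: Gesture): Graph {
  switch (gesture.kind) {
    case 'connect':
      return { ...graph, edges: [...graph.edges, { from: gesture.from, to: gesture.to }] };
    case 'group': {
      const groupId = `group-${Date.now()}`;
      return {
        nodes: [...graph.nodes, { id: groupId, content: 'group' }],
        edges: [...graph.edges, ...gesture.ids.map(id => ({ from: groupId, to: id }))],
      };
    }
    default:
      // select, split, expand, compress: each would be another small transformation
      return graph;
  }
}
```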
[dimensional layers]
┌─────────────┐
│   surface   │
├─────────────┤
│▒  deeper   ▒│
├─────────────┤
│▓▓  core   ▓▓│
│▓▓ concept ▓▓│
├─────────────┤
│█████████████│
└─────────────┘
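and dimensional layers might be the simplest of the lot: the same idea held at several depths at once, instead of one flat reply you have to dig through. a sketch, with the same hypothetical ask() as before:

```typescript
// hypothetical ask(): one prompt in, one model response out
declare function ask(prompt: string): Promise<string>;

const depths = ['surface', 'deeper', 'core concept'] as const;
type Depth = (typeof depths)[number];

// one idea, rendered at every depth in parallel
async function layered(idea: string): Promise<Record<Depth, string>> {
  const views = await Promise.all(
    depths.map(depth => ask(`explain "${idea}" at this depth only: ${depth}`))
  );
  return Object.fromEntries(
    depths.map((depth, i) => [depth, views[i]] as const)
  ) as Record<Depth, string>;
}
```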
maybe the answer isn't to model ai ui on conversations OR thoughts. maybe it's about creating something entirely new... interfaces that let ai be ai, that acknowledge its ability to hold thousands of threads simultaneously, to see patterns we can't, to work in dimensions we struggle to visualize.
let's build interfaces for how ai actually works, not how we pretend it should work with fancier sparkles ✨