Tools address human needs by amplifying human capabilities. A great tool has one side that fits the problem and another that fits the person.
We need new media for the exchange of ideas and the active construction of thoughts (the crutch-and-shoe metaphor): too much telecommunications work will eventually lead us to building crutches rather than shoes. Stop trying to remedy a perceived defect and instead focus on new functionality.
Go ahead and pick up a book. Open it up to some page. Notice how you know where you are in the book by the distribution of weight in each hand, and the thickness of the page stacks between your fingers. Turn a page, and notice how you would know if you grabbed two pages together, by how they would slip apart when you rub them against each other.
Take a moment to pick up the objects around you. Use them as you normally would, and sense their tactile response — their texture, pliability, temperature; their distribution of weight; their edges, curves, and ridges; how they respond in your hand as you use them.
Current technology is very much just Pictures Under Glass. All interactions are glassy and feel as if they have no connection to the task you are performing, almost as if everything sits under a pane of glass. List of interactions you can do with Pictures Under Glass:
When working with our hands, touch does the driving and vision helps out from the back seat. Moving our limbs and bodies is so well choreographed that it's just telerobotics for us. Why should we limit our interactions to a single finger or two? This is, quite literally, an interaction failure.
Humanity is using the dynamic medium merely to emulate and extend static representations from the era of paper.
New representations of thought — written language, numerals, mathematical notation, data graphics — have been responsible for some of the most significant leaps in the progress of civilization, by expanding humanity's collectively-thinkable territory. Why, then, have we been trapped using the paper metaphor for centuries, and the desktop metaphor for decades? Why have we essentially bottlenecked our bandwidth for interaction design?
Ted Nelson, the guy who coined the terms hypertext and hypermedia, likened the continued use of paper simulations to "tearing the wings off a 747 and driving it as a bus on a highway."
And what about screens as a whole? Is the future of computation really just sliding fingers around slabs of glass? — Jason Yuan
The desktop metaphor was originally designed in 1973 to suit a very different need in computation: the need to mirror digital content with its physical equivalent (hence folders and documents).
Computers in Friendlier Forms
Source: Omar Rizwan on Notion Blog
If you have objects on your computer, you can have holographic projected versions of them on your desk, and you can suck them into physical objects if you want to physicalize them. Or, if you have the physical object, you can turn it back into a holographic one.
On making things that are just toys:
- I kind of want to make more things that are just toys, where it's fun to interact with the thing, because I feel like that actually sets a very high bar. A game can be fun because it has a story, or cool characters, or a scoring system. But a toy has to be fun to play with purely from the interactions themselves.
On physical analogues of software:
- I think that they're all about trying to take stuff in the computer and give it some of the richness, texture, embodiment, and scale of things in the real world. Everything on the computer is kind of pristine and closed and perfect; you can't touch it, it doesn't decay, and it all fits in this 11-inch rectangle.
Modes of Human Communication
Face-to-face, ephemeral, improvised. As it stands today, most of this happens through spoken word, hand-waving, and static sketches. This makes grasping the same idea as another person extremely difficult (low-bandwidth communication).
Words are terrible at representing systems
Can we reduce the time to generate models for ideas down from hours to seconds? Is there any way we can integrate the stage into presentations (much like a play)?
A lot of this is related to creating good organizing systems
Blackboards are more flexible than a computer for presentations right now.
What’s the point of a living, dynamic speaker, if the presentation itself is completely static?
Can we create the visuals of a well-polished science YouTube video live, like writing on a blackboard?
Can we map concept space to physical space and use the stage as an outline of the presentation? Kind of like a book, where you can tell how much you've finished by the weight on each side: can the audience see what the presenter has already covered?
Can we dynamically create content that is customized/personalized for each user? Are there channels for communicating information other than just text?
Getting this right is critical for effective knowledge distillation
Is it possible to create dynamic spatial media? Virtual museums of information? Are there ways to engage with things other than the hands? Can we use storytelling as a way to ingest and interact with information?
The focus is on spatial representation of usable knowledge.
Can we create a memory palace for people to walk through to recall information and learn new information?
As it stands, writing is just manipulation of symbols. Even in static materials, the symbols are merely more literal representations. Coding, still, is manipulating indirect symbolic systems and representations.
Can we show multiple representations of dynamic behaviour? Can we transform between different representations easily?
The goal is to have manipulation of the behaviour and data itself rather than a structure or set of symbols that ‘represents’ that behaviour/data: related to the idea of turtles and the LOGO language in Mindstorms
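One way to make the contrast concrete is a rough sketch of a LOGO-style turtle (the class and method names here are my own, not from any real LOGO implementation): each command acts immediately on a concrete object whose position and trail you can inspect, rather than on a symbolic description that must be mentally decoded into behaviour.

```python
import math

class Turtle:
    """A minimal LOGO-style turtle: commands act directly on a
    concrete object whose state you can inspect at any time,
    rather than on symbols that merely describe the behaviour."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0              # degrees, 0 = east
        self.path = [(self.x, self.y)]  # the trail drawn so far

    def forward(self, distance):
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)
        self.path.append((self.x, self.y))

    def left(self, angle):
        self.heading = (self.heading + angle) % 360

# Drawing a square is four concrete acts the learner can watch
# unfold, not an equation that "represents" a square:
t = Turtle()
for _ in range(4):
    t.forward(10)
    t.left(90)
```

The point Papert makes in Mindstorms is that the turtle is "body-syntonic": the child can predict the program by imagining walking the square themselves, which is manipulation of behaviour rather than of notation.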
Why are almost all representations used in intellectual work (both final products and intermediate scratch work) two-dimensional? Mostly paper or pixels.
Can we create dynamic tactile mediums?
Playfair’s invention of data graphics was transformative because it tapped into capabilities of the human visual system which had gone unused in intellectual work. It may be similarly transformative to tap into the profound capabilities that enable a person to tie a shoelace or make a sandwich, and bring them to bear on more abstract thinking.
In the seventies, Alan Kay introduced the concept of Personal Dynamic Media that let a user “mold and channel its power to his own needs” (see: agentic computing)
Two decades later, Mark Weiser envisioned a future of ubiquitous computing, where heterogeneous devices of varying sizes and capabilities interact easily with each other and technology disappears into the background.
Substrates are software artifacts that embody content, computation and interaction, effectively blurring the distinction between documents and applications.
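As a rough sketch of the substrate idea (all names here are hypothetical, not from any real system): imagine a document made of cells, where a cell can hold either static content or a computation over the rest of the document. Reading a cell looks the same either way, so editing the document and running the application become the same act.

```python
class Cell:
    """A document element that is simultaneously content and computation."""

    def __init__(self, doc, name, formula=None, value=None):
        self.doc, self.name = doc, name
        self.formula = formula  # function of the document, or None if static
        self.value = value
        doc[name] = self

    def get(self):
        # A computed cell re-derives its value from the live document;
        # a static cell just holds one. Readers can't tell the difference.
        if self.formula is not None:
            self.value = self.formula(self.doc)
        return self.value

doc = {}
Cell(doc, "price", value=120)
Cell(doc, "tax", formula=lambda d: d["price"].get() * 0.1)
Cell(doc, "total", formula=lambda d: d["price"].get() + d["tax"].get())

doc["price"].value = 200  # edit the "document" part...
# ...and the "application" part (tax, total) tracks the change on read
```

In this toy, there is no separate program file acting on a separate data file; the artifact the user touches is both at once, which is the blurring the definition above describes.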