s15e01: Marks on Flat Planes
0.0 Context setting
It’s Thursday, 8 June 2023 in Portland, Oregon, where the high will be 78f / 25.5c today, and the rough carbon load in our atmosphere is 420ppm.
It’s been ~90 days since the last episode, so I think that calls for a new season. I’ve been a little busy - bought a house, found a bunch of things that need fixing and learned how to do things with electricity without killing or injuring myself. I have significantly more thoughts on smart homes now.
I have also been on a bunch of stupid mental health walks1.
1.0 Some things that caught my attention
1.1 Marks on Flat Planes
Yes, I suppose I have thoughts about Apple Vision Pro just like everybody goddamn else and I don’t know – I’m just tired? It is possibly because I’m old and there are many more things in my life that are a combination of more important, more pressing, or more immediate than a couple decades ago, when responsibilities were fewer.
First: the marketing communications around Vision Pro were weird and, I think, in places a little tone-deaf. It was instantly clear to a number of people that some of the choices in presentation weren’t so much dystopia-adjacent as at the very least dipping a foot in. I think I remember someone somewhere saying: Yes! Our reach has finally met the grasp of the finest satire Britain delivers at the intersection of technology and society! Most experiences with the product I’ve read are along the lines of “well, technically, that’s pretty damn good”.
I don’t know if it’s just our visual language for talking about these types of objects (which has been in development for at least the last 20 years if you start counting from, e.g. Minority Report and Strange Days), I don’t know if it was the safe choice in terms of creative direction, but these videos were awfully close to other Premium Streaming Content on Apple TV+ like that Extrapolations series, never mind the whole divorced dad on his own in a darkened room watching (obsessively) a video of his kids, I mean, come on.
Second: okay, that aside, the choices in the product design are interesting in the abstract. The pass-through! The externally visible screen! The wonderfully crisp UI that’s rendered in spatially situated detail and appears like, er, Aero-era translucent/frosted glass and casts a shadow! The new UI guidelines and language that the device will need! The, uh, application window paradigm that’s been ported over? I get that we’re in chicken-and-egg land here.
It’s interesting because this is clearly demo-ware, or at least nobody’s figured out what native software experiences would look like. It’s clearly good enough for a small prosumer product, a sort of NeXT Cube for your face right now. But in the same way that the iPad is still growing into its “what’s an iPad best at, Conan?” age, the reason I think Vision Pro looks dystopic is that it’s our crapsack office information worker interfaces (i.e. giving a goddamn powerpoint presentation) but, you know, with a thing on your face. Sure it’s bigger, but it’s not different enough? I mean, I have a Meta Quest and played with it and I get that this is a qualitatively better experience.
Lastly, and the part that I got most stuck in thinking about, is what Vision Pro can be used for right now.
As things stand, I think the reason spatial computing (née virtual reality or augmented reality, or whichever attempt at “owning the space” a corporation has put its stamp on) looks super dystopic right now for information workers/knowledge workers [sic] is that all information work is still broadly document-centric, which means windows of things that still broadly look like bits of paper.
The uses of Vision Pro right now appear to fall into these kinds of buckets:
- regular computing, but with a bigger screen/more screens, and with fewer distractions
- passive entertainment (more like a private cinema for one) and slightly-interactive entertainment (sports, probably)
- active entertainment (videogames)
- meetings / multi-user realtime presence
- specific applications/tasks that have the potential for better mapping to spatial computing than they currently do to 2D window/icon/pointer interfaces (3D modeling)
I think, broadly, that one of the reasons why spatial computing demos for “office” work outside of “having meetings with people” or “infodumping at people”3 fall flat is that the current method of working with documents, i.e. Mac apps or iOS apps in a window in a space, has simply (I acknowledge that “simply” is doing a lot of work here) been ported over for lack of something better.
Or even that doing work in documents in spatial computing might not be that great at all, or at least not that different than a flat windowed interface situated in space with your regular text input methods like keyboards.
This is more a reflection that most “work” in an information worker context is still document-centric. I don’t even know how much of a post-document-centric organization you can have right now if you’re doing this kind of information work. An alternate expression of this: office work is bureaucratic work, and bureaucratic work is still fundamentally paper-document-based.
Newer, web-first, SaaS-y type organizations might count as the first post-document organizations, but they still have to deal with documents at the edge. See how long you can last without having to fill in a PDF. (Governments! The last frontier of documents being the data!)
So if the majority of “work” for a lot of people who use computers is document-based, then we still don’t have a real vision for how you’d do that work in spatial computing other than “your windows, but bigger, and around you”. It’s interesting that the most compelling fictional spatial computing interfaces aren’t document-based – they’re Tony Stark-style engineering, or manipulating video (Tom Cruise using a clip browser in a spatial Final Cut Pro?). These are all much less like text/presentation documents.
Apple’s WWDC session on building apps for spatial computing2 reflects this: the fundamentals they’ve landed on (for now) are (1) a shared space; (2) windows; and (3) volumes.
Windows are the concession for dealing with information-worker-working as it exists right now: your document apps, in windows, in a shared space. Volumes are clearly intended as the starting point for spatially-native, 3D computing (realistically, how else?).
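As a sketch of how those fundamentals surface in code (based on the SwiftUI scene types Apple has shown for visionOS; the scene identifiers and the “Globe” asset name here are hypothetical, not from the source), a window and a volume are both scenes, differing mostly in their window style:

```swift
import SwiftUI
import RealityKit

@main
struct SpatialSketchApp: App {
    var body: some Scene {
        // A conventional 2D window: your document apps, ported over.
        WindowGroup(id: "document") {
            Text("Marks on a flat plane")
        }

        // A bounded 3D volume: the starting point for spatially-native content.
        WindowGroup(id: "model") {
            Model3D(named: "Globe") // hypothetical bundled asset
        }
        .windowStyle(.volumetric)
    }
}
```

Which is to say: the chicken-and-egg problem is visible in the API surface itself — windows are fully worked out, volumes are a container waiting for someone to figure out what goes in them.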
If not windows, then what would be the equivalent of a spatial presentation that isn’t the equivalent of a piece of paper projected onto a 2D plane? Does it end up being an Unreal/Unity volume environment you can navigate through? One you can spin around? (And why is it better if you can spin it around?) Or would it be a combination of the two, depending on what’s being presented? One could easily imagine presentations involving models or volumes where it’d be much nicer to embed the model or location. Does a presentation actually end up being more like the virtual studio setups that are used on election nights or The Weather Channel?
(Of course, the silly part here is that after the presentation someone is going to ask you to send the PDF of it.)
There’s a counterpoint here that “documents” – that is, things that are the equivalent of marks on paper – are actually really good at what we need them to do in terms of preserving and communicating information. Maybe the best thing for a memo is still a bit of paper with the memo on it, or the representation of the bit of paper.
That’s for viewing and editing. One thing that could be different, that we haven’t yet seen, are spatial interfaces for document management, but I’m not optimistic that we’ll get to see those for a couple reasons.
First: your regular “document management is search now” argument; insert the regular rant about Google Drive, everything from the terrible latency of Google Drive’s browser interface through to what I think is an early-web implementation of lists and tables. It’s curious, because you have applications like Miro and such that have no problem with being infinite 2D canvases with grouping, so, well, what if Miro were your document management interface? People have talked for a long time about wanting better ways to arrange collections of documents (or resources, or whatever) in a spatial manner: the stereotypical “I have a desk and I want stuff on my desk”, and the perennial “when are we ever going to get stacks or piles on our 2D desktops and file managers” refrain.
Second: I think “files” haven’t been in Apple’s vision of the future of computing for a long time, ever since iOS. File management is a sop and concession to how the rest of the world works: you need to work with files because, well, see the above on how information work is generally document-centric, which means file-centric. And so it took a while for Files.app to come along, whereas native iOS applications come with their own document stores and share sheet mechanisms/clipboards for moving information/data/data types from one place to another. Which, you know, isn’t bad! It’s quite nice to have another option! It’s quite nice that Photos are photos and not “a file”, and that you can have a system-wide photo picker. (The implementation could be better, but philosophically speaking, not bad!)
So if applications are their own containers, then yes, Keynote documents are in their special Keynote iCloud container, and Pages documents and so on, and even Microsoft Office I think wants to default you to saving stuff in their cloud. So whither your messy desk.
(Now that I come to think of it, WWDC has also shown how Apple might think document management, as it exists, also lives inside the app/task in which you’re doing work. Notes.app is now a pseudo PDF document manager, so in that way, why couldn’t Freeform be a document manager as well, for the documents you put in it?)
I suppose my point here is that spatial interfaces for managing data like files/documents haven’t really taken off in 2D WIMP environments because they’re just not that good or easy to use. They might be much better suited to spatial computing (I mean, duh, your desk). Think of the stories you hear about how astronauts love working in zero-g, in some cases because your entire volume becomes a working environment thanks to the lack of gravity: put something somewhere and it stays there. If I worked at a trillion-dollar company I would totally pitch a research session in zero-g to understand how people work and what works in a spatial environment like that, because hey: in a spatial computing environment, gravity isn’t a thing.
In a thread version of this, I said “I don’t think a Vision Pro device will replace computers, not for a long long long time, if ever” and I suppose I can be more specific in my opinion.
- documents, in the sense of representations of 2D planes with information embedded in them, won’t go away for a long time, if not forever, in part because that’s just part of our cognitive architecture and evolution
- the keyboard won’t go away for a long time either, I don’t think, but I’m speaking from the perspective of someone who can type super fast and super accurately
- a bunch of tasks that computers are used for that involve space will be/have the potential to move over to spatial computing
Word processing in spatial computing? I suppose in the sense that you don’t need a display and you can have lots of them, you can do it anywhere, but the deal there is you still need something like a “keyboard” until you have alternate high-speed text input, and speech doesn’t cut it yet, I don’t think.
So perhaps the easier way to think of spatial computing as it exists right now is:
- the presentation layer/visual output layer
- interesting/novel/finally good-enough input mechanisms like gaze, combined with the evolution of direct manipulation
You’ll note that I am neatly skirting every single issue to do with the other physical aspects of this kind of spatial computing, namely that it visually presents as shutting yourself off from the outside environment, even acknowledging the efforts Apple’s gone to in trying to ensure you’re still “present” and that others can be present in your environment. We don’t have the contact lens version yet, nor the lightweight glasses/spectacles version.
I was going to say “maybe this is the first time mass consumer electronics has to deal with the issue of how what you’re doing looks to other people”, but that feels incredibly hubristic and dismissive of what’s likely to be an entire body of work about that exact subject. So what I will say is that the related versions of this are: wearing headphones to shut out the outside world (note, too, the people wearing headphones to signal leave me alone, and the propensity of some absolute garbage people to take that signal as personally offensive), or having your head down in a book on public transit. Wanting to escape or immerse yourself in something is fine.
I wrote earlier that noise-cancelling headphones are a thing, and asked: what would distraction-cancelling look like? What would mess-cancelling look like? In many cases, being able to control your aural environment can be an accommodation and a choice. You might flip this around and instead ask: in a default shared physical environment like, uh, reality, there’s an expectation that we do share this reality and that your opting out might inconvenience someone. Which… you know, is an interesting thing? Why did you make it harder for me to get your attention? Starts sounding selfish when you put it that way.
Hell, we have quiet coaches in public transport (at least in some places where there’s civilized rail transport), and aren’t those an accommodation? And hey, with noise-cancelling headphones, can’t you make so many more places quiet coaches?
We build private spaces for ourselves and we try to create private spaces for ourselves. Private spaces in themselves are not bad things, escape in itself isn’t a bad thing. We create rooms with doors that we can close, teenagers get to the point where they ask for a lock on their bedroom door so they can control their space. The argument here is less, I think, the technology “a private space for yourself, wherever you are”, and more the motivation and economic environment. What’s to be gained from making it easy to retreat into private space? How easy will it be? Will we give in to our base desires and choose to spend more time without being physically present in shared space?
To paraphrase Danny Lavery, I think, an evolution from “what if phones, but too much?” to “what if being alone in a room, but too much?”, “what if wearing headphones in public, but too much?”, “what if being in your room with the door shut but in the living room, but too much?”
An argument here might be, “well, because we evolved as a social species where privacy is a technology” and as a social species we like to see what other people are doing and we’re worried about what we might choose without considered control.
In the next section, I briefly note what caught my attention about Erin Kissane’s essay, Tomorrow & tomorrow & tomorrow, and her remarks about the patterns of spaces, and now I am thinking about our expectations of spaces. When we talk about “tech and society”, we’re not even, I think, at the point where we’re able to easily navigate conversations like “what is it to desire privacy in public spaces” or even “reclaiming privacy in public spaces”, because we’re at the same time under assault from “how technology is being used to influence and support capitalism in society”.
Anyway. I suppose we will fuck around and find out.
1.2 Some smaller things that caught my attention
Make a thing designed for one thing do other things
The standard used for cable internet (DOCSIS) uses MPEG-2 transport streams to deliver IP packets4, which makes sense, because cable is… designed to deliver video, and at the time digital video (itself data) was more important than general-purpose (ha) data.
Caught my attention because: just a wonderful example of optimizing a thing for a thing, but then having to shoehorn a different thing into the original thing and a showcase for engineering ingenuity.
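To make the shoehorning concrete, here’s a minimal Python sketch of the MPEG-2 transport stream framing that the data rides in: fixed 188-byte packets, a 0x47 sync byte, and a 13-bit PID identifying the stream. (The well-known PID 0x1FFE for DOCSIS downstream data is a detail from my own memory of the spec, not from this piece.)

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47
DOCSIS_PID = 0x1FFE  # well-known PID for DOCSIS data (from memory; verify against the spec)

def parse_ts_header(packet: bytes) -> dict:
    """Parse the fixed 4-byte MPEG-2 transport stream packet header."""
    if len(packet) != TS_PACKET_SIZE:
        raise ValueError("TS packets are always 188 bytes")
    if packet[0] != SYNC_BYTE:
        raise ValueError("missing 0x47 sync byte")
    b1, b2, b3 = packet[1], packet[2], packet[3]
    return {
        "transport_error": bool(b1 & 0x80),
        "payload_unit_start": bool(b1 & 0x40),
        "pid": ((b1 & 0x1F) << 8) | b2,  # 13-bit packet identifier
        "scrambling": (b3 >> 6) & 0x03,
        "adaptation_field": (b3 >> 4) & 0x03,
        "continuity_counter": b3 & 0x0F,
    }

# Build a fake data-carrying packet: 4-byte header + zeroed payload.
header = bytes([SYNC_BYTE, 0x40 | (DOCSIS_PID >> 8), DOCSIS_PID & 0xFF, 0x10])
pkt = header + bytes(TS_PACKET_SIZE - 4)
fields = parse_ts_header(pkt)
```

The shoehorn is visible in the numbers: your variable-length IP packets get chopped up to fit a rigid 188-byte video framing, because that’s what the plant was built to carry.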
Men explaining things and feeling superior on the internet: a case study
Fast Company had a big long story on the history of snopes.com5. It finally and depressingly provides an account of the origins of Snopes, trolling, and the terrible, terrible leadership involved. Such leadership may well fit your expectations and fulfil any stereotype you may have in your head of the kind of people who were able to take and make power in the early days of the internet, how that worked out, and who was abused and lost out in the process.
Caught my attention because: this crossed my stream via Brooke Binkowski, managing editor of Truth or Fiction and former Snopes managing editor.
Caught in loops in spaces
What I got from Erin Kissane’s essay Tomorrow & tomorrow & tomorrow6 was a beautiful piece on the reinforcing patterns of behavior that arise from the spaces we live in, create, and spend time in. It crosses all of my favorites: the choices we make, the choices that are easier to make (hello, Mrs. Davis and force), reading somewhere recently and being reminded of the idea that we’re no longer letting go of the past because it’s right there, all the time. Kissane says (and I wholeheartedly agree) that we won’t technologize ourselves out of the ghost machine we’ve created for ourselves, but also says that we won’t mod[erate] our way out either, which is as much of a clear articulation as I’ve seen lately of what sort of foundations or floor we need in our spaces.
Caught my attention because: Kissane’s writing is, well, something I’m jealous of in the good way: I learn something, she makes accessible things that I don’t usually find accessible (at a high level, ‘literature’ and ‘history’), and, if it wasn’t clear already, she is a damn smart baked snack.
Shitty studio notes
I dropped into the Near Future Lab’s Future of Hollywood General Seminar a few weeks ago. Here are a couple of my notes from our session on “interesting things that might exist in the future” as, uh, provocations. Some of them also neatly and predictably fall into my “imagine the shitty thing” schtick:
- Shitty studio notes: imagine, if you will, Disney (who else?) developing a model of one of their movie franchise characters. Originally my example was Tony Stark, but that’s an insipid choice, so let’s just go one tiny little bit better and say Sherlock Holmes. Fine-tune your LLM character-based model on all your Sherlock Holmes data so that when you see your dailies of your Sherlock Holmes piece of content, you can send a shitty studio note along the lines of “yes, that was an okay performance from the actor, but it’s not in line with what our internal model predicted the character would do. Please tweak to make sure you fit the model’s prediction better. Thanks!”
- Less shittily: tax credits for productions based on datacenter location, and how long will it be until we see, for example, an NVIDIA logo next to Panavision or Dolby in production credits?
- I am also still on this artificial attention bent, which I’m trying to work out. But something along the lines of the Nielsen Artificial Audience, where everyone submits their content to the standardized simulated audience and you get to take a look at simulated box office take/streaming minutes because everyone knows more data is better, even when it’s shitty data.
Related: during the session and chat afterwards I had the belated realization (perhaps even a repeat realization, and one I expect was obvious to people at the time) that TikTok’s duets feature is the ultimate “yes, and” feature that makes improv easy. Which means I also have to reference the book Impro for Storytellers.
OK, that’s it for today. Clearly there are takes and opinions to be had.
How have you been doing? I have been away and tired.
Get started with building apps for spatial computing, Apple WWDC 2023 ↩
see: PowerPointing or Keynoting or doing a deck ↩
Inside Snopes: the rise, fall, and rebirth of an internet icon, Chantel Tattoli, 2 June 2023 ↩