Monday, November 25, 2019. I have to admit that it has been difficult getting back into writing, but here I am, making the words bleed out of my forehead. Or fingertips. Either way, I forget the metaphor. And on the third way, I don’t suppose Macbook Pro keyboards do well with fingertip word bleeding.
ANYWAY you probably didn’t want to read that.
On with the show:
I’ll do a few things first, and then we’ll get on to the next part of Snow Crash. That way anyone who’s just not into Snow Crash can just skip past it and honestly, I don’t blame you.
ARGs are still around!
It is 2019 and people are still making alternate reality games (ARGs), which is a thing that I have some experience of. Here’s one that Andy Baio wrote up for Harry Styles’ new album. My surprise (“still making”) is more along the lines that when we went through this in the early 2000s, there was a bunch of Crazy Internet Marketing Money and it was a bit HitchHiker’s Guide (When Crazy Internet Marketing Budgets were Real Crazy Internet Marketing Budgets), and then people kind of looked at ARGs a bit more and realized: well, this appears to be a lot of effort and money aimed at a very small number of people and maybe there are other things we could be doing with that money instead? This is not to say that I am against ARGs, just that the case for them as a marketing campaign always seemed a bit suspect after the first few. But hey, the people who they’re made for always seem to like them!
So there’s this thing called My Storytime which lets a parent/caregiver record a story for someone and then play it back anywhere “with the Google Assistant” which is to say that it is a Voice Experiment by Instrument (a digital agency up the road from my house), Made with some friends from Google (an advertising company that happens to have a sideline in technology). Now I thoroughly acknowledge that I am in a cynical mood tonight, so I will start off with some things that I like about My Storytime. First, the logotype is super cute! The cursive curve that the S in storytime begins with has a little bit that makes it look like the icon for a microphone! And the tail-end of the cursive e at the end of the word has those concentric curved lines that are the icon for sound! Or, if you don’t pay much attention, a bit like the Spotify logo.
Anyway. You can use a Google thing to record some stories and then someone else can play them back by using the magic phrase “Hey Google, talk to My Storytime” although it specifically says Google Nest, which sets off my “was the brief come up with something nice and cosy for families that shows off how inoffensive the Google Nest is” Peter-Tingle and… there’s nothing wrong with that? Technology, as they say, is neutral, and look, why not use the smart speaker with the microphone to tell stories with your kids. People have got to eat.
There is a video, too.
I mean, it’s pretty clear that I am being cynical about this. It is not a new idea, but that should not be a knock against it. It might work pretty well because there’s a critical mass of voice assistants that just didn’t exist before when other people might have come up with this idea. I think it would be overly cynical, even for me, to ask What Problem Is This Solving because not everything has to solve a problem, and even if the underlying problem is People Are Too Busy To Spend Time With Children Under Capitalism, and maybe the answer Isn’t To Use A Device Which Is Specifically The Spawn Of Surveillance Capitalism, can’t something just be nice for once? Can’t it just be okay to use a Google Nest or whatever to record you reading something and then have someone else play it back? Can’t it just be OK to subtly encourage children to have a Google Nest in their bedroom or whatever?
OK I lied, I am being very, very cynical. The people at Instrument are very nice and this is not personal.
A quote you could use to show how much of a thought leader you are
I can’t remember exactly where I discovered the phrase, but I came across something that was apocryphally attributed to “British R&D during World War 2”, which is that they were proud of being “second best today instead of perfection tomorrow”. Which is another way of saying “done is better than perfect” if you need to motivate some people about shipping a skateboard or whatever. Anyway, I really like it, and the closest reference I could find was [cw: Quora] this Quora thread about the British and guns or something.
For some reason, I was thinking about why autocorrect is sometimes so maddening and the theory I came up with has to do with register and dialect. First, autocorrect is, I think, subtly different from predictive text. Predictive text is the Markov-chain-type model on iOS that tries to figure out what your next word is going to be and the source of a bunch of memes where you’re supposed to go “Here’s Some Words Then Let Predictive Text Choose The Rest” or pick them from that little bar above your keyboard.
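To make that concrete, here’s a minimal sketch of the Markov-chain idea (the corpus and function names are mine, and this is nothing like what Apple actually ships): a bigram model that just counts which word follows which, and suggests the most frequent follower.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Suggest the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

counts = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(counts, "the"))  # "cat" follows "the" twice, "mat" once
```

The meme version is just running `predict_next` in a loop on its own output, which is why those threads degenerate into loops so quickly.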
Autocorrect (at least, for the purposes of this bit) is the thing that changes what you typed for a specific word because it is “helping”. At its most benign, it is the thing that capitalizes the beginning of sentences for you, or the thing that changes the characters “yuo” into “you” when you type them on your phone.
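A toy version of that kind of correction, assuming nothing about how any real keyboard does it, is just “find the closest word in a known dictionary by edit distance” (the tiny `DICTIONARY` here is obviously made up for illustration):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance, single-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

DICTIONARY = {"you", "your", "hello", "world"}

def autocorrect(word):
    """Return the word itself if known, else the nearest dictionary entry."""
    if word in DICTIONARY:
        return word
    return min(DICTIONARY, key=lambda w: edit_distance(word, w))

print(autocorrect("yuo"))  # "yuo" -> "you": two edits away, closest entry
```

Note there’s no notion of register anywhere in there, which is rather the point of the next paragraph.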
I think autocorrect doesn’t have a model for dialect or register. I mean: I code switch between more formal language (where I use capital letters, punctuation, spacing and so on) and more informal language (lowercase, more internet neologisms, less “correct” punctuation) depending on what I’m trying to achieve with that particular piece of text.
And I think the reason why autocorrect doesn’t have a model of this (ie: it doesn’t pay attention to the register in which I’m typing) is because historically, it hasn’t had to.
If I’m right, autocorrect has its roots in Microsoft Word 97 or so (unless, um, Microsoft “borrowed” it from somewhere before bringing it to the rest of the world), where its introduction was at the same time as the squiggly red line of realtime spellcheck.
In other words, autocorrect comes from Word Processing, which is Proper Typing and Proper Talking for Business. You know, like when you go to work in your Microsoft Office. Autocorrect is not something that, say, would’ve gotten along well with Iain Banks’ Feersum Endjinn or, I dunno, e e cummings.
I mean, back in the day, keyboards were all about the serious business business business numbers numbers text. They were expensive! Who would use a keyboard for such frivolous nonsense as telling hundreds or thousands of people what sort of behavior your therapist would be upset about you performing?
But now there’s a keyboard in every pocket, never mind on every desk. And gosh have you seen some of the absolute filth that’s being tapped out on those keyboards? Pretty much every other word is duck, or something just as obscene!
Which is to say that my hunch is that the percentage of text generated by keyboards that’s speech as opposed to text has flipped, and much of what is typed is now more like vocal utterances and speech and not Things That Are Written.
That’s just a long way of saying: autocorrect wasn’t built for shitposting on Twitter, and that’s why I can never decide whether I should have it turned on (for my Serious Talk) or off (for when I am goofing off).
Last time, we were over half-way through chapter 13, having learned that the Librarian was a piece of complicated software hacked together by another librarian, Dr. Emmanuel Lagos.
Hiro dives in and asks for “every piece of free information in the Library that contains L. Bob Rife” in chronological order and is very emphatic about only including free information. I suppose that’s a reminder that we’re living in a full-on Information and Knowledge Economy, and that Hiro has No Money To His Name.
Because this is 1992, the kind of information that is in libraries and free happens to be “television and newspapers” which, ha! In 2019, television is not free unless it’s been ripped and stuck on YouTube or wherever, and newspapers, if they even exist anymore, are increasingly behind paywalls too. We should remember though that these are information sources in The Library, an evolution of the Library of Congress, so maybe we should cut Stephenson some slack here? In 2019 there’d be a surfeit of information about Rife, and there’s a whole discipline of open source intelligence that covers what you can find out about something with the emphasis on publicly available information (and sometimes, not entirely free information, either. Just the kind of stuff that you can get reasonable access to without being a nation state).
While the Librarian’s busy doing that, Hiro takes a look at Earth and just the resolution and clarity of it tells Hiro “that this piece of software is some heavy shit”. In 1992, this type of information fusion would indeed be some heavy shit: we’ve got not just the continents and oceans, but weather systems complete with shadows and, in perhaps one of the only minor references to potential climate change, the “polar ice caps, fading and fragmenting into the sea”. There’s even an impressed bit about the terminator sweeping in across the Pacific. I don’t know if it’s the fact that it’s a realtime view of a terminator that’s impressive or that the terminator is even displayed. Any sufficiently privileged person who’s travelled internationally with a seatback display has probably seen the stereotypical world map with light/dark overlaid upon it.
Just like our Google Earth and other zoomable user interfaces, and in another reference to current attempts at VR head mounted displays, Hiro’s goggles notice his eyes attempting to focus further away, so his point of view zooms in to the thing he saw moving across the globe - it turns out to be a low-flying CIC satellite. This is one divergence from what Google’s Earth has delivered to us - as far as I know, our Earth doesn’t include any realtime data sources. Even Maps (sigh, I hate these nouns) when it purports to show realtime traffic doesn’t actually show realtime traffic. But Snow Crash’s Earth shows either a realtime merged satellite view that also shows something like a Planet Labs satellite, or a computed view.
Meanwhile, the Librarian has completed his search, presenting it as a hypercard. Again, the idea of a search engine isn’t really here, even though there are Daemons. In fact, the last time we heard about Daemons they were even presented as embodied software entities - the not-ninjas that creep out after a sword fight and drag the body away to be burnt in the eternal fires under The Black Sun. The Librarian is not a tool, not a background search process. It’s an interactive human interface, one that can be configured to be a bit louder (much like TARS or CASE can have their humor settings fine-tuned).
The hypercard is a reminder that this is a future where the web doesn’t exist and we don’t live in a REST-y document-centered information universe. The only real connection we have to cards right now, I think, are in design systems like Google’s Material Design or Apple’s latest HIG, where cards are a visual metaphor, but not necessarily one for information architecture.
In this way, Hiro’s hypercard results aren’t a set of pages to be… paged through, like Google’s results. They’re still discrete results (“snapshots of the front pages of newspapers” and “colorful, glowing rectangles [of] miniature television screens showing live video”), just presented in a different way, in a different container. There’s a cute phrase here and I’d have to check the etymology, but Stephenson describes the search results as “fingernail-sized icons” which I had actually typed as thumbnails before realizing what the author had done.
The description of the hypercard (not necessarily a hypercard stack though - we just see the one card at the moment) is something that feels like it’s straight out of Douglas Adams’ Hyperland, which I wrote about a good five years ago. Just the idea of motion video on something that looks like a card feels like peak early 90s ZOMG MULTIHYPERMEDIA.
In any event, Hiro is Freaked Out because the hypercard implies the presence of a shit-ton of full-motion video (honestly though, it’s only NTSC resolution, so it’s not like it’s that big), and because he’s “jacked in over a cellular link,” the Librarian “couldn’t have moved that much video into my system that fast.”
Also! The model here is that the video is in the hypercard. There are still no network-addressable resources. The hypercard is self-contained, a blob or package of text and binary information. Hiro is behaving as if there’s no way the hypercard contains references to video accessible in the Library. So the only possible explanation is that Juanita already collected all the video, anticipated the search Hiro would perform, and included it in the Babel/Infocalypse stack in the first place. Which she did. And which is the end, finally, of chapter 13.
OK, that’s it for this episode! Not an obscene number of words, and definitely under 2,500, make no mistake!
How are you? I am okay, which is to say there is a lot going on.
I hope you’re well, at any rate, and if you feel like it, drop me a note and say hi.