Episode Two Hundred and One: The System Of The World; Hard Takeoff
0.0 Sitrep
8:14pm on Tuesday, March 3, 2015. The new process/practice/habit/general writing thing is in the evening: after family dinner with the toddler, after we've put him to bed, after I've cleaned up the dining room and the kitchen and put everything away. And then, some time to write.
I said before that I didn't want to write in the evenings - it took time away from my family - but it turns out that this isn't so bad, and is, like most things in life, a compromise. It's also a little bit time-constrained: it means that in practice, I only really have around forty-five minutes to an hour to write, which also isn't that bad. I get to splurge stuff out, but am kind-of prevented from just writing and writing and writing.
1.0 The System Of The World
Ha, fake-out. This is not a bit about the Neal Stephenson book that I never started, never mind got through: whilst I enjoyed Cryptonomicon, I never could get into the whole Quicksilver thing and instead did what any other reasonable person would do and just bought a proper doorstop instead.
ANYWAY, via Margaret Robertson[1], news of the game Amnesia[2] (no, not the one with pigs), which a) was a text adventure; b) came out in 1986; and c) "[simulated] every block and street corner south of 110th Street [of Manhattan]".
A few things here. Thing the first: text interfaces are back, but they're not command lines, they're conversational interfaces. I spent lunch today catching up with Matt Haughey and besides geeking out over his futuristic robot electric car, we talked lots about, well, writing and talking to things through text and what that feels like. So there's what feels like a massive body of knowledge in terms of writing not just parsers, but *interactive text*, for interactive systems, that might just be on its way back. There was a while when we thought Google was the command line for the internet, and it still kind-of is, but apps are turning out to be a thing, because a command line is pretty good when you've got a keyboard, and sometimes you don't have a keyboard. You have a tiny thing in your hand. Or a phablet. But you know what I mean.
The second thing is this: the people who went through 1986 or at least have a passing remembrance of it are, I hope, going to have a bit of an edge because they'll have seen what worked and didn't work back then. I like this theory because I am someone who was alive in 1986 and this is a thing that is different about me than all the young people who are alive these days, who weren't alive in 1986 and this is still something that my therapist and I are working on finding new and interesting ways for me to accept.
The third thing is: we clearly had a bunch of ideas back in the 1980s (and before!) that were hard to realise to the degree that we wanted with the technology we had available. One easy reference point for this is stuff like Elite, whose Kickstarter remake, Elite: Dangerous, is more of a "the same fundamental idea and game, but gussied up with better production values" affair, rendered in something a little bit more - well, not *realistic*, but *high-fidelity* - what with even our mobile phones having more graphics-FLOPs than we know what to do with when they're not rendering things that look like paper or casting shadows on other things that are not real. One of the reasons why I pick a thing like Elite is that it, like Amnesia, attempted to be a high-fidelity, detailed simulation on, from our current-day perspective, pretty crippled hardware. I read something like the Wikipedia writeup for Amnesia and think to myself: what would this game look like now, if we tried to remake it, but the focus wasn't on graphics and the presentation layer, but on the fidelity of the simulation? The text interface may well look like something that doesn't scale, but nineteen billion dollars and however many DAUs and MAUs for WhatsApp and its assorted brethren put paid to that notion, I think. Text *is* an interface, and it's still something that's usable by lots of people, no matter how fucked up our education systems all over the world may be, and no matter how much gifs are coming to *accompany*, and not supplant, our main methods of communication.
The fourth thing is what I think is the big one, and one that I kind-of alluded to in yesterday's episode about textual interfaces and all the copy that's going to need to be generated. There's the Strong-AI version of doing this, which is: somehow, build a system that can construct the text that it needs itself, text that makes sense and can be used by humans as an interface. In other words, make a thing that is smart enough to express itself in words and that can be conversed with. The much more opportunistic and doable version of this in the short-to-medium term is: we already have things that are really good at parsing text, and that are also really good (for certain values of "really good") at writing text: humans. So what does it look like when the architecture of an application's interface is reams and reams of human-written text - text that, in the same way that most of our utterances are unique and never-before-said in the entire history of the universe just because of the combinatorial way our grammars work, can be recombined pretty much endlessly? What does it look like when we start constructing applications that we converse with - however we converse with them, whether by free text entry or smart, conversational-ish interfaces, like the kind you might use on Lark these days?
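To make that concrete, here's a tiny sketch of the opportunistic version. None of this is what Lark (or anyone else) actually does - the intents, the copy and the keyword matching are all made up for illustration - but the shape is the point: humans write the reams of copy up front, and a very dumb parser decides which bit of it to hand back.

    import random

    # Humans write the reams of text up front; the program just chooses between them.
    COPY = {
        "greeting": ["Hello! What shall we do today?", "Hi there - where were we?"],
        "log_meal": ["Got it. How did that meal leave you feeling?",
                     "Noted. Was that more or less than you'd planned?"],
        "fallback": ["I didn't quite follow that - could you put it another way?"],
    }

    # A deliberately crude "parser": a bag of keywords per intent.
    INTENT_KEYWORDS = {
        "greeting": {"hello", "hi", "hey"},
        "log_meal": {"ate", "eat", "lunch", "dinner", "breakfast"},
    }

    def respond(utterance):
        words = set(utterance.lower().split())
        for intent, keywords in INTENT_KEYWORDS.items():
            if words & keywords:
                return random.choice(COPY[intent])
        return random.choice(COPY["fallback"])

    print(respond("hello"))
    print(respond("I ate a sandwich for lunch"))

The interesting work is all on the writing side: the more (and the better) copy a human writes, the more the thing feels like something you can actually talk to.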
[1] https://twitter.com/ranarama/status/572841491552448512
[2] Amnesia (video game)
2.0 Hard Takeoff
I've been doing more reading around the state of the art in artificial cognition (note: not intelligence, cognition, and not "computing" either). One of the best introductions that I read recently came to me via Greg Borenstein: a condensed and edited interview with Yann LeCun, Facebook's new-ish director of AI, in IEEE Spectrum[1].
It's good because it's a readable explanation of what convolutional neural networks are, when the idea came into being, and why we're finally able to do useful things with them. It's also the thing that made me wonder if one of the reasons why the US still uses checks/cheques is because they built a stupendous neural network that *reads cheques* and processes them, whereas the Brits instead built a stupendous interbank electronic clearing system for about thirty million people. Anyway, the cheque bit is a side issue. One of the interesting points that LeCun makes about current work with neural networks is that because they're inspired by the human visual cortex, the kinds of things we've gotten them to do so far are mainly about recognising things, and not necessarily *predicting* things. The easy way to do this, LeCun points out, is to show the neural net a frame of video and, rather than training it on whether it has seen a cat or a dog, ask it what it "thinks" the scene will look like in one second's time, and then compare that with the still frame from the video of, well, what actually happened a second later. I think what LeCun is saying is that this is an approach that may well help with unsupervised learning: grab a bunch of video and see what a convolutional neural net can learn about the properties of the world without us having to tell it things like "hey, did you know gravity exists?"
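For what it's worth, here's roughly what that training loop looks like as code - a toy sketch of my own, in PyTorch, with a deliberately tiny made-up network, and not anything from LeCun's lab. The only "label" for each frame is the frame that actually came next:

    import torch
    import torch.nn as nn

    # A small convolutional net that takes one RGB frame and guesses the next one.
    class NextFramePredictor(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, kernel_size=3, padding=1),  # back out to an RGB frame
            )

        def forward(self, frame):
            return self.net(frame)

    model = NextFramePredictor()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in for real footage: a tensor of (num_frames, channels, height, width).
    video = torch.rand(16, 3, 64, 64)

    for t in range(video.shape[0] - 1):
        current_frame = video[t].unsqueeze(0)      # what the net is shown
        actual_future = video[t + 1].unsqueeze(0)  # what really happened next
        predicted_future = model(current_frame)
        loss = loss_fn(predicted_future, actual_future)  # how wrong was the guess?
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

Nobody labels anything: the future of the video is the supervision, which is why this kind of prediction is such an appealing route into unsupervised learning.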
This approach is, in a way, similar to what's described in Artificial Intelligence Goes To The Arcade[2] by Nicola Twilley in the New Yorker. The main rival to LeCun's group at NYU and Facebook is the DeepMind group, formerly independent and now a part of Google. DeepMind is in the news because they have, well, "built" a neural net that has learned how to play 1980s videogames really well. Demis Hassabis' goal with DeepMind is to get to something that can play late-90s videogames - you know, StarCraft and its ilk - and not just play them in the way that our "AI" opponents play them when we don't have human players, but learn how to play them, infer the rules, and essentially discover and make up their own strategies.
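For the curious, the loop underneath the Atari work is reinforcement learning: look at the screen, pick an action, and the only feedback is the score. Here's a deliberately toy sketch of that loop - a made-up one-dimensional "game" and a plain lookup table, where DeepMind use a convolutional net reading raw pixels - just to show the shape of it:

    import random

    class CorridorGame:
        """Hypothetical stand-in for an Atari game: walk right to reach the goal."""
        def __init__(self, length=10):
            self.length = length
            self.position = 0

        def reset(self):
            self.position = 0
            return self.position

        def step(self, action):  # action: 0 = left, 1 = right
            move = 1 if action == 1 else -1
            self.position = max(0, min(self.length - 1, self.position + move))
            done = self.position == self.length - 1
            reward = 1.0 if done else 0.0  # the "score" is the only feedback
            return self.position, reward, done

    game = CorridorGame()
    q = [[0.0, 0.0] for _ in range(game.length)]   # Q[state][action]: how good is each move?
    alpha, gamma, epsilon = 0.1, 0.9, 0.1          # learning rate, discount, exploration

    for episode in range(500):
        state = game.reset()
        done = False
        while not done:
            # Mostly take the action we currently think is best; occasionally explore.
            if random.random() < epsilon:
                action = random.randrange(2)
            else:
                action = max((0, 1), key=lambda a: q[state][a])
            next_state, reward, done = game.step(action)
            # Q-learning update: nudge the estimate toward reward + discounted best future value.
            q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
            state = next_state

    print("Learned action values at the start of the corridor:", q[0])

DeepMind's contribution, roughly, was showing that you could swap the lookup table for a deep convolutional network fed raw screen pixels and still get it to learn stable, sometimes startlingly good strategies.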
There's an argument here - one that I can kind-of agree with - that developing artificial cognition that can cope with the environment of a videogame is essentially the same problem as developing artificial cognition that can cope with the real world, only without trying to kill yourself with the very difficult version first: you start with something a bit more achievable. Videogames are abstractions that have grown, more or less, only more detailed over the last few decades. Games now have 3D engines, model physics (and sometimes altered physics that mean we humans have to learn new rules about how the world works, like: fast thing goes in portal, fast thing comes out of portal) and acoustic models, to name but a few. What better environment, short of the real world itself, in which to try to make something that can make sense of that world? The rest, as some engineers might say, is "trivial" - just a scaling problem once you've figured that bit out.
And then, to have come across these two articles and, at the same time, to discover the new novel Acadia[3] by James Erwin[4]. James Erwin is an interesting fellow because he's the one who answered a Reddit question about what might happen if a modern US Marine battalion found itself transported back to the time of the Roman Empire being ruled by Augustus Caesar[5], and ended up with a film deal.
Anyway: suffice it to say that Erwin's new novel, even though I haven't finished it yet, feels like it comfortably fits into the Greg Egan/Ted Chiang subgenre of super-interesting writing about sentient software and humans having to do things. For starters, one character's full name is "4-Charlie osNASA 0z4ooh65 dm: 3-Azimuth." Look, just take it as read that if you like hard science fiction with space and von Neumann probes and intrigue and what feels like a Neuromancer-esque tale of AIs flitting around the network, drones, drone warfare, and all that other stuff, go read this book so I can talk to you about it. You know who you are.
[1] Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter - IEEE Spectrum
[2] What Google DeepMind Means for A.I. - The New Yorker
[3] Acadia: James Erwin: 9780978501686: Amazon.com: Books
[4] Official website for James Erwin | Dad, husband, author, screenwriter
[5] Rome, Sweet Rome - Wikipedia, the free encyclopedia
--
9:38pm, because I had to go away because *someone* was too excited to go to sleep.
Best,
Dan