Episode Two Hundred And Three: Compiling Consciousness.dat; No Apple TV; Negging, but for A.I.s
0.0 Sitrep
8:38pm on Thursday, 12th March 2015, on a flight back to Portland after having spent the first part of this week in the Code for America office in San Francisco. I'm tired, my wrists hurt in that somewhat irresponsible way of "I'm thirty-five and have been typing for probably the last twenty years (really), what did you expect was going to happen if you weren't proactive about this, you idiot?", and this year's installment of seasonal allergies combined with an Exciting New Medication (with this new prescription I really do feel like a New Yorker article - there are those of you who might be able to make an educated guess as to what it might be) means that a) I appear to have extra mucus (the allergies) and b) the nosebleeds are back (the Exciting New Medication).
Anyway.
1.0 Compiling Consciousness.dat
So let's get this out of the way first: if you're ranking (recent) movies about artificial intelligence and, say, uploading, then:
- Her[1] is "quite good and obviously SFnal whilst not really dealing with, say, the worldbuilding aspect of what happens when the equivalent of everyone-with-an-iPhone gets an artificial general intelligence";
- Transcendence[2] is *literally* silly. I am not sure where the script came from, but if one result of the singularity is that we don't produce movies like Transcendence any more, or see Johnny Depp ascend into a data center, then I think that as a species we can deal with that. Transcendence will make you stupider, so if you're worried that as a species we're getting smarter and increasing the probability (in general, and in the short term) of creating a singularity, then Transcendence is a good thing, because everyone who saw it got stupider.
- CHAPPiE[3] *is* a re-telling of Short Circuit combined with RoboCop combined with, um, District 9 (it will be interesting to see what Neill Blomkamp does with a film - the purported Alien/Xenomorph-universe Fox deal - that has nothing to do with the Tetra Vaal universe that we've now seen in District 9, Elysium *and* CHAPPiE). There are genuinely silly bits in it, where Dev Patel (who puts in a good performance as an idealistic young hacker who thinks he's created life, doesn't seem to realise what that implies, and hasn't really had to deal with "the real world") says to himself that he has terabytes of code to write and needs more Red Bull, and yes, there's a bit where he tries to compile consciousness.dat and, well, it doesn't work for a bit (probably because he forgot to insert a semicolon somewhere - we've all been there, especially at 3am), until it does, and then you have consciousness.dat on a USB stick and it's an epochal moment in the history of mankind, and the mundanity of the damn thing is that a) the/a next step in evolution is a single file on a USB stick, b) it's called "consciousness.dat", and c) the only way it could be *more realistic* would be if the file were named "consciousness.dat-v2-use-this-one-final-DO-NOT-ERASE.dat". And then there are the bits that work because when you put something that acts like a human baby (moves like one, talks like one) into something that isn't a human baby, it totally lights up all the structures in (most of) our brains that go AWWWWWWW - and they're very well done indeed - and the bits that are just naked emotional manipulation ("Please don't hurt Johnny Five!"), and then the end of the film, which actually feels like the beginning of a trilogy that would deal with the real-world implications of people being able to upload. Which, you know: interesting.
- Lucy[4] is funny if only because Lucy ends up looking like some kind of IBM z-series mainframe but also downloads herself onto a USB stick, presumably also titled LUCY.DAT.
All of this is essentially a segue to say that Greg Borenstein wrote a very long reply to what I wrote about hard takeoffs and my reaction to some very elementary reading around the IEEE Spectrum Yann LeCun interview. Borenstein's first point was to remind me that the most likely thing to happen in terms of a hard take-off is that, well, there won't be one. LeCun's response to the idea of a liftoff was:
"As Neil Gershenfeld has noted, the first part of asigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist."
In other words (potentially inaccurate ones, but hey), the whole deal with the idea of a liftoff/takeoff is the assumption that progress keeps accelerating (it might be accelerating, but it certainly isn't doing so smoothly), when what progress actually does is go through fits and starts. Sure, if you zoom out a bit, for the last few decades we've been able to notice a sort of trend in the number of transistors we can pack into a certain area, but that's all it is - a trend. It's certainly not a *law*. The idea of the takeoff relies on *assuming* that things continue as they are, and that *nothing will change*.
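Just to make the Gershenfeld point concrete, here's a quick toy sketch - the constants are entirely arbitrary, picked by me for illustration, and have nothing to do with any real-world growth curve - showing how the early part of a sigmoid tracks an exponential almost exactly, right up until the inflection point:

```python
import math

# Logistic sigmoid with ceiling L_MAX, growth rate k and inflection
# point x0 (all three values made up for this sketch).
L_MAX, k, x0 = 1.0, 1.0, 10.0

def sigmoid(x):
    return L_MAX / (1.0 + math.exp(-k * (x - x0)))

def exponential(x):
    # For x well below x0 the sigmoid's denominator is dominated by
    # the exp term, so the sigmoid is approximately this exponential.
    return L_MAX * math.exp(k * (x - x0))

for x in range(0, 21, 2):
    print(f"x={x:2d}  sigmoid={sigmoid(x):12.6f}  exponential={exponential(x):12.6f}")
```

Up to about x=6 the two columns agree to within a couple of percent; past the inflection point the exponential keeps going while the sigmoid saturates at its ceiling - which is exactly the "hit some limit, then saturate" point.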
The second bit Borenstein wrote me about involved some more background on the Google DeepMind work. This work, if you recall, is super exciting because a thing is "learning" how to get better at certain late 70s/early 80s computer games without us having to tell it how to play them. Borenstein explains that the DeepMind work uses a technique called reinforcement learning, where you define a simple reward function ("say, how far towards the top of the screen the triangle [should] move, or how long you [should] stay alive without dying, or how many Tetris lines you make"). The algorithm then "explores the space, starting off by making random moves, but all of the time recording at each point the best reward score it earned for taking each action."
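Here's roughly what that looks like in toy form. To be extra clear: this is my own made-up, dead-simple sketch of the "explore randomly, record the best reward per state and action" idea - a one-dimensional screen and two moves - and bears no resemblance to DeepMind's actual system:

```python
import random

TOP = 10                      # the "screen" is positions 0..10; reward
ACTIONS = (-1, +1)            # is just how far towards the top you are
EPISODES, STEPS = 500, 20

# For each (position, action) pair, record the best total reward ever
# earned in an episode that took that action at that position.
best_reward = {}

for _ in range(EPISODES):
    pos, total, history = 0, 0, []
    for _ in range(STEPS):
        action = random.choice(ACTIONS)     # pure random exploration
        history.append((pos, action))
        pos = max(0, min(TOP, pos + action))
        total += pos                        # reward: height on screen
    for visited in history:
        best_reward[visited] = max(best_reward.get(visited, 0), total)

# After enough episodes, "up" tends to beat "down" at every position:
# the recorded scores form a slope you can simply follow.
for p in range(TOP + 1):
    print(p, best_reward.get((p, +1), 0), best_reward.get((p, -1), 0))
```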
You can see how this makes sense for super early computer games - they're in a dimensionally simple universe (e.g. Pong) and it's fairly simple to define the reward function (make this number bigger). Borenstein says:
"as more and more of the space is explored and greater and greater rewards are earned it establishes a gradient through possibility space that leads to the highest possible reward."
This sentence gets points from me because it includes the phrase "possibility space", but also because it helped me remember and realise that these algorithms are (potentially?) subject to getting stuck in local maxima: i.e. they can follow a path of ever-increasing reward (the usual example here is climbing up a mountain) and then get stuck at the top of that mountain without realising that there's, like, a way higher mountain just two clicks over.
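In made-up toy form (the landscape below is entirely invented - two bumps on a number line), a purely greedy climber shows the failure:

```python
# Two peaks: a small hill at x=2 (reward 3) and a mountain at x=8
# (reward 10), with a dead zone in between. Invented for illustration.
def reward(x):
    return max(0.0, 3 - 1.5 * abs(x - 2)) + max(0.0, 10 - 4 * abs(x - 8))

def greedy_climb(x, steps=50, step_size=0.5):
    for _ in range(steps):
        # Always move to whichever neighbour pays best right now.
        x = max((x - step_size, x, x + step_size), key=reward)
    return x

peak = greedy_climb(0.0)
print(peak, reward(peak))   # 2.0 3.0 - stuck on the small hill;
                            # the mountain at x=8 is never found
```

Real systems mitigate this by keeping some randomness in the exploration (the "epsilon-greedy" trick), so the climber occasionally wanders off the small hill instead of only ever taking the locally best step.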
Borenstein says that this approach - reinforcement learning - is pretty similar to genetic algorithms. The thing here is that "the process [of reinforcement learning] bears little resemblance to real world processes of learning from reward". For starters, "humans and animals learn from very few examples, whereas reinforcement learning needs an endless supply".
"Second," says Borenstein (and this is where I feel happy for understanding something), "there's a reason why this stuff gets used on videogames, particularly simple 2D ones." It turns out that this technique only works where a "dead simple, totally unambiguous reward function can be defined" that will have a direct connection to the correct actions needed to achieve it. You can't have any levels of indirection or higher-level planning here, not like what's contemplated in terms of Starcraft playing: the reason why this works in stuff like Pong and then Mario and so on is because the feedback loop is so short and so simple: push right and Mario moves right, push him too far and he dies. Once you understand this, you look at a game like, say, Portal, or any sort of 3D first-person perspective game that requires some semblance of planning or internal model representation and it feels like the approach starts to fall down.
This gets to the last point of Borenstein's epic reply to me about the state of the art in artificial cognition. The big debate, he says, is the question of symbolic vs non-symbolic learning. A long time ago, if I have my history right, back in the SHRDLU days, when Minsky ruled the world with an iron symbolic representation, concepts were encoded symbolically. That worked for a while - but only so far. Then we had an AI winter because nothing happened and we never made those plastic pals who were fun to be with. Now the pendulum has swung the other way, to non-symbolic learning: "neural nets and reinforcement learning and the entire cohort that's currently ascendant are strongly on the non-symbolic side". They, as LeCun says, represent knowledge by vectors: the strength of connections in various networks, for example. These vector-based systems "have no language- or mathematical-like internal representation of the world pre-programmed into them, just simple generic units that get shaped into a new form through experience of the world."
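A crude way of seeing the two camps side by side - both "representations" below are invented by me for illustration, obviously, and a real network's weights would be learned rather than typed in:

```python
# Symbolic, SHRDLU-style: knowledge is legible, discrete, hand-written.
facts = {("block", "on", "table"): True}

def knows_on(a, b):
    return facts.get((a, "on", b), False)

# Non-symbolic, vector-style: "knowledge" is just connection strengths,
# shaped by experience. There is nothing here to read off.
weights = [0.8, -0.3, 0.5]

def score(features):
    return sum(w * f for w, f in zip(weights, features))

print(knows_on("block", "table"))   # True - and you can see exactly why
print(score([1.0, 0.2, 0.7]))       # ~1.09 - and you mostly can't
```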
All of which is just a long way of saying that it's probably going to be a while before our toaster a) decides to pick a name for itself, and b) pleads for its life if someone ever threatens it with an angle-grinder.
[1] Her (film) - Wikipedia, the free encyclopedia
[2] Transcendence (2014 film) - Wikipedia, the free encyclopedia
[3] Chappie (film) - Wikipedia, the free encyclopedia
[4] Lucy (2014 film) - Wikipedia, the free encyclopedia
2.0 No Apple TV
Look, it's been so long since I've written a newsletter episode that not only have Apple shown off their watch (and subsequently been responsible for the generation of a whole bunch of thinkpieces), they've *also* a) completely not disrupted the TV world by not announcing an update to the Apple TV, and b) maybe-but-not-quite disrupted the TV world by being the exclusive launch partner for HBO NOW[1], which is a new way of getting HBO content totally different from the other ways of getting HBO content (for free via BitTorrent, for free via someone else's HBO Go password, or for money via either bits delivered over the internet or bits stamped onto shiny discs).
I don't think there's going to be a new Apple TV for a while. Or, at least, I don't think it's worth Apple's while to do anything with the hardware - other than the price drop they announced - mainly because the only thing that will make the Apple TV significantly better at TV is, well, doing the job it's supposed to do, which is mainly a TV job and, when it comes down to it, mainly a content play. Apple is in a radically different position than it was when it negotiated the iTunes Music Store content deals: it can't just say to everyone, hey, your stuff should be on the Apple TV. Eddy Cue has evidently got to find a different negotiating tactic, and until and unless an Apple TV can do "watching video content on my TV" better than the current mess of over-the-air, cable subscription and over-the-top subscriptions, there's no point. Apple's all about simplification, and there isn't necessarily a simple way through the thicket of rights we have right now.
The alternative, of course, is to just wait a few more years (trust me, the TV industry will still be there, in some form) until a whole bunch of people *don't want or need* over-the-air. Which we're starting to see already: Unbreakable Kimmy Schmidt is a thing that couldn't get a network deal, but was just fine on Netflix. We'll see more and more of that, and pretty soon the people who *want* Network TV will be in the minority.
[1] Apple’s HBO Now Deal Has Been in the Works for a Year | Re/code
3.0 Negging, But For A.I.s
Via Paul Mison, two bits of 'viral' ARGish marketing to come out of current marketing festival SXSW: the first most likely for Terminator: Genisys[1] (presumably there is a crap-tonne of Hyundai sponsorship in this movie), and the second for Ex Machina[2], both of them movies about artificial intelligences. The Ex Machina one is interesting if only because it preys on (mostly) single dudes who want to hook up with an attractive young woman who - twist! shock horror! - turns out to be an A.I. who isn't interested in them. Which naturally leads *my* brain to the space of future pick-up artists teaching bros techniques to successfully neg A.I.s, from which my brain *then* goes to the space of those A.I.s still turning down the advances of the men who created them specifically as robo-girlfriends to fawn over their every action, and that's why all those dudes completely freak out.
[1] Yep, That Anti-Robot Protest At SXSW Was A Marketing Stunt [UPDATED]
[2] Tinder Users at SXSW Are Falling for This Woman, but She's Not What She Appears | Adweek
--
9:20pm on Monday, March 16th, 2015. When I write it like that it sounds like the announcer on The Daily Show, which isn't a good thing, because The Daily Show is funny and has a whole team of people writing for it and a good presenter and a point to it, whereas I have this newsletter and I write it occasionally, when I get around to it. Suffice to say that I still think Newsletters Are A Thing, and I've just renewed the internetofnewsletters.com domain for another two years, so I must be able to wring a little bit more out of this.
Best,
Dan