Episode Twenty Four: This Inexplicable Future, Where Did All The Agents Go, Brittle
1.0 This Inexplicable Future
Every so often, it's worth taking the time to stop and really look at what's happening around you. Case in point: yesterday, Mt. Gox, a major bitcoin exchange, effectively ceased operations and, by some accounts, "lost" around 700k bitcoins. Stepping back from that, we have a former collectible card game trading site - Mt. Gox is actually short for Magic: The Gathering Online eXchange - run out of Japan, facilitating the exchange of a cryptocurrency based on algorithmically difficult work distributed amongst hundreds of thousands of computers worldwide.
This is *stupendous*. A hobbyist site ended up supporting the birth of a distributed currency (the merits of which are debatable, and are being debated). It's one thing to see that a sizeable portion of the world (or at least, an influential one) would be sucked into the mechanics of a card game that involves some sort of trading mechanic. It's also one thing to see that, in theory, a libertarian-esque anonymous cryptocurrency would find enough favour and attention in today's networked populace to take off. And that such a cryptocurrency, in possession of a good following, would then find itself in need of an exchange mechanism. But, of course, in the entirely explicable (at this point, you would think we wouldn't be surprised) world that we live in, of course the intersection between those two sets results in, well, the situation that we have right now.
I wish I had something more insightful to say other than a quip along the lines of "isn't it interesting how the world keeps surprising us". There are so many factors at play here: the crossover of one interest group into another, the fact that an increasingly connected populace allows ideas to spread so quickly, and the fact that the code and thinking behind cryptocurrencies, being open, allowed a million clones to bloom. For anyone who thinks the future is going to get less weird, I'm sorry to have to say that that's just not going to happen.
2.0 Where Did All The Agents Go
There were a few things hidden in that Hyperland documentary: one, which we're all familiar with now (and which actually felt like the one less touched upon), was the promise of the Interconnectedness of All Things, a theme that Adams would visit in his Dirk Gently series of books (and, I note, dirk[1], an early project of Matt Webb, now busy running a pocket universe on the substrate of his grey matter).
The other was strong AI, and that was the one definitely skirted over. Baker exhibited a phenomenal degree of intelligence and understanding, one that, in the guise presented in the programme (a fully conversant interface, and one that could easily pass the Turing test), is clearly unattainable at the moment. Our conversant interfaces today are black boxes of functionality: word-recognition engines hooked up, at best, to general-purpose semantic knowledge graphs such as Wolfram Alpha. Siri and Google Now are not, in any way, going to fool us into thinking they are as smart as Baker.
Fred Scharmen wrote to point out that the intelligence that was vested in the agents of our purported future instead was distributed amongst millions or billions of fleshy meat brains, all Chinese-Rooming their way around, in yet another example of an EasyHard problem. It is easy, as it were, for Amazon to mine the purchasing habits of millions of shoppers and to attempt to provide meaningful and serendipitous suggestions as to other things I might want to buy.
Is this instead some mythical hand of the market? Are we ants burrowing around in the material of the internet, leaving behind patterns and traces for machines to obsess over in order to determine our meaning? Because the meaning that we're able to impart to machines, and the understanding they derive from us when we choose to speak to them explicitly, are shown only in terms of brittle speech: we must construct, still, our queries just so, from pre-ordained blocks that we have already taught the machines, in order for them to fulfil our queries properly.
And what of long-term understanding of our intent? Scharmen said that, instead of relying on artificially intelligent software to act as curator and serendipitist, people now perform that service for each other: we do not have agents, we have but ourselves. I don't think that's strictly true: as Adams says, we are the ones creating the data and the shapes that we cannot see; it's the machines that construct the trending topics and are helping to tease out meaning from the streams and feeds, based on (admittedly basic) mechanisms of feedback that we have built in. There is not, I don't think, the Amazon equivalent of "people who have read the following on the web also read", but the way we self-select the networks that we're part of provides some of that functionality.
Perhaps the exuberance of Adams manifested itself in an absolute trust of the algorithm - that the machine would be able to understand us perfectly - when all along it would be a cooperation, of sorts, some sort of symbiosis. This makes me think of the internet as discrete parts, and the possibility that it might, in some misunderstood way, act as a distributed brain. It feels like there are distinct neurological structures embedded in the internet, and that at the moment issues like corporate self-interest have imposed some sort of vicious lobotomy, preventing those structures from cross-talking. Imagine what could happen when the cat-recognising part gets input from the purchase-recognition part gets input from the face-recognition part. What's interesting about the way we're spreading code on the net is that this intelligence is also getting distributed: as one part learns how to recognise faces, because the substrate that the net runs on is so general-purpose, it's near enough trivial for any other part to suddenly include that dedicated structure.
At the same time, there's this weird feedback mechanism where the instrumentation that the internet has (and I realise that I've digressed way off base from the talk of agents) is dealing with the very bottom of Maslow's hierarchy of needs and is generating, again through some sort of invisible hand, the type of content it thinks we want - hence all the Five Amazing Secret Tips To Get Rid Of Belly Fat. These algorithmic utterings that are farmed out to writers and then produced - are they not some sort of Chinese Room in and of themselves? Would you know if there were some sort of real intelligence behind their production, or are they merely following a bunch of (ill-defined and not particularly clearly articulated) rules?
All this makes me feel that a singular agent is increasingly unlikely, barring an accidental Giant Leap Forward in the creation of a personality-focussed general intelligence. The internet itself, taken as a whole, with its weather-recognition bits and fault-tolerant bits and so on, is a much bigger thing - and why would it need a human face, anyway? What sort of bizarre personality would sit in front of such strange machinery?
3.0 Brittle
That said, the internet is brittle in the same way that biological life and humans are brittle. Poke a human in the right place, with the right pressure, and they'll just keel over and die. It turns out that one line of code - in this case, an ill-advised lack of braces, and yet more proof that goto should be considered harmful - can effectively compromise the equivalent of an immune system. And then, when you think about computers and the internet in terms of infrastructure: where's the equivalent of public health? We talk about anti-virus software, but we don't treat what's increasingly turning into critical infrastructure as such (indeed, invisibly, beneath the notice of the populace) - whether it's actually hooked up to critical services, turning into services that are relied upon, or suborned to produce DDoS attacks against critical services themselves.
Software is brittle, in the same way that humans are. It doesn't even self-heal. There were (obviously?) no unit tests that caught the OS X and iOS SSL failure, and once it was in the wild and deployed, end-user systems weren't able to diagnose themselves - no capacity for self-reflection that might have noticed something critical "felt wrong".
--
That's it for episode 24. I might go watch Robocop tonight. Not sure which one, though.
Best regards,
Dan