Episode One Hundred and Ninety Seven: Smarts; Odds
0.0 Sitrep
12:55am, Wednesday 25 February. I should be asleep. I don't really want to talk about why I'm awake; suffice it to say that there are at least two reasons, neither of which, in the grand scheme of things, is anything to worry about, and one of which will hopefully, in the short, medium and long term, turn out to be a good thing. Anyway. More of a brain-dump today, rather than anything in particular that has caught my attention. What I'm going to try and do is write myself to sleep, on the basis that if I get this stuff out, then I might feel able to fall unconscious. I'm already second-guessing myself about that, because as we all know, looking at a lit laptop screen at night (ie: a blue-spectrum cancer light) is the new sitting is the new smoking.
Anyway.
1.0 Smarts
I go through phases of reading and not reading. At the moment, I'm partway through Daniel Kahneman's Thinking, Fast and Slow (to the extent that one can be partway through a book one started reading *months* ago) and, more currently, partway through Nick Bostrom's Superintelligence, a book by an otherwise rather intelligent person and supposedly a survey of a) what a considerably-greater-than-human intelligence might look like, b) how likely it might be to happen, and c) what we might or ought to do about it, given whatever odds we figure out in (b).
As an aside: it strikes me that someone ought to come up with (I'm looking at you, Mr. Greg Borenstein) the Drake Equation for Artificial General Intelligence, if someone hasn't already - in that it should be possible (ha) to map out a good and accurate survey of what conditions we think might be required for the production of Artificial General Intelligence, and the likelihood of each of those things happening. In the same way that the Drake Equation is full of a bunch of reckons about things like the rate of star formation, the fraction of planets that could potentially support life (obviously a complete reckon just a few decades ago, and perhaps a milder reckon nowadays) and the fraction of planets that might go on to develop intelligent life (at this point, this really is the equivalent of scientific fanfic), your Borenstein Equation would similarly attempt to map out: a) a sort of average rate of technological progress in whatever processing substrate or capability - say, the number of petaflops in the world, b) the current likelihood of us being able to image a working brain to sufficient resolution, c) the number of attempts around the world at generating artificial general intelligence, d) the number of GPUs that Nvidia has sold in the last financial quarter, e) whether Facebook has hired any more experts in convolutional neural networks, and so on. You might - even if you didn't really understand the words - have cottoned on to the idea that I was getting just a little bit facetious towards the end of that last sentence.
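If you wanted to take the joke literally, the structure is the same as the Drake Equation: multiply a chain of reckons together and pretend the result means something. A minimal sketch, with entirely made-up factor names and placeholder numbers (none of these are real estimates, and "Borenstein Equation" is my own coinage above):

```python
# A toy, Drake-style "Borenstein Equation": the chance of AGI in some
# window, expressed as a product of reckons. The factors and the numbers
# below are placeholders for illustration - the point is the shape of the
# calculation, not the values.
from math import prod

factors = {
    "p_hardware_sufficient": 0.5,   # enough petaflops exist somewhere
    "p_brain_imaging": 0.2,         # a working brain imaged to sufficient resolution
    "p_algorithmic_insight": 0.1,   # someone finds the right architecture
    "p_serious_attempt": 0.8,       # at least one well-funded attempt underway
}

def borenstein(factors):
    """Multiply the reckons together, exactly as the Drake Equation does."""
    return prod(factors.values())

p_agi = borenstein(factors)
```

Which, of course, tells you mostly about the confidence of whoever picked the numbers: the output is only as good as the weakest reckon in the chain.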
I digress.
Bostrom's book is vaguely interesting so far: essentially, I don't feel like I've gotten to the potentially meaty bit, which is the "how scared of Artificial General Intelligences should we be" part. That part somewhat recalls both Roko's Basilisk and Sam Altman's recent musing on artificial general intelligence (ie: if they're going to happen, then bad guys are going to make them, so good guys should try to make them first), which in my mind *feels* practically the same as "the only way to stop a bad guy with a gun is a good guy with a gun" - and, well, let's just not see what's behind the door that says "artificial general intelligences don't kill people, people with the ability to stimulate hard-takeoffs of singularity-level intelligences kill, or rather, rapture, entire species".
Anyway, some more brain-dumping. One of my favourite scenes from a Greg Egan book is from Diaspora, specifically the Orphanogenesis section, which in the universe of that particular novel tells the story of how, every so often, the programmed infrastructure of an uploaded "city" decides to create an orphan: a digital consciousness with no parents. It meticulously describes - in a way that's very different from, say, Ted Chiang's The Lifecycle of Software Objects - how an artificial consciousness might come into being, from a pseudo-biological-process point of view. All very beautiful and all - to my mind at least - as wonder-invoking as anything in the natural world. Only made-up, of course.
It's also still on my reading list - and I don't really know how outdated it is - but something struck me when I first learned about Minsky's ideas about the society of mind. There's something particularly compelling (guiltily, even) and *believable* about the idea of most conscious actions being post-rationalisations and explanations-of-what-happened rather than "I chose to do this" - especially given the reams of experimental data showing that, for certain tasks, we initiate actions before we become consciously aware of them. Anyway: for those who've been following along for a long time and remember my story about the fast brain and the slow hands, there's something appealing, when I try to look inside my own head, about the complete and utter *chaos* that's in there: all these concepts and thoughts jiggling around while, at the same time, a pattern matcher in there vainly tries to make sense of it all.
Although I suppose you can read that and end up thoroughly depressed at the whole of the human condition, so there's that.
2.0 Odds
A collection of things that have, as they say, caught my attention:
- via Mike Isaac on Twitter, Uber have taken the next inevitable step and announced a tie-up with a frequent-traveller reward programme.
- via Callie Neylan on Twitter, Microsoft's latest productivity vision, also known as corporate video fanwanking. As Neylan astutely points out: how does technology help the world's poor? Or, even, are we to assume that both Kat and Nola - the former a "young, independent marine biologist" and the latter a "corporate executive" - are zero-hours workers in this shiny future?
- via Charlie Lloyd on Twitter, Maersk, in the business of moving things from one place in the world to another place in the world, demonstrates in its annual report that you can certainly make a lot of money doing that.
--
Look at that. Just over 1,100 words and it's only 1:23am. Maybe I'll try to go to sleep again.
Best,
Dan