Episode Ninety Seven: That Most Killer Of Deals; Need To Know
0.0 Station Ident
I got a bunch of notes back about what I wrote the other day about my depression. I don't really mind writing about it, and I think it's a tragedy that more people don't understand how someone can look fine on the outside but really not be on the inside. And in every single job where I've been in a managerial position, I've made it a point to make sure that everyone I work with knows it's OK - in the right way - to be able to talk to someone about what they're going through.
It was still there when I woke up today. I was supposed to go out with my son to a music class, and I just couldn't do it. I collapsed on a sofa and practically passed out. I *should* have been able to go (there's that should voice again), in fact, I'd done perfectly fine getting up in the morning and getting him dressed and fed and cleaned up and then cleaning up after him, but... I just couldn't take that next step. And I had an incredibly understanding wife who told me that if I needed to hide from the world for a bit, then that was OK.
I think it's a question of balance. I know that one of the problems I have is in dealing with things like they're absolutes: that if I take some time to slow down and recuperate by staying at home and away from people, essentially retreating a bit, then that's bad or a failure when I should be out getting stimulation. It doesn't matter if I might actually *need* to retreat for a bit, because that signal gets overridden by the guilt of essentially wanting to hide from the world and not deal with it. But today, I didn't take him to music class and instead looked after him at the park and we had a fun time on the swings and the roundabout and looking for planes. And whilst there's a little bit of it lingering, I can feel it receding. So that's good.
1.0 That Most Killer Of Deals
One of the things that stuck in my head from Mary Meeker's 2014 report[1] was her description of the invisible app. Meeker uses Matthew Panzarino's description of what an invisible app is:
"Now, we’re entering the age of apps as service layers. These are apps you have on your phone but only open when you know they explicitly have something to say to you. They aren’t for ‘idle browsing’, they’re purpose built and informed by contextual signals like hardware sensors, location, history of use and predictive computation.
"These ‘invisible apps’ are less about the way they look or how many features they cram in and more about maximizing their usefulness to you without monopolizing your attention."[2]
There's a bunch of ways you could describe this. One might be just-in-time apps: software that surfaces just when you need it, and not at any other time; it's this kind of "without monopolizing attention" that Panzarino's talking about. Another way of looking at it is that this is the dream of "do-what-I-mean" contextual computing: that finally, a computer will be smart enough to only tell me relevant information when I need it, and not before, certainly not after, and definitely never, not ever, irrelevant information.
I have a feeling that the dominant way of looking at a problem like this was one of smarts: someone like Kurzweil would say, well, when we finally have enough processing power we'll be able to use *smarts* to predict what you might like, or want, and when.
The horror, of course, is that something like this doesn't require smarts at all, it requires simple brute force. That's the thesis in Beau Cronin's O'Reilly Radar article about untapped opportunities in AI[3], and it points to a confluence of a number of trends that some of us might not be too comfortable about. Briefly, while the strong AI academics were busy not fulfilling the promises they'd dangled in front of us for the last forty-odd years, Google quite quietly amassed an unreasonable amount of data, an unreasonable amount of computing power, and then ran somewhat-reasonable machine-learning algorithms over them and came up with something (paraphrasing Douglas Adams) that was almost, but not entirely unlike artificial intelligence in narrow domains. Perhaps the best example of this is Google's success in machine translation.
The dream behind the invisible app is one that's always been with us, ever since we've gathered around a fire and had an argument with the next hom. sap. because, damnit, why couldn't you just do what I *meant*, not what I *said*? Why do I have to bother with all of this communication nonsense, when you could just implicitly *understand what I want* and just do it? So Panzarino describes them as:
"[A] social network [that] knows exactly what posts you’ll want to read and tells you when you can see them, and not before? What about a shopping app that ignores everything that you’re unlikely to buy and taps you on the shoulder for only the most killer of deals? What about a location aware app that knows where you and all of your friends are at all times but is smart enough to know when you want people to know and when you don’t?"[2]
There is, as ever, a bit of Californian Ideology seeping through here. This is pure utility-by-algorithm: only if we let the algorithm peer into our whole lives, unobstructed, will it be able to anticipate our every need, at our side, in our pockets, on our faces, in our field of view. There's also a belief here: if *only* we had enough data, if only we had more context, if only we put more sensors in phones, if only we collected it all - and not just yours, we need everyone's - then we could iterate over it and map/reduce and hadoop it into just this one thing, that will forever and ever, tell you what you need, just when you need it. And then life will be so much easier.
It's worth looking at this the opposite way: this anticipatory or context-sensitive computing utility is now being pitched as a way to save us from information overload. We can't possibly be expected to deal with the storm of notifications out there, so these new apps, in exchange for *more information about us*, promise to keep quiet until there's something they think is relevant to us.
But isn't the problem that all of these applications are *also* the ones contributing to the storm in the first place? This is something of a devil's bargain: create an environment (whether by design or not) so noisy that the solution is to hand over *more information* and instead accept the quietness and the judgment of the algorithm.
So you ask: what do they make us give? What's the cost that we're paying? It's hard for us to calculate as punters, because it doesn't have a currency amount attributed to it: all of this is "free" at the point of consumption. As ever, the bargain is information about our lives, and that information is delivered (and stored) in an opaque way. I'm going to point to Maciej Ceglowski's talk again[4], because *right now*, with Google, the bargain is: have a bunch of implicit data collected about you, and then when you go and ask what Google says it knows about you, it's incredibly coarse and, well, wrong.
Part of the argument here would be: well, you wouldn't be able to make sense of all that implicit data we have about you. And anyway, it might be useful, you never know! You never know when we'll surface - in Panzarino's words - 'that most killer of deals'.
I should be clear: I *want* context-sensitive applications. I want that magic future. But I've also grown up enough and changed from my 14-year-old self to understand the consequences of that magic future and what you need to exchange for it. Because as Ceglowski points out, right now the value exchange feels tipped in the wrong direction, or at the very least not equally balanced.
I would also *kill* for an example, any example, of this type of context-sensitive computing that is something other than the fabled "hey, here's that most killer of deals!" I mean, you don't see Picard pootling about in his Ready Room and then suddenly Main Computer says "Hey, Jean-Luc, but it turns out that someone just listed an original Shakespeare folio or whatever on FedBay, how much non-existent Federation money do you want to bid for it, oh, nm, I already sorted that out and it's in Transporter Room 2, Miles O'Brien is bringing it over now."
No (and I realise this is as lazy as the 'hey, here's a Starbucks coupon' perennial example that's merely been upgraded from its location-based incarnation), where's all my serendipity? Where's the genuine just-in-time *surprise*? Because all these examples are things that you know I'm interested in *when someone's got something to sell*. Where are the non-transactional examples? Where are the fun ones? Where are the even scarier ones? Where are the ones that are just goddamn frivolous?
And all of this is without the real horror: that with all of that context that's being harvested, with all of that implicit data, we'll find out how predictable we really are.
(Spoiler: very)
[1] http://www.kpcb.com/internet-trends
[2] http://techcrunch.com/2014/05/15/foursquares-swarm-and-the-rise-of-the-invisible-app/
[3] http://radar.oreilly.com/2014/06/untapped-opportunities-in-ai.html
[4] http://idlewords.com/bt14.htm
2.0 Need To Know
One of the notes I received in relation to one of my (many) rants about the quantified self was that there was one particular domain where it did indeed make sense, and it was one that I found myself in quite strong agreement with. Parents will understand this one, non-parents won't so much.
So it turns out, when you have a baby, people (qualified professionals like nurses, paediatricians and doctors) keep asking questions like: how much are they sleeping, how many times did they poop, what was the poop like, how often are they feeding, which breast, for how long, how often did they wake up.
Honestly, it's like they're obsessed with babies or something. The questions make a lot more sense when you understand that evolution has basically done a number on us - or women, at least - and attempted to perform some sort of optimisation tradeoff between how big our brains are and how big babies can be before they can't actually fit through the birth canal anymore. So the first three months *outside* the womb are pretty much a fourth trimester when, all things considered, it'd probably be best if baby stayed inside, but then baby wouldn't have a good way of exiting without doing an impression of that scene from Alien.
Anyway.
You want quantified self? That's quantified self. On the advice of good friends, we (or more accurately, my wife) used an app[1] to pretty much track *everything* about our newborn. And that meant that whenever a midwife or doctor or whoever asked a question about how well our baby was doing, we knew. To the minute. And then the awesome/creepy thing happened where it'd be able to predict his feeds down to the minute.
So, yes. Babies. Totally need to be quantified.
[1] https://itunes.apple.com/us/app/ibaby-feed-timer-breastfeeding/id395357581?mt=8&uo=4&at=11ly9m
--
Some housekeeping:
For my new(er) readers, the long-term archive of newsletter episodes is at http://newsletter.danhon.com/archive/ in case Tinyletter ever disappears (I hope it doesn't). Also, you should drop me a note and introduce yourself and say hi!
For everyone, consider forwarding this to someone who'd find it interesting. Or useful. Or funny. Or anything, really. Just don't waste their time. And if you think of something I should do to celebrate my upcoming one hundredth episode, do let me know.
For some people who know *exactly* who they are, please put all of these in a corpus and run a Markov generator over them and set up a parallel tinyletter please so I can go on holiday sometime.
Have a good weekend,
Dan