Episode Eighty-Eight: Smarts; The Real Internet
0.0 Station Ident
I'm writing this again at about 20k feet on the way back home to Portland from San Francisco. It's been a long, crazy week, one capped off with meetings that feel like they've been interesting and productive. So fingers crossed there.
1.0 Smarts
I woke up this morning to the news - or rumour, rather - that Apple was preparing an entrance into the internet of things, namely that there's evidence of some activity on their part in the smart home space. The easiest course here would be to just sit tight and wait a couple of weeks for their Worldwide Developers Conference.
After sitting through O'Reilly's Solid last week, one thing feels quite clear to me: the consumer benefit of an internet of things is still really opaque.
All of these benefits of smart homes - turning on lights when you get home, setting your security system, and so on - seem so abstract and so, well, the opposite of what we're probably going to end up using smart objects for.
What I mean to say is this: *even* with the Nest, which I think is probably the best or most developed smart-home object that we've seen, it's still hard to see (from a consumer's point of view) what clear benefits result from internet-connected things. This isn't to say that the internet of things is a solution looking for a problem, more that the right problem hasn't been found yet, or that the particular problem that can be solved by a network of things that can communicate with one another hasn't yet been extruded into reality.
In other words, it feels like we're in the Palm Pilot era of smart things: generally a good idea that a bunch of early adopters can see the value of, but for the vast majority, the use-case and benefit aren't tangible enough yet. Sure, it's not as bad as having to learn how to handwrite in a completely different manner just so a slow microprocessor has a chance of understanding our scrawl, but it's clear that we're not in the "just stick it in front of someone, play a video, and everyone wants one" era that something like the iPhone ushered in.
So part of the question is this: what are the missing pieces? On one level it's the articulation, the selling of the benefit, that's missing: why do I need this in my life and what problem (whether it's one I knew I had or not) does it solve? On another level there's a genuine question as to whether we're simply seeing the effects of a bunch of people sniffing opportunity and desperately jockeying for position for something that *is* coming but hasn't quite rezzed in yet. What do we need? What needs disrupting? How do you persuade people to buy new toasters or microwaves or televisions or thermostats or door locks or lightbulbs? These are not, obviously, things that traditionally have a frequent replacement cycle, because Moore's law just hasn't collided with them yet.
One particular aspect that I've been thinking about has been the wearables market. I suspect that the combined sales of activity trackers remain pretty low, relatively speaking. I find it telling that, having started wearing a Nike Fuelband again (an SE in volt, if you're asking), my Nike+ leaderboard, which used to be populated by up to twenty or thirty of my friends, is now practically *empty* on both the daily and seven-day slices of activity. Put it this way: no one wants to see their doctor every day, even when they're feeling well. So it's clear that, for whatever reason, the cadence of usage hasn't been properly considered for a general audience. I mean, really: who *wants* to see a dashboard of their personal fitness *every single day*?
The converse is my relationship with something like Moves, which just sends me a push/on-device notification every day to let me know how much I moved the day before. I now have a new awareness, a sort of long-term proprioception of how much my body moves, and here's the thing: I don't need to look at it every day. The long-term implications of this, for example in terms of lifetime healthcare and preventative diagnostics, haven't been touched upon at all. I would rather talk with my doctor, for example, or have that data available to *someone* I trust who can say: well, let's see - since our last well visit about six months ago your daily exercise has declined a bit. How are you feeling?
Our relationship with the internet, and with the devices we carry in our pockets that mediate our access to it, is based on short cadences (maybe necessarily, due to business imperatives?) and shows up in our use of terms like daily active user or monthly active user. But what of a yearly active user? What of that long-term relationship?
2.0 The Real Internet
Maciej Ceglowski[1] wrote an absolute barnstormer of a talk[2] for Beyond Tellerrand. It's long, yes, but you should go away and read it, and if it helps, I'll describe it as the sort of thing that I wish I could write in this newsletter.
Ceglowski's talk is basically a depressing (and yet accurate) look at the implications of the design of infrastructure and the things that infrastructure enables. He draws an analogy with the types of second-order applications that the United States' interstate highway system enabled, and the types of behaviours that then arose from such pervasive infrastructure. One way of thinking about the interstate highway system - or, at least, the American implementation of it - is as a sort of forcing function that makes certain artefacts and behaviours not only possible, but more likely.
Now that we have had a few decades of internet and a couple-and-a-half decades of web, now that there are adults who have grown up with it, now that we are in late-stage capitalism, it's time to take a good critical look at the sort of world the internet has enabled purely because of some early architectural and design decisions made at its birth. Much of Ceglowski's criticism is aimed at the *computer* part of computer networking, because the first issue he brings up is a computer's inability to forget. I think what Ceglowski's getting at is that in the physical world, the act of remembering requires intention and effort: one of his particular examples is how taking photographs involves a combination of physical, gestural acts that evolved from the mechanical and chemical through to the electronic. And yet taking a photograph, until recently, has involved at least some sort of physical signalling system. And when that signalling system started to disappear - say, in Japan, with the acts of certain people who were offensively enthusiastic in how they treated women - the physical world pushed back a little, to remind us that remembering is an intentional act that should have some sort of physical feedback or consequence for those around us.
Computers don't do that. By design, computers just *remember*. It is trivial to have computers remember everything they do, everything that passes through them, everything they touch, see, process. It is easier, now, to simply have a computer store everything than to work out what to discard.
There are stories of what happens when humans remember too much. That our species needs to be able to forget things, that we need to file the edges off, otherwise there is simply *too much* - otherwise it is simply too hard to move on or to progress. That, essentially, as a species we have evolved *with* the capacity to forget, as opposed to without. And Ceglowski brings this to the fore, because the capacity to never forget is the twin of knowing everything, of caching and storing everything about us: not because it could be useful, but because we can. And that ability is simply, unassailably, toxic.
I've had people say that I'm on a sort of empathy crusade, and I suppose this is one of those times when I'm going to be on that crusade again. It strikes me that as services strive to be 'friendly' or to anthropomorphise themselves - because it turns out we have a zero-day backdoor in our brains that allows that sort of short-circuiting - the delta between how these services talk and how they act is going to be nothing short of an uncanny chasm, never mind a valley. In other words: it doesn't matter that Flickr can tell you that you now know how to say hi in Swahili and that you think that's pretty cute; the fact of the matter is that *somewhere else in Yahoo!* there's a bit of that company that says: hey, I know you said don't track me bro, but we're going to track you anyway. Sorry about that. But hey, you know how to say hi in Swahili!
I'm not sure at what point this becomes intolerable. I'm not sure at what point the cognitive dissonance becomes too great, but it feels like parts of it are pushing through into the cultural consciousness, such that *some* people are aware of and somewhat nervous about it. It comes through in the increasing talk of people moving their email hosting from Google to a service like Fastmail, not just because it's (potentially) more secure, but also as a sort of statement. Never mind that, in practice, thanks to increasing centralisation, such a move may in the end, unfortunately, be more symbolic than effective.
But the point is this: in seeking to make services more approachable and a more ingrained part of our lives, in making them conversational, we may be exposing that which they absolutely aren't. You may well put an accessible face on an algorithm, but it isn't, and never has been, human.
[1] http://idlewords.com/about.htm
[2] http://idlewords.com/bt14.htm
--
That was a bit of a downer. We should still be super-excited, though. The internet is one of the things that has brought so many people together. It can still do that. It works for us. Not us for them.
Best,
Dan