Episode One Hundred and Seventeen: The Material (4); More Ways Of Seeing
0.0 Station Ident
Happy 4th of July, America! I hope you're spending it by watching something like Independence Day. We went on a family day trip out to Bonneville Dam today - an FDR Depression-era project that ended up damming pretty much everything it could dam out of the Columbia River system, established a federal non-profit that generates a stupendous amount of free hydroelectric energy, and is now investing it in wind. In other words: it was a daddy day trip to see some awe-inspiring infrastructure, giant generators and a fish ladder with enough Pacific Lampreys to give you nightmares for at least the next six weeks. It ruled.
1.0 The Material (4)
I'm having a pretty good discussion with some of my readers in the invisible backchannel of newsletter email replies about all this smartness, wearables and sensing/interface malarkey. One of the good responses has been around the notion of smartness taken to an extreme - that smart dust[1] is something that's going to be inherently creepy because it essentially relies upon EasyHard solutions for doing the right thing at the right time through distributed intelligence.
I think (as I normally do, though) that there's a continuum here, and am pretty happy to agree that the particular notion of distributed smartness and an intelligence that is elsewhere (ie an agent, or "the cloud") is going to be difficult to make happen in a good enough way. Although there's a contra-indication right now, sitting in your pocket - because that's exactly what Google's trying to do with their Now cards and this quite horrible Fast Company Design article about an ever-present Google field[2] that imbues the universe with Googliness.
So, if I may, here's my third way. I don't think I need to go as far as smart dust to have smart objects - objects that are happy to sense instead of also being the interface. That is, I'm going to keep picking at this scab that keeps sensing and interface bound together until it falls apart and something interesting comes through, because I think something interesting *will* come through. And again, the particular kind of interface that I'm railing against is the laziness and tyranny of the screen, of the black mirror[3] that we gaze into.
It's not an invisible do-what-I-mean, not-what-I-say relationship to technology that I'm saying should come about with an environment imbued with smartness. Down that way lies what feels like a stupendously hard problem - one that, if the current belief is that it's soluble through the sheer application of the blunt force trauma of big data (ie: find enough humans and individual humans become essentially statistically predictable), is pretty depressing. But anyway: never mind that it's a hard enough problem for a *human* to do the right thing and not what I literally say, or to correctly impute my state of mind and determine my intention even *with* explicit communication. I do believe that what we want to do is provide agency and control to humans to make sense of a smart environment, rather than trying to second-guess them.
And that's what I think things like Google Now are trying to do - and what they won't necessarily be able to do, because they're missing something in their design. In other words, attempts to predict what I want may well be interesting, relevant and useful n percent of the time, but I wonder if we're about to fall into an uncanny valley that's exacerbated by an inability to interrogate and to retrain.
Imagine a human who tried to anticipate your every need, but was difficult for you to communicate with. This human or familiar or, say, piece of virtual cardstock, would follow you around and, based upon myriad but not all of your prior behaviour, would attempt to precog your next action. Without you *telling* it where you worked, it would guess. Without you telling it where your home was, it would guess. Without you telling it when you normally left for work, it would look for patterns.
What sort of personality, and what sort of relationship might you have with that sort of entity? One trapped in some sort of black mirror-ish phantom zone, only able to apparate out to you through any screen that you might be near, able to whisper into your ear, but not necessarily able to be trained. This is a strange relationship indeed - the personality-free agent who tries to service you.
It feels like part of where this is illustrated is in an emerging philosophical difference between the way certain Google and Apple services have been designed. Google's Now is predictive, using the vast power of a near-omniscient, non-sentient set of ill-understood algorithms. Siri - and Cortana, for that matter - only appear when you want them to, when you summon them. At your beck and call, hovering but never necessarily interrupting or intruding upon your presence in the way that Google's cards do at the moment. This isn't to say that one method or the other is better. Just that, at the moment, if you're going to predict something, it helps if you're going to be *right* and/or *relevant*. And then there are occasions when Apple gets it wrong too - the oft-derided "Looks like you have a busy day tomorrow"/"You've got an early start" in the notification view in iOS 7 gets a lot of stick for just being *off* a lot of the time.
So I think what I mean in terms of smarts-as-material is more low-level smartness. More low-level interrogation and sensing, rather than necessarily decision-making ability. Being able to ask what, how, when, and so on of many objects is going to make them a lot more interesting, but at the same time, they're still washing machines and kettles and toasters. A lot of the smartness is, I do think, just going to happen in "the cloud", mainly because the cloud has the luxury of time, space, power and computation. If we're going with a biological metaphor, smartness-as-material is more about extending the nervous system out into more devices, more places, more "things", rather than sticking brains in all of them.
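Here's a minimal sketch, in Python, of what "sense, don't decide" might look like - a hypothetical smart kettle that records and reports measurements but never decides anything. Every name here is invented for illustration, not anything that exists:

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import List, Optional

    @dataclass
    class Reading:
        quantity: str       # what was sensed, e.g. "water_temperature_c"
        value: float        # the raw measurement
        taken_at: datetime  # when it was sensed

    class SmartKettle:
        """A kettle that senses and reports, but never decides."""

        def __init__(self) -> None:
            self._readings: List[Reading] = []

        def sense(self, quantity: str, value: float) -> None:
            # Record a measurement; no opinion about what it *means*.
            self._readings.append(
                Reading(quantity, value, datetime.now(timezone.utc)))

        def ask(self, quantity: str) -> Optional[Reading]:
            # Answer "what was the last reading of X?" and nothing smarter.
            matches = [r for r in self._readings if r.quantity == quantity]
            return matches[-1] if matches else None

    kettle = SmartKettle()
    kettle.sense("water_temperature_c", 91.5)
    print(kettle.ask("water_temperature_c"))

The point being that all of the deciding - should the kettle switch off, should it tell you anything - happens elsewhere, in whatever has the luxury of time, space, power and computation.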
[1] Smart Dust
[2] Google Is About To Take Over Your Whole Life, And You Won't Even Notice
[3] Black Mirror
2.0 More Ways Of Seeing
I am thinking about more ways of seeing what's going on in the ether - in the algorithmic world - ways that make that world understandable. One way of dealing with this is to design systems such that users don't need to see what's happening: when you're looking at more single-use-at-a-time devices like the iOS kind, there's less of a need to interrogate the state of a system to see what else is going on in there, mainly because they had the chance (and decided, obviously) to re-architect and make some big decisions again. Namely: do you really need to be doing more than one thing at once, and how might we go about approaching that if we had the chance?
General purpose legacy computing, though, is a minefield. These are complex systems (though no more complex, I suppose, than whatever your average smartphone is doing these days) and *knowing what your computer is doing* is part of the diagnosis process whenever anything goes wrong. Opening up Activity Monitor or top or perfmon is, to some, an inscrutable glance at all the stuff that, swan-like, is churning away below the surface of the GUI. And, in olden, swap-laden, magnetic-spinning-media times, literally churning and thrashing. But it's not like any of that stuff is understandable - it requires knowing the arcane names of each piece of infrastructure, what they do, and spotting which-one-of-these-things-is-not-like-the-other in terms of using up resources or just being bizarrely named.
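For a flavour of what a slightly more humane top might look like, here's a rough Python sketch - using psutil, a third-party library that's my choice here, not anything blessed - that names the five hungriest processes in something closer to English:

    import time
    import psutil

    # Prime the per-process CPU counters; the first call always reports 0.0.
    for proc in psutil.process_iter():
        try:
            proc.cpu_percent(interval=None)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    time.sleep(1)  # let the counters accumulate over a real interval

    hogs = []
    for proc in psutil.process_iter(['name']):
        try:
            hogs.append((proc.cpu_percent(interval=None),
                         proc.info['name'] or '?'))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    for cpu, name in sorted(hogs, key=lambda t: t[0], reverse=True)[:5]:
        print(f"{name} is using about {cpu:.0f}% of a CPU core right now")

Still a list of arcane names, of course - the humane part would be mapping "mds_stores" to "the thing that indexes your files", which is exactly the translation layer that doesn't exist.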
So in that respect, I wonder: is there value - and what's the value - in exposing more of this under-the-hoodness in a humane and relatable way? Toyota Prius cars are renowned for showing almost Star Trek: The Next Generation-style Engineering Room thrumming diagrams of their warp cores providing power to their nacelles - I mean axles - and how their regenerative braking system is interfacing with the Bussard ramscoops to charge the main batteries of the car.
So in the same way that you're able - if you care - to walk around your house or your flat and see whether part of the structure is, say, sound, what's the same process for seeing what's happening with your networking infrastructure? Should you even bother to look? I mean, if everything goes slow for some reason, how do you start to diagnose stuff?
One of the most fun tools (for certain values of fun, of course) in the early days of 802.11b wifi was etherpeg[1], which sniffed unencrypted TCP/IP packets, reassembled them, and then displayed the images that it could find being transferred around your local network. In other words: it displayed the images from the webpages (and other network traffic) that other people on your network were looking at. Let's just say that it was pretty amusing to use in hotels.
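For flavour, here's a very rough, much-simplified gesture at the technique in Python using scapy (a third-party packet library; this is my sketch, not etherpeg's actual approach) - it skips the TCP stream reassembly that etherpeg did and just flags packets carrying JPEG magic bytes. It needs root to sniff, and on a modern TLS-everywhere network it will mostly see nothing, which is rather the point:

    from scapy.all import sniff, IP, TCP, Raw

    JPEG_MAGIC = b"\xff\xd8\xff"  # start-of-image marker for JPEG data

    def spot_images(pkt):
        # No stream reassembly here - just flag packets whose payload
        # looks like the start of a JPEG, a tiny fraction of what
        # etherpeg actually did.
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
            if JPEG_MAGIC in bytes(pkt[Raw].load):
                print(f"JPEG bytes flying past: "
                      f"{pkt[IP].src} -> {pkt[IP].dst}")

    # Port 80 only, because anything on 443 is TLS and unreadable anyway.
    sniff(filter="tcp port 80", prn=spot_images, store=False)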
This is about making the invisible visible. So much of it is hidden behind administrative/administrivia sections when what you're really looking for is a "hey, who's slowing down the internet" view. But right now, the only recourse is to hit the admin section of the local router, see if you can turn on some sort of logging, and then before you know it you're shaving yaks all the way down to seeing what's in cpan.org these days.
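And here's a sketch of what the raw material for that "who's slowing down the internet" view might look like - again with scapy, again hypothetical, and with the caveat that on a switched network your card mostly won't see other people's traffic unless you're running this on the router itself: tally bytes per source address for thirty seconds and name the heaviest talkers.

    from collections import Counter
    from scapy.all import sniff, IP  # scapy again; needs root to sniff

    bytes_by_host = Counter()

    def tally(pkt):
        # Attribute each packet's size to whoever sent it.
        if pkt.haslayer(IP):
            bytes_by_host[pkt[IP].src] += len(pkt)

    sniff(prn=tally, store=False, timeout=30)  # watch for thirty seconds

    for host, total in bytes_by_host.most_common(3):
        print(f"{host} sent roughly {total / 1024:.0f} KB "
              f"in the last 30 seconds")

The humane view would then translate 192.168.1.23 into "the Xbox in the living room" - which, again, is the part nobody has built.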
Along the same lines, Robin Sloan wrote an interesting piece about the Amazon Fire phone at the farmer's market[2] and, in my mind, about what we choose to make visible. There is something about the Fire phone - Amazon's latest push into the physical realm - that feels a little bit off. The fact that it is *such* a consumptive device (a concern that appears not to be leveled at it in the same way that it was leveled at Apple, for example) and that the primary interaction method appears to be a special button that lets you *find things to buy*.
And there's Sloan's ability to take a look at that phone and ask: what are the things the Fire phone doesn't see? It doesn't see anything that doesn't have a UPC. This is a phone that is completely and utterly made to exist for commerce. It is not a bicycle for the mind, it's a bicycle for getting things. And the idea of a phone or device that does better - one that sees everything and helps you understand that, annotate that and educate around that - what does *that* look like? It's essentially the five-year-old's ultimate phone: the one that you can point at *anything* and it will tell you about that thing.
So this is the phone you get from Amazon - the phone for buying things. So what would the phone from Wikipedia look like? Or what would the phone from Wikileaks (ugh) look like?
[1] Etherpeg
[2] The Fire Phone at the farmers market
--
Have a good weekend, and I'll see you on the other side,
Dan