Episode Eighty Five: Solid 1 of 2; Requests: Cities and Technology

by danhon

0.0 Station Ident

It’s 11:33pm and I’ve just finished Day One of O’Reilly’s Solid conference. At the same time, I’ve been fielding email – the people who’re interested in having conversations with me continue to want to have conversations, and there’s a bit of juggling around with availability and time. And at the same time, thank you so much for the notes that continue to come in about my news from last week. I’m incredibly fortunate in that I’ve been able to come down to San Francisco this week and be surrounded by good friends from around the world, as well as meet super interesting new people.

At the same time, waking up this morning was a bitch. Since Friday, I haven’t had a good night’s sleep and last night wasn’t an exception. I pretty much woke up every couple of hours, and needed to get up at 7:30 anyway in order to hit a meeting in the morning at the conference location. And no matter how many emails I get from interesting people, I’m clearly still super stressed out with everything that’s been going on. So tonight’s an experiment and I’m going to see if taking some of my lorazepam is going to help. Because hey, pharmaceuticals.

1.0 Solid 1/2

There’s been a certain buzz around O’Reilly’s Solid[1] conference. It’s twelve years since the first O’Reilly Emerging Technology conference[2] – a gathering as much of interesting people and the things that they were doing, organised around some sort of ineffable future-ness. Part of the rhetoric around Etech – which I think I only managed to attend once, in 2004 – was that it was concerned with genuine outbreaks of the future. Back then, of course, the future was in bits, not bits-and-atoms, and there was a palpable excitement in connecting people pre-Wikipedia, pre-YouTube, pre-Facebook and essentially pre the consumer internet that practically everyone (okay, fine, only one billion people) is now on.

But here we are at Solid and it is a weird thing. Like Etech, it doesn’t know who it’s for, I don’t think – we have marketing-style keynotes from the CMO of General Electric and at the same time beautiful art and the nascent introduction of new materials – radical atoms – from Hiroshi Ishii. So – and as friends at the conference pointed out – you have Cisco’s Cisco Live internet-of-things style conference down in the city proper, and out here at some sort of O’Reilly outpost, you have a semi-fringe – albeit one with Carbon Fiber(TM) level corporate sponsors.

Because you see, in the years since 2002, the internet grew up and reached out and touched practically everyone. You couldn’t do an ETech now without a corporate influence. We can’t ever go back.

But (there’s always a but) there’s a sort of weird undertone: there were the throwaway references that Nick Negroponte had led us geeks astray by preoccupying us with the pure digital, by entrancing us with magical programmable pixels when, all along, pure digital was a distracting rabbit hole that meant we hadn’t engaged with the world. So what the rallying calls sound like from people like Astro Teller of Google[x], when he proclaims that to solve physical problems we need to make physical things, is that the geeks have emerged blinking into the Bay Area sunlight, discovered or remembered that they’re embodied, and have decided to take charge. To change the world they have to interact with it – which sometimes produces off-colour affectations, like slides in giant 72 point Helvetica asking the rhetorical question:

“How do you interact with the world?”

to which we pseudo-humanists who can also bridge the technological divide rather facetiously answer: well, you managed to make it here.

This is, in a way, a language thing, of course. California – especially the part that comes to conferences like this – has its own particular idiolect, its own way of talking, and what we really mean when we say “interact with the world” is this: with built systems. But we have already built systems; we already interact with the world. The design question is: how do we *want* to interact with the world? Under what terms? How should the world respond to our needs and our desires and our preferences? It has always done so, after all – you don’t even need to be conscious to affect the world, because you exist in it.

All of the advances and research I’ve seen have been amazing. They are all happening, whether I like them or not. And, in general, like most “neutral” technology, I like them. Apart from maybe the Boston Dynamics guys with their BigDog safely powered off in the corner.

It’s always been the way that we talk about this technology, how we harness it and the lens through which we apply it that’s been interesting. When we talk about Baxter as a robot that can do what a six year old human can do, and mention that six year olds can do what the human workers in China can do, then that reflects upon how we view the world. When we talk about APIs for humans and reducing ourselves to systems, we’re not necessarily talking about augmenting ourselves. And yes, I guess it is a step forward for us to talk not of user-interfaces but instead Human APIs, but then you unpack that phrase and you have a Human Application Programming Interface, and I don’t know about you, but that framing unsettles me.

There are so many interesting things here. Baxter (2012), a robot that’s designed to be teachable and trainable in the same way that a human worker can be, sports an anthropomorphic face, GERTY from Moon (2009) style, but a face that from my point of view is somewhat unsettling and shifty – which could’ve been an easy thing to fix in terms of making robots more accommodating to the foibles of humans. And Rodney Brooks, Baxter’s father, talks about the fact that humans find robots like Baxter disconcerting when they are *unable* to precisely reproduce motion and tasks and instead act more reactively (like a human); we prefer them to be predictable in their interactions. This, of course, can strike you as strange given that humans aren’t particularly predictable in the first place, and that one of the major problems in interpersonal relationships is that oh for god’s sake if only I knew what you were actually thinking and why you did that thing you just did. In a way, this want or need for robots to be predictable is almost as if the theory-of-mind model were some sort of computationally intensive task, and it’s much easier to interact with things in the world if they have no animus of their own. Until, of course, everything goes Sorcerer’s Apprentice on you and your Roombas really fuck up cleaning your house.

It’s not a surprise, then, that when someone like Teller talks excitedly about being able to program cells with DNA, with the cells as the runtime environment – and it turns out that cells know how to copy-and-paste themselves – if you try and distance yourself from the excitement, you worry about a naive view of the world. “It’s great,” says Teller, when you’ve got an ex-marine who can take down your experiment when it escapes from you and bring it back under control with a Bowie knife. The inverse reaction is, of course: holy shit, you needed an ex-marine with a Bowie knife on hand to make sure your balloon didn’t escape.

What’s refreshing is when someone like Ishii is able to step back and say: this part, this is just art. This doesn’t have to be something massively disruptive, and what we’ve done with this programmable matter is, admittedly, at a stupendously low resolution right now (we’re at the 320×200 CGA era of programmable matter), albeit still fascinating.

The rest of the conference has been slightly more predictable: yes, there’s a palpable excitement in terms of how big this field is going to get, and there are starting to be good conversations around what’s been learned over the last couple of years. A good session on cadence, for example, and timing, brought a high-level overview of the types of behaviours and the frequency and duration of interaction and messaging that make sense when the internet and digital interactions aren’t just something that’s over there, but instead something that’s persistent and pervasive – in fact, something that’s part of the environment.

I think that’s the general gist when, at a high level, Negroponte is accused – in a way – of having misled us. The focus on the promise of pure digital, of the power of only the pixels, as opposed to the atoms that the pixels are necessarily laid upon as a substrate, was an ideological charge: a thrust to pay attention to the promise of digital. But extremes never win out, and in the end, as we’re seeing, the digital was always part of the physical. We are embodied, always have been, and will be – at least for the foreseeable future. The physical world is something that we don’t have a choice about, in a way.

Sure, there have been parts of today that have been, in a little way, perplexing or irritating. There have been panels or sessions that have felt like little more than advertorial, and it feels like there’s much more commercialism and less academic theory this time round than in 2002 or 2004. But that’s the nature of the world, and the world changed in the last ten years. On the one hand, it’s easy to be impressed by the art that Bot&Dolly are able to produce; on the other, the in-person demo was unsurprisingly underwhelming. It turns out that it’s pretty easy to create a compelling piece of film, and much harder to make that experience hang together in real life with a mirrorball on the end of a robot arm. Consensual reality is harder to edit than non-linear film.

[1] http://solidcon.com/solid2014
[2] http://en.wikipedia.org/wiki/Emerging_Technology_Conference

2.0 Requests: Cities and Technology

So, roughly, the requests that I got in response to episode eighty four were:

– (My) job search
– Cities and technology
– Communities
– Higher education and early education
– Gov.UK in 2018
– Industrial design retrofits

and I’m going to pick cities and technology to write about a little bit, just because it ties in nicely with some of what I was immersed in today at Solid.

There is a bit of debate, I think, as to whether we’re actually going to achieve the sort of end-goal of a smart city in which sensors genuinely are everywhere. I think it’s difficult to see at the moment: while we can see (or hope) that Moore’s law is going to bring us the prototypical sensing motes that are pretty much cheap or effectively free to throw around and dispose of, what we have difficulty seeing right now is the commercial incentive for doing so. In other words: what are the incentives for pervasive sensing across the material (note: not the infrastructure) of a city that will result in the wholesale integration of that sensing capability? Or to put it another way: who’s going to care enough to want to put all that sensing capability in, never mind making sure that it can all be aggregated in a useful and actionable way?

A way to think about this is to not think of sensing as an add-on to an object. For example, it’s easy to think of vision as an additional sensing capability that you can add to advertising hoardings. This means that you start considering the vision sense as an added cost that needs to come with a concurrent benefit, otherwise it’s not worth doing. In which case it’s not worth adding vision to *all* billboards until the cost is practically negligible, so that it all evens out in the end. You’ll get a few high-value billboards that make significantly more money because of their sensing capability, and you’ll get the usual long-tail relationship with every other billboard. But that betrays what I think Teller and Ishii would say is a mechanical, 20th-century civil-engineering mentality of looking at the world.
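That cost/benefit decision for each billboard can be sketched as a toy break-even calculation. All the numbers below are made up for illustration; the point is the shape of the decision, not the figures.

```python
# Toy break-even sketch for augmenting a billboard with a vision sensor.
# Every figure here is hypothetical, purely to show the long-tail effect.

def worth_augmenting(monthly_revenue, uplift_fraction, sensor_cost, months=24):
    """Does the extra revenue over the horizon exceed the cost of the sensor?"""
    added_revenue = monthly_revenue * uplift_fraction * months
    return added_revenue > sensor_cost

# A high-value billboard clears the bar easily...
print(worth_augmenting(monthly_revenue=50_000, uplift_fraction=0.05,
                       sensor_cost=10_000))  # True
# ...while a long-tail billboard doesn't, until sensor_cost falls toward zero.
print(worth_augmenting(monthly_revenue=500, uplift_fraction=0.05,
                       sensor_cost=10_000))  # False
```

Run with different sensor costs and you can see the point in the argument above: only when the cost is practically negligible does augmenting *every* billboard pencil out.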

Instead, it’s perhaps better to think of what happens when sensing is an integral attribute of the materials that you’re using to construct the objects of a city. That’s why I talked about material and not infrastructure above: if we reach a point where *all glass senses*, then you don’t really get a choice. This is, I think, the demarcation between dumb material and smart matter: smart matter at least is able to sense its environment. I saw this today with wearable prototypes from Georgia Tech and in pneumatic robotics from Otherlab: the piezoelectric pressure sensors used in prototype gloves from Georgia Tech and the pneumatically powered exoskeleton from Otherlab have sensing and reactivity built in as *inherent properties of their materials*. You don’t need to do anything to the piezoelectric material – sensing pressure is an inherent characteristic of the material itself.

We don’t have that right now. We have to augment the objects that we create with sensing capability, which necessarily leads, I think, to this sort of cost/benefit analysis as to whether or not to proceed with augmentation. It’s a different matter entirely if, instead of augmenting, we are building out of smarter materials. In a way, this makes me feel like the current iteration of smarter cities, when it talks about augmenting existing structures, is the first tiny baby step: you’d much rather build something that can sense natively than retrofit sensing on, right?

There’s a wonderful demo of work done out of Disney’s Pittsburgh research lab that is a halfway house: adding sensing capabilities to existing materials in a slightly-less-than-fully-integrated way, in that there’s no obvious sensor package. By using what they call swept frequency capacitive sensing[1], they’re able to add touch and gesture sensitivity to everyday objects without needing some sort of sensor blister. The analogy that I have in my head right now is of something organic: skin is necessarily a sensor as well as an organ that’s got other important jobs to do. I wrote before about a certain sense of proprioception and what that might mean for a city: it seems that when we’re talking in terms of infrastructure – the wet, messy organs of a city – I would much rather that they natively sense rather than have sensing be something that’s added to them.
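The trick in swept frequency capacitive sensing is that instead of reading a single capacitance value, you excite the object across a range of frequencies and get a whole response curve, which you then match against previously recorded gestures. Here’s a minimal sketch of that matching idea – the response profiles are invented, and I’m using a simple nearest-neighbour match as a stand-in for whatever classifier Disney’s system actually uses:

```python
# Toy sketch of classifying swept-frequency capacitive profiles.
# A "profile" is the capacitive response sampled at several excitation
# frequencies; different grips produce differently shaped curves.
import math

def distance(profile_a, profile_b):
    """Euclidean distance between two response curves."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(profile_a, profile_b)))

def classify(profile, training):
    """Nearest-neighbour match of a profile against labelled examples."""
    return min(training, key=lambda label: distance(profile, training[label]))

# Hypothetical response curves, sampled at five frequencies.
training = {
    "one-finger touch": [1.0, 1.2, 1.8, 1.1, 0.9],
    "full-hand grasp":  [2.0, 2.5, 3.1, 2.8, 2.2],
}
print(classify([1.1, 1.3, 1.7, 1.0, 0.8], training))  # one-finger touch
```

The point is that the whole curve, not any single reading, is the signal – which is what lets one electrode on an ordinary object distinguish a fingertip from a grasp.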

[1] http://www.disneyresearch.com/project/touche-touch-and-gesture-sensing-for-the-real-world/

Okay. That’s it for tonight. It’s been a long day. As ever, please send me notes. I’m working through a not inconsiderable backlog at the moment, but rest assured, I do read them all and I do try to reply to them all.

Best,

Dan