Episode Eighty Six: Solid 2 of 2; Requests – GOV.UK 2018; Next

by danhon

0.0 Station Ident

Today, reading LinkedIn recommendations as they came in felt like reading eulogies. Apart from me not quite being dead. Not yet, at least. Or, I was dead and I hadn’t realised it yet. It doesn’t matter, anyway: all the recommendations from people I’ve enjoyed working with over the past three years just feel, unfortunately, like double-edged knives – ultimately good but only really readable with a twist.

Right now is a bad time, one of those terrible times when it doesn’t even really matter that one of my good friends has pulled me aside, insisted that I have something to eat and sat patiently with me in a pizza joint while I stare off into space and mumble. It doesn’t matter that he’s great and doing these things for me and telling me that this too will pass: I am hearing all of the words that he’s saying, the sounds he’s making that make all the little bits of air vibrate and hit my ear and undergo some sort of magic transformation as they get understood in my brain. But they don’t connect. Understanding is different from feeling. And right now, I’m feeling useless and broken and disconnected and above all, sad. But I can’t feel those things. I have meetings to go to. Hustle to hust. Against what felt at times like the relentless optimism of an O’Reilly conference I had to finally hide away for a while, behind a Diet Coke and a slice of cheesecake, because dealing with that much social interaction was just far too draining.

And so I’m hiding again tonight, instead of out with friends, because it’s just too hard to smile and pretend that everything’s OK when it’s demonstrably not.

1.0 Solid 2 of 2

Carl Bass’s opening keynote[1] this morning was fantastic, a great example of what a good keynote should be. Bass delivered it pretty effortlessly, and something that could easily have felt or sounded like a sponsored slot didn’t feel like it at all. It’s not like Autodesk have a vested interest in the Internet of Things – they did come out with their own 3D printer, after all – and like every expensive piece of professional software (I’m looking at you, Adobe) they’re looking to make sure they remain relevant and useful in a post-PC, fluffy cloud world.

Bass did a good job guiding us through four categories of hardware/manufacture: additive, subtractive, robotic assembly and nano/bio. The first two are pretty much mainstays of the new methods of making, but something that Bass was careful to emphasise was what you got for free with 3D printing methods and what the problems were. Bass’s point was that the big value isn’t necessarily in consumer 3D printing but in large-scale industrial 3D printing. Or, rather, what it means when we retool our manufacturing processes with these new capabilities. Because what we do get with 3D printing is things like shape complexity for free – the idea that we can generate arbitrarily complicated 3D shapes and that the manufacturing process doesn’t care whether they’re, I don’t know, Sierpinski fractal triangles or a solid cube: they’re both as easy to make as the other.

What we get on the downside of these new manufacturing methods are problems like scalability, where print time grows as the cube of a part’s linear dimensions, so it just takes *a very long time* to make big things. Almost unacceptable amounts of time. And it’s not necessarily something we can throw parallelism at. Bass’s point here is that although 3D printing and the ecosystem surrounding it makes use of Moore’s law, the problems that are being faced aren’t necessarily ones that are *solved* by Moore’s law. They’re different ones, and we’re not going to get advances in build time for free, for example.
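
Bass’s cube-power point is easy to sketch numerically. This is a toy illustration, not a figure from the keynote: it just assumes a printer deposits material at a roughly constant volumetric rate (the 10 cm³/hour number is invented), so print time tracks volume, which grows as the cube of the linear size.

```python
# Toy illustration of the scaling problem: if a printer deposits
# material at a constant volumetric rate, time grows with volume,
# i.e. with the cube of a part's linear dimension.

def print_time_hours(edge_cm, rate_cm3_per_hour=10.0):
    """Hours to print a solid cube of the given edge length.

    The 10 cm^3/hour deposition rate is an invented placeholder,
    not a figure from Bass's talk.
    """
    volume_cm3 = edge_cm ** 3
    return volume_cm3 / rate_cm3_per_hour

small = print_time_hours(5)    # 5 cm cube  -> 12.5 hours
large = print_time_hours(10)   # 10 cm cube -> 100 hours
print(large / small)           # doubling the size octuples the time: 8.0
```

Which is the shape of the problem: doubling a part’s size octuples the wait, and no amount of Moore’s law in the toolchain changes the physics of laying down material.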

The two most interesting parts of Bass’s keynote were when he talked about bio/nano manufacturing, in particular Cambrian Genomics and the Wyss Institute and what they’re doing with DNA printing (which is exactly what it sounds like: printing with adenine, cytosine, guanine and thymine) to create stochastically self-assembling structures that do useful things, like encapsulate chemical payloads and automatically deliver them upon encountering the right kind of cellular environment. Bass tells a compelling story of how tantalisingly close we are to bioforming materials and objects: that it won’t be that long until we’re building things the same way nature does, out of seeds and biological processes that only need the sun, carbon dioxide, water and dirt.

Alongside that was Bass’s submission that Autodesk really gets the cloud. I’m not entirely sure how persuaded I am about this because I don’t know enough about the computational complexity involved and whether it really does require leveraging compute infrastructure like ‘the cloud’ or whether you could probably do the same thing if you had one of those fancy new Mac Pros. In essence, what it sounds like is that Autodesk were able to approach the problem of how to help designers from a different angle: in the case of CAD/CAM engineering there are a whole bunch of multivariate problems where there is a solution space that needs to be explored. Previously, Bass submits, it had been too time-intensive to explore the entire solution space. You would pick a coarse, low-resolution grid of the solution space and ask those questions, and you’d try to get a good-enough solution for the variables you were solving for. With cloud compute, Bass was saying, it’s in principle easy (and cheap, and fast) to instead search the entire solution space. But searching the solution space in that way is simply asking a million questions, getting a million answers and picking the right one: in a way, it’s going the wrong way around. Another way, breathlessly reported in publications like Fast Company, is instead to provide a design system with a goal (say, designing a chair, because it turns out that computers, just like human designers, can’t get enough of designing chairs. Just wait til they start on boarding passes) and for the system to iterate over finding solutions for the goal *that can then be edited by a human*. Ultimately what Bass was able to show, pretty persuasively and entertainingly, was that Autodesk was still pretty relevant.
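
The coarse-grid-versus-whole-space distinction Bass was drawing can be sketched in a few lines. This is an illustrative toy, not Autodesk’s actual approach: the `score` function stands in for an expensive CAD simulation, and the design space and its optimum are invented.

```python
# Sketch of coarse vs exhaustive design-space search. The 2D "design
# space" and the score function are invented stand-ins for a real,
# expensive multivariate CAD evaluation.
import itertools

def score(length, width):
    """Pretend simulation scoring a design (higher is better).
    The optimum at (0.37, 0.81) is arbitrary."""
    return -((length - 0.37) ** 2 + (width - 0.81) ** 2)

def best_design(resolution):
    """Grid-search the unit-square design space at a given resolution."""
    steps = [i / resolution for i in range(resolution + 1)]
    return max(itertools.product(steps, steps), key=lambda p: score(*p))

coarse = best_design(4)     # 25 evaluations: the old, cheap approach
fine = best_design(1000)    # ~1M evaluations: the "just use the cloud" version
print(coarse)               # (0.25, 0.75) -- good enough, not right
print(fine)                 # (0.37, 0.81) -- the actual optimum
```

The coarse grid lands near the optimum but not on it; the million-evaluation pass finds it exactly. Which is Bass’s point, and also his caveat: brute force works once compute is cheap, but it’s still a million answers waiting for someone to pick the right one.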

Neil Gershenfeld from the Center for Bits and Atoms at MIT delivered another well-received keynote[2] that I interpreted as a sort of smart materials science. It isn’t often that you get someone opening with a sort of uber-combo move of Claude Shannon, John von Neumann *and* Vannevar Bush, but hey, if you’re setting tone I guess there aren’t that many more intimidating routes. But anyway: the crux of what Gershenfeld was saying, I think, was that hey: digitisation was a great conceptual advance in terms of stuff that humans have come up with, and it’s high time that we applied the concepts of digitisation *as advanced by people like Shannon, von Neumann and Bush* to materials themselves, and not just made digitising/digitised *machines* out of materials.

And so his four-step roadmap to a genuine Star Trek replicator runs from computers that can make machines, to machines that can make more machines, to codes that can be applied to materials, all the way through to programs that can be applied to materials. And at the very end of the line, you have smart matter. It’s when someone on stage points out something like Superstorm Sandy causing $20bn worth of damage and our only real defence against it being wet bags of sand that you realise how smart people are and how dumb you are for not realising it sooner. So the idea of trying to print mountains (and hey, you don’t have to print a *mountain*, you want to achieve the effect of a mountain only more efficiently, so you can *cheat*) is just a) literally awesome and b) audacious in a way that can only really command respect.

It goes a little sideways when the excitement turns to a $20bn destruction of cost in terms of printing out airplanes instead of having to make all the little bits individually and then assemble them, because you’re suddenly wondering: where’s that $20bn going to go? But, to be fair, it’s not the job of people pushing forward the state of the art in materials science to be concerned with the economy, right?

Instead, what we get are things like the frankly offensive session Wearables at Work[3] that, well, let me just quote:

“The real power of wearables comes from the behavioral data they generate and the environmental interfaces that they enable. Pairing data from these wearables with micro-level outcome data enables online-style behavioral analysis and A/B testing in the real world at an unprecedented scale. Physical changes in the workplace, from autonomous augmented cubicles to shifting walls will push the boundaries of what organizations can accomplish. This will not just change how people are managed, but will fundamentally alter the world economy.”

Warber is, for example, deeply and genuinely excited about measuring your posture, your cortisol, and all manner of implicit datapoints about your physiology in order to manage you better in the workplace. And, let’s be clear here: we’re not all on the same page when we use the phrase “manage you better”.

One of the themes that has been running for a long time in my newsletters has been how important empathy is, because it turns out that when we’re talking about an internet of things, one of the things is people, and people don’t like to be things. Indeed, when we dehumanise people, generally speaking, bad things happen. So instinctively – whether Warber’s intentions are good or not – any discussion that involves instrumenting people for the benefit of others, as opposed to the benefit of those being instrumented, feels like a Bad Thing.

See, it’s all very well when Ishii talks of doing things because we can and because they’re art and they just *are*, but the problem with rhetoric like Warber’s is that it shows our tendency (or at least some members of our species’ tendency) to want to treat other humans like they aren’t humans. To reduce or abstract them into individual units. At the same time as other speakers are talking of a 20th-century aberration of the mechanical and industrial that sought to treat everything the same way, at the same time that Baxter is proving interesting precisely because it does not do everything the same way each time, there’s something discordant in talking about measuring humans in the workplace and instrumenting them. Because if there’s anything to blame for bad management, I’d hoped it wasn’t a lack of instrumentation.

[1] http://solidcon.com/solid2014/public/schedule/detail/33744

[2] http://solidcon.com/solid2014/public/schedule/detail/35425

[3] http://solidcon.com/solid2014/public/schedule/detail/33559

2.0 Requests – GOV.UK in 2018

Roo Reynolds was the one who asked me to have a think about what GOV.UK might look like in 4 years’ time. Or, rather, what I’d like to see in GOV.UK in 4 years’ time. As a high-level reckon, I’d like to assume that a lot of the easy-hard stuff – the transactional stuff and the meat and potatoes of government – will have been taken care of by then. All the things like better tax returns, notifications and licensing will be done, and the GDS team will have produced a base level of government services that are better-delivered-by-digital.

It’s what’s next that I’m interested in, and government’s role in that. Because those bottom-of-the-pyramid services are the most important ones that need to be implemented and improved first. When we move higher up the stack (or, as I’m happy to devolve to Mr. Heathcote, to different phases of life or to different priorities) I’m interested in government’s ability to help us achieve what we *want* to achieve as opposed to what we *need* to achieve. I mean: what are aspirational government services? Are there any? What might they be? How easy should government make starting a new company? How easy should government make working out what sort of company to start?

These feel like difficult questions because they kind of cut to the heart of what sort of role government should (or even could) play in citizens’ lives. And they’re less about survival and more about enablement.

3.0 Next

Over the past couple of days at Solid it’s become almost painfully apparent that the Valley, in broad terms, is suffering from a chronic lack of empathy in terms of how it both sees and deals with the rest of the world, not just in communicating what it’s doing and what it’s excited about, but also in its acts. Sometimes these are genuine gaffes – mistakes that don’t betray any deeper level of consideration, thinking or strategy. Other times they aren’t gaffes at all, and they betray at the very least a naivety as to consequence or second-order impact (and I’m prepared to accept that without at least a certain level of naivety or lack of consideration for impact we’d find it pretty hard as a species to ever take advantage of any technological advance). But let me instead point to a potential parallel.

There are a bunch of people worried about what might happen if, or when, we finally get around to a sort of singularity event and we have to deal with a genuine superhuman artificial intelligence that can think (and act) rings around us, never mind improving its ability at a rate greater than O(n).

One of the reasons to be afraid of such a strong AI was explained by Eliezer Yudkowsky:

“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

And here’s how the rest of the world, I think, can unfairly perceive Silicon Valley: Silicon Valley doesn’t care about humans, really. Silicon Valley loves solving problems. It doesn’t hate you and it doesn’t love you, but you do things that it can use for something else. Right now, those things include things-that-humans-are-good-at, like content generation and pointing at things. Right now, those things include things like getting together and making things. But solving problems is more fun than looking after people, and sometimes solving problems can be rationalised away as looking after people because hey, now that $20bn worth of manufacturing involved in making planes has gone away, people can go do stuff that they want to do, instead of having to make planes!

Would that it were that easy.

So anyway. I’m thinking about the Internet of Things and how no one’s done a good job of branding it or explaining it or communicating it to Everyone Else. Because that needs doing.

As ever, thanks for the notes. Keep them coming in. If you haven’t said hi already, please do and let me know where you heard about my newsletter. And if you like the newsletter, please consider telling some friends about it.