s2e14: What You’ll Reap; Foo Thoughts; The Next Generation 

by danhon

0.0 Sitrep

9:30pm on Monday, August 3, 2015 in San Francisco. I’ve been in Sebastopol for the weekend after having been invited to take part in the nth annual O’Reilly Foocamp[0]. It was my second: the first was back in something like 2006 or 2007, and I have to admit it was a very nice ego stroke to be invited back again. This time, though, I very firmly resolved not to do what I did the first time, which was to be starstruck and intimidated and a) cling like a limpet mine to the people I did know, and b) hide. Probably the best experience of that first Foo was having the courage to ask Zoe Keating for a ‘cello lesson (having hit Grade 8 when I was 18 and then not really done anything with the ‘cello since) and then actually *having* a ‘cello lesson with Zoe Keating, and playing Werewolf with Jane McGonigal.

This time, I decided that I had No Fucks Left To Give, and would pretty much go in all guns blazing with a loud mouth and not give a shit about voicing my opinion about things. And, I think, it worked out very nicely: unlike last time, I proposed two sessions, the first one had more people in it than there were chairs, and the second one just turned into a nice, small conversation. And I met people I didn’t know, and got to talk to them. All in all, just the right level of stimulation: not too much, and not too little.

In other news, some of you are very funny wags indeed and pointed out that the Watsons from last episode had made the amazing step forward in artificial intelligence of being able to do cold readings[1] and a passable attempt at producing horoscopes. So there’s that.

Lastly, I’m going to try and do a thing and write a whole week’s worth of newsletters and *not mention advertising at all*. I know, right. We’ll see how *that* goes.

[0] Foo Camp – Wikipedia, the free encyclopedia
[1] Cold reading – Wikipedia, the free encyclopedia

1.0 What You’ll Reap

My son is now at the age where I no longer have to assume that he’s acting out because he misses me when I’m away traveling: he’ll just out and out tell us. I hadn’t really noticed or figured out that the reason he didn’t want to say bye-bye or give me a hug and kiss on FaceTime was that he *didn’t actually want to acknowledge me being around*, because I’m really dense and totally not picking up the cues. But now he’s just saying it: I miss you, daddy.

So, my super smart stupid idea was to deflect and do something that, as described by my wife, is just the stereotypical setup to a short science fiction story with a somewhat allegorical ending. The deal is this: I’m friends with and advise Makieworld[0], a company that, amongst other things, lets you design a doll in your browser or in an app and then get it 3D printed. I have one that looks like me[1] and that I dressed up for Halloween once. OK, the doll actually looks way more like me when it’s wearing glasses, which I got as a (3D-printed) accessory later. But anyway, the point is that the doll looks like me, and my son knows it looks like me because he’s pointed at it, sitting up on a faraway windowsill, and said that it looks like daddy.

So, I say, hey: I think you’re big enough and you’re careful enough that you can play with the daddy doll. Which, you know, he does, and gives it a big hug and kisses and plays with it and wants to show it all the things. Which means that we’re in Supertoys Last All Summer Long warning klaxon mode because the *other* thing about these Makie dolls is that they’re designed to be *just about right* and in just the right way for you to start putting batteries and computing infrastructure in them. This is, of course, the point at which my wife effectively tells me that a terrible science fiction short story is unfolding on her mark; *mark*.

The other thing about this, the thing about technologically mediated artifacts and kids, is this: once I had a toddler in the house I started being super conscious of what was going on, because the damn things have minds like sponges. Ben Hammersley has a great example of this in Possible Problems of Persona Politeness[2], where just *one* of the points he makes is that unlike Siri, Amazon Echo/Alexa is designed as a servile female voice that you can be curt and abrupt with and bark imperatives at, because she doesn’t respond in a hey-this-thing-is-like-a-human-mirror-neuron-triggering way. As Hammersley points out, Alexa doesn’t say thank you. She just, well, “meets your user need”. So Hammersley ends up being progressively more curt and abrupt and impolite to her until the exact moment that he realises that his daughter is hoovering up every nuance of her father’s interaction with the world and the people and objects in it:

My daughter is too young to speak yet, but she does see and hear all of our interactions with Alexa. I worry what sort of precedent we are setting for her, in terms of her own future interactions with bots and AIs as well as with people, if she hears me being forced into impolite conversations because of the limitations of her household AI’s interface. It’s the computing equivalent of being rude to waitresses. We shouldn’t allow it, and certainly not by lack of design. Worries about toddler screen time are nothing, compared to future worries about not inadvertently teaching your child to be rude to robots.[2]

Some more anecdata for you, which if you’re a parent is probably completely unsurprising. My son, completely unprompted, has started trying to ask Siri to “play Shake It Off”, but because he’s about two and a half, his enunciation is fine for a human to deal with but not quite good enough for Siri to deal with. So he gets a little bit frustrated. My yoga teacher’s four- and five-year-olds, though, heavily invested in the universe of Minecraft and aware, in a different way, of what Siri can do for them, are now *spending time improving their enunciation*, because a Siri that understands them and what they’re saying is massively empowering to them. They know the words; they just need to pronounce them right to unlock access to practically infinite knowledge.

So this is the deal: like Hammersley, I’m not even that bothered about screen time for toddlers and children anymore. I’m now skipping further down the timestream into: shit, badly designed conversational interfaces have even more potential to change social norms or provide adaptation pressure. It’s hard enough getting my son to say please and to ask nicely when he’s talking to *me*, and I’d quite like it if there weren’t more things in the world that were indifferent to his manners.

On top of all this – and as a result of chatting about it with Erin McKean over tea and coffee – there’s the whole issue of things like:

– my Xbox 360 with Kinect needed me, at the beginning, to speak in an American English accent (ie: “Xbawks!”) because the system locale was set to America because the number of people with non-majority system locale accents is an edge case, right?
– I seem to remember that if your iPhone locale is set to British, then you get British Siri Idiot Dude, but then British Siri can’t do American Siri things and, maybe, expects you to speak to him in British English?
– How does that work for a Scottish accent?
– Is Siri as good at understanding Spanish-accented American English when the phone locale is set to American, but the language also set to American? What if the language is set to Spanish but the locale is set to American?
– What about people who stutter?
– And that’s without even talking about the fact that Alexa’s voice is by default female *and you don’t even have to be nice to her*

[0] Design your own doll! | Makies
[1] The Adventures of Ad Nerd | Flickr – Photo Sharing!
[2] Possible Problems of Persona Politeness — Ben Hammersley

2.0 Foo Thoughts

A quick list of thoughts after Foo:

– there are some problems that Silicon Valley has, I think, demonstrated that it does not want to solve, or would like you to know that it is not sufficiently incentivised to solve. Not least of which is the fact that *some* leaders in the Valley *do* think that there should be a social safety net, but aren’t expending any combination of money/time/effort on making the social safety net clearer/simpler/faster/more efficient, and are instead working on the problem of making sure you can get a cab within two minutes
– I still don’t understand Bitcoin with a capital B as the currency. The blockchain, sure, totally, but Bitcoin is still batshit crazy. I mean, the idea of actual value accruing from nigh-on worthless processing. And I still don’t see how you get from *here* – where here includes free-to-access, advertising-funded content – to *there*, where there is a place where magical unicorns make a reasonable amount of money using magical unicorn low-transaction-processing-cost micropayments for the work that they do
– I wish there were more people talking about gut flora
– Design isn’t just solving problems, it’s a point of view – thank *fuck* for that, because I’ve got points of view coming out of my ears, down my arms, through my fingers and into this keyboard right here
– a lot more people were interested and wanted to talk about empathy than I thought, which was very, very reassuring and filled me with a bunch of hope
– trying to figure out if there’s actually a sustainable model for agency-style work in product strategy, design and delivery
– coming away with the somewhat unshakeable feeling that if you want things to actually change, then you should damn well go out and change them instead of, say, writing about them in a newsletter. Which was… (and still is, to be perfectly honest) disconcerting.

3.0 The Next Generation

So, Mike Bracken is moving on[0]. There’s a pretty good summation of at least the big parts of how I feel about this over here[1].

[0] Onwards! | Government Digital Service
[1] Thank you, Mike Bracken – Code for America

OK. 10:04pm. More tomorrow. As ever, send notes! And if you liked this newsletter and you think you know other people who might like it, let them know.