s2e30: Politeness 

by danhon

0.0 Sitrep

1:43pm sitting in a New Seasons cafe with lunch, ostensibly taking a break before the 2pm California Child Welfare Open Vendor Forum (yeah, it’s a thing), but at the same time, probably pretty wired on caffeine and Adderall. And yes, I know I’m not supposed to be drinking caffeine *with* Adderall.

1.0 Politeness

OK, so more on robots and politeness, in particular thanks to notes from Kim Plowright[1], James Aylett[2] and others. Thank you!

The first, which I think is a much better and neater description of what I was going for last episode[3], came from Plowright: “Emily Post[4] for Robots”. All of which makes me remember: a) Emily Saunders[5] from Douglas Adams’ Mostly Harmless, which I happened to read recently, and thus b) Colin[6].

The point here, of course, is the Etiquette of Robots and how they should integrate into a world where humans navigate and manage relationships with each other the whole time. It’s one thing to be artificially *intelligent* (yes, it’s great that you can recognize and understand Mandarin better than a human[7]); it’s quite another to apply that intelligence to modeling relationships and interactions with a whole bunch of mostly watery fleshbags.

I’ve written about this before in terms of how social interactions with agents/characters/personalities/services like Siri, Google and Alexa are *designed*, because we have to make decisions about how we want people to interact with them, but more concretely, how we want to *treat* those agents. They might be dumb now, but…

Let me just say this: what does a finishing school[8] for digital services look like? I mean, sure, we got the Ladette to Lady[9] TV series, but I guess I have to accept that the potential viewing audience for finishing schools for digital services wouldn’t be that big… (But it would be dedicated.)

Aylett’s point was around the distinction between spider-driven, server-side bots and what could potentially be done in-browser. Which, yes, is more a distinction about *where* something happens than about *what* is happening. The big bit of insight that came from Aylett, for me, was this:

“what are we going to call the *next* generation of still-not-actually-understanding AI? Deeper learning? At what point does the naming become an accidental subject of Douglas Adams’ parody while simultaneously having been inspired by *the same parody*?”

and a similar prompt from Robin Sloan, who was tickled by my phrase:

“do we need robot tags for “index this, but don’t understand it”?”
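For context, the robots meta tags we already have express the first half of that sentence: “index this” and “don’t index this” are real, crawler-honoured directives. The second half doesn’t exist; the `nounderstand` value below is invented for illustration, and no crawler would honour it.

```
<!-- Real, widely supported robots directives: -->
<meta name="robots" content="noindex, nofollow">

<!-- Hypothetical, invented for this sketch -- "you may index this page,
     but please don't build a model of what it means": -->
<meta name="robots" content="index, nounderstand">
```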

So, I’m kind of reminded again of the stupendously smart Jones/BERGian phrase “Be as smart as a puppy”, because now it reads more like “be as smart as a puppy in terms of the things it does that don’t *seem* smart, but are actually hidden smart things, the kind of things that are intelligent without looking like thinking.” Again, I’m not the first person to have thought about this.

I think, in a way, that the science-fiction trope of consciousness levels (e.g. “alpha-level A.I.”, “non-sentient but smart A.I.”, “beta-level simulation” and all that guff) is on the cusp of becoming a quite real distinction between bots or services or software that at least:

a) just “index” things without “understanding” them, and create things like PageRank relationships between them;

b) do “understand” things (i.e. this photograph has lots of dogs in it, this photograph has lots of humans in it who are happy, this person posts photographs that have more happy humans in them than unhappy humans);

and the ones that don’t exist yet:

c) ones that will have an “opinion” about (b) and (a).
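The gap between (a) and (b) can be sketched in code. This is a toy, with all the function names and data made up for illustration: an “indexer” that only records relationships between documents without modeling their content, next to an “understander” stand-in that attaches labels to content (in a real system that second function would be a trained classifier, not a metadata lookup).

```python
# Toy illustration of levels (a) and (b): structure-only indexing
# versus content "understanding". All names here are invented.

from collections import defaultdict

def index_links(pages):
    """Level (a): record who links to whom -- relationships, no comprehension."""
    inbound = defaultdict(set)
    for url, page in pages.items():
        for target in page["links"]:
            inbound[target].add(url)
    return inbound

def label_photo(photo):
    """Level (b): a stand-in for a model that 'understands' a photo's content."""
    # A real system would run a classifier; here we just read toy metadata.
    return {
        "dogs": photo.get("dogs", 0) > 0,
        "happy_humans": photo.get("happy", 0) > photo.get("unhappy", 0),
    }

pages = {
    "a.example": {"links": ["b.example"]},
    "b.example": {"links": ["a.example", "c.example"]},
    "c.example": {"links": []},
}
print(index_links(pages)["c.example"])  # which pages point at c.example
print(label_photo({"dogs": 3, "happy": 2, "unhappy": 1}))
```

Level (c), the one with an “opinion”, would be something that consumes the outputs of both of these and ranks or judges them, and that part is left as exactly the speculation it is above.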