s17e09: What does it take to trust?; The Young Lady’s Illustrated Tour Guide
0.0 Context Setting
Monday, 12 February 2024 in Portland, Oregon. I started writing this around midday, but didn’t get around to finishing it until after dinner.
0.1 Hallway Track News
No Hallway Track news yet, but fingers crossed I should have an announcement for Hallway Track 010 this Thursday.
Last week’s Hallway Track (009 Infrastructure and Systems with Deb Chachra and Georgina Voss) went down a treat.
Here’s what Carl Coryell-Martin had to say:
I attended the Hallway Track on Infrastructure and Systems this morning and now I can’t stop thinking of cars as powered exoskeletons for racism.
Which from my point of view is a pretty great outcome!
0.2 Specialized Generalist Tool for Hire
Hey! I’m available for work. First, if you already know what I do and have something that I can help with, then let’s have a chat.
Second, if you don’t know what I do for people then here’s three ways I’ve helped out teams recently:
- Helped prevent / intervened in a slow-moving IT project disaster. Especially in government.
- Set useful, clear and actionable goals for a team’s product strategy.
- Helped teams geared for steady, small, iterative improvements discover new product directions and opportunities.
1.0 Some Things That Caught My Attention
1.1 What does it take to trust?
I’ve thought out loud before about what would be needed for a useful, intelligent assistant, most recently back in s17e04[1]. That time it was about how the Rabbit R1 hardware-enclosed LLM assistant had captured people’s attention, and whether the demoware showed actually-useful, real-world examples.
Anyway.
Google released its Gemini assistant, and I will not make fun of Google’s issues in Naming Things, which to be fair is one of the Two Hard Things in Computer Science[2].
One: Allison Johnson’s review of Gemini in The Verge was super interesting and caught my attention right from the headline, never mind further down:
Google’s Gemini assistant is a fantastic and frustrating glimpse of the AI future / It’s useful, but it’s also thoroughly Google[3]
You should read the whole thing. In the meantime, here’s the bullet points that stuck out for me:
- “sometimes the emotional labor of opening another app on my phone and typing in some text is just too much” -- because I’ve written before that having to take information from one app and type it into another is a symptom of business models acting as a design constraint. Apps, and the companies that make them, are incentivized not to share, despite the purported promise of aging Web 2.0 to API-fy everything.
- “Gemini feels like a preview of what that AI future could look like — provided you’re well entrenched in Google services” -- which is pretty much endorsement of my position that you’d need a whole bunch of correlated contextual information to do useful things. Look:
- “Gemini isn’t nearly as good of a conversationalist as ChatGPT, but its ability to hook into Gmail, Google Maps, and Google Docs[4] is what makes it really interesting... ... It really doesn’t sound like much, but it’s the first time I’ve been really impressed with AI as a tool to help me get things done” -- which, again, just shows that provided you can get at all that cross-application siloed information, things get really interesting (and useful!)
I wrote before that you don’t want an intelligent assistant[1], but perhaps the more interesting and provocative question is: under what conditions would you want a useful intelligent assistant? When would you be happy to have one -- and by happy, I mean “would choose to, willingly” rather than “because I’ve been kind of forced into making a less-than-compelling bargain with few-to-no choices”. Here’s what I thought at the time:
Now I assume there are people out there who’ll throw their hands up and declare a sort of “fuck it, you all know everything about me anyway, it’s not like there’s any use fighting anymore” and that the utility of the purported intelligent assistant will outweigh any qualms about any further abuse, invasion, or third party breach of privacy. But what I’m saying is that the need for more contextual information is asymptotic: more information will always be better, so there will always be requests for more information if, for example, we are lazy and want to utter some magic words and have brooms sweep everything up for us.[1]
The deal is that the useful intelligent assistant -- like the useful valet or executive assistant or secretary -- would know so much about you that they’d know intensely private things about you. Perhaps even things that you would wish to keep secret from the closest people in your life. Science fiction and fantasy are littered [citation needed, or just take it as read] with examples of such close familiars who know you better than you know yourself.
What might that look like? Bear in mind that some of these requirements are more like a wishlist, on the “well, that would be nice, but it would be impossible or unrealistic” end of things. It would need to be (an unexhaustive, thinking-out-loud list):
- “discreet”;
- verifiably unable to exfiltrate information to third parties;
- be able to do what I mean, not what I say;
- to the extent possible, immune to being compelled to release private information (other than, for example, via a court order);
- a bit like that long-established law firm you might entrust secrets to after your death, the one where the kindly old man in the hat hand-delivers the sealed envelope to whomever needs it based on your instructions;
- come to think of it, actually protected and required in law to act in your interests, almost as if it’s an agent with defined and enforced responsibility toward you, the principal, in the legal sense, and not just the software sense;
- be empowered to act just like you, in other words: able to read and take in the information that you are able to take in, as if it were you;
- “not able to make mistakes”; and
- undoubtedly more.
You’d want all this, to the extent possible, to be verifiable. You might delegate trust in these attributes to a third party, because you’re a regular person who isn’t going to audit code or hardware. And you’d need to be reasonable and understand that even entities that aren’t tech companies you have untrustworthy vibes about are legally required to release information about you under certain circumstances. (See the bit above about these being unicorn wishes.)
But the point I’m making here is that these unicorn wishes come from a place of present (and valid, in many cases!) distrust.
You wouldn’t want it to make mistakes, but humans make mistakes all the time. It might even make fewer mistakes than a human, but this again is straying into “self-driving cars are safer than humans driving cars because humans get tired and make mistakes all the time” territory[5].
My point here?
Well, one is the obvious one, which is that from a business point of view, Google’s going to make the case that the best useful assistant is one where as much of your life as possible is stored in Google systems. The more information, the more useful it is, and Google can assure you that its approach to cross-application/domain privacy and security is better than reaching out to non-Google sources because, well, integration. And they wouldn’t be wrong!
Apple would make the same case. In fact, anyone would. Having everything in one place would totally be “easier”, in the sense that anything is “easy” when developing software and services, especially ones that preserve privacy.
Maybe you want the assistant to be able to forget. Maybe you want it -- unlike Alfred, say, or Jarvis, or Samantha -- to have access to information in a domain based on criteria like “for the next day, you can totally read my medical records”, and then to know that that access has been revoked.
(The problem here is that it’s easier and less hassle to just default to access all the time. So what kind of defaults would you set?)
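To make that concrete, here’s a minimal sketch, in Python, of what a time-limited, domain-scoped grant might look like, defaulting to deny. Everything here -- the AccessGrant class, the domain names, the can_access check -- is hypothetical, an illustration of the “access lapses and you can verify that it lapsed” idea, not any real assistant’s API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AccessGrant:
    """A hypothetical, explicit, time-boxed grant to one domain of your life."""
    domain: str  # e.g. "medical_records", "email", "location"
    granted_at: datetime
    expires_at: datetime

    def is_active(self, now: datetime) -> bool:
        return self.granted_at <= now < self.expires_at

def can_access(grants: list[AccessGrant], domain: str, now: datetime) -> bool:
    """Default deny: access exists only while an unexpired grant covers the domain."""
    return any(g.domain == domain and g.is_active(now) for g in grants)

# "For the next day, you can totally read my medical records."
now = datetime.now(timezone.utc)
grants = [AccessGrant("medical_records", now, now + timedelta(days=1))]

print(can_access(grants, "medical_records", now))                      # True
print(can_access(grants, "medical_records", now + timedelta(days=2)))  # False: lapsed
print(can_access(grants, "email", now))                                # False: never granted
```

The default-deny shape is the whole point -- and also exactly the problem, because always-on access is less hassle to build, which is why it tends to be the default you actually get.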
Both Google and Apple would say, here, that their permissions model helps super lots. And again, they wouldn’t be wrong.
One company that strikes me as somewhat fucked in this situation is Microsoft, which, having completely missed the boat on being a major player in the “owns the operating system on the single device that’s with you all the time and knows the most about you, as well as all the services involved” area, just... doesn’t have that access. It would need to rely on permissions and APIs that OS/device vendors would prefer, and attempt, to keep private (for business reasons!) absent regulation requiring them to level the playing field. In fact, we see this already in Apple’s complete and utter tantrum about fulfilling the requirements of the EU’s DMA, with its press release being very, very specific about the perceived privacy and security risks created by the Digital Markets Act[6].
What would it take for you to trust such a useful agent with all that information about you? What’s the minimum that it would take? And, in some sad sense, does it even matter, given the current state of trust around security and privacy in online services? At this point, are people just inured to not being able to trust privacy and security measures? I joke that I now must have nearly a decade’s worth of credit monitoring thanks to what feel like monthly breaches.
Perhaps one way of thinking about this is through a risk model. What might those risks be?
- secrets you really don’t want people to know, ones that are purely informational (I don’t know, like ones you could be blackmailed over);
- secrets that you wouldn’t expect other people to know -- like your travel plans, which in the worst case might compromise your safety;
- secrets that put your bodily autonomy at risk (thanks to the horrific attack on reproductive rights in the U.S.); and
- secrets that allow people to do things that only you should be able to do (passwords, things required to prove your identity).
In other words, that perennial matrix: probability of an event crossed with its seriousness/impact. High probability, low impact? What about low probability but stupendously high impact, like going to jail for having an abortion?
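For illustration, here’s that probability-crossed-with-impact idea as a tiny Python sketch over the four categories above. Every number is an invented placeholder, not a real estimate; the point is the shape of the arithmetic, where a low-probability event with catastrophic impact can still top the ranking:

```python
# Hypothetical risk scoring: probability of disclosure crossed with impact.
# All numbers are invented placeholders, purely for illustration.
secrets = {
    "blackmailable information":   {"probability": 0.30, "impact": 4},
    "travel plans / whereabouts":  {"probability": 0.50, "impact": 3},
    "bodily-autonomy data":        {"probability": 0.05, "impact": 10},
    "passwords / identity proofs": {"probability": 0.20, "impact": 8},
}

# Rank by expected impact (probability * impact, impact on a made-up 1-10 scale).
for name, r in sorted(secrets.items(),
                      key=lambda kv: kv[1]["probability"] * kv[1]["impact"],
                      reverse=True):
    print(f"{name}: {r['probability'] * r['impact']:.2f}")
```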
Connect Apple Health to your EHR using something like Epic, and you’re potentially there already, regardless of whether it’s Siri or whatever LLM-powered replacement comes about this year.
(Never mind the fact that the risk is already present in the abortion case, thanks to reporting requirements before anything even hits your electronic health record.)
What would the UI/UX look like for such a service? What would defaults look like? I can see the Terms and Conditions for Google Gemini and... they’re not bad, but they’re also terms and conditions for an online service, which is to say “not something people will read”. The default for Gemini, for example, is to store your information (Gemini Apps Activity) for 18 months, and you can change it to 3 or 36 months. Plus, this notice:
Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies
1.2 The Young Lady’s Illustrated Tour Guide
One other thing caught my attention in this space. I’ve now seen a number of stories of people using Gemini/Bard/whatever while traveling, with the observation that it’s been very useful to them. More useful as a tour guide, I think, than as a general-purpose assistant.
The nearest example from science fiction for this type of agent is often The Young Lady’s Illustrated Primer, from Neal Stephenson’s The Diamond Age[7]. Every so often (or more often, really), people try to recreate the Illustrated Primer, the general-purpose education tool whose two sides the novel shows: the benefit of persistent, long-term human guidance, versus its utility as a massively reproduced, cheap tool for the masses. The other thing that crops up is that the Kindle’s early development codename at Amazon was Fiona.
I digress. Perhaps the Illustrated Tour Guide is a closer, more achievable goal than the Illustrated Primer’s replace-the-entire-education-system goal.
Okay, that’s it for today! How have you been?
Best,
Dan
How you can support Things That Caught My Attention
Things That Caught My Attention is a free newsletter, and if you like it and find it useful, please consider becoming a paid supporter.
Let my boss pay!
Do you have an expense account or a training/research materials budget? Let your boss pay: $25/month or $270/year, $35/month or $380/year, or $50/month or $500/year.
Paid supporters get a free copy of Things That Caught My Attention, Volume 1, collecting the best essays from the first 50 episodes, and free subscribers get a 20% discount.
1. s17e04: You don’t want an intelligent assistant; Protocols, Not Platforms (archive.is), me, 11 January 2024 ↩
2. The Two Hard Things in Computer Science are: naming things, cache invalidation, and off-by-one errors. ↩
3. Google’s Gemini assistant is fantastic and frustrating - The Verge (archive.is), Allison Johnson, 9 February 2024, The Verge ↩
4. Google’s Bard chatbot can now find answers in your Gmail, Docs, Drive - The Verge (archive.is), Emma Roth, 19 September 2023, The Verge ↩
5. “Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him, and it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up. In an insane world, it was the sanest choice.” -- in which Sarah Connor, of all people, is totally in favor of self-driving cars (Terminator 2, James Cameron, 1991). ↩
6. Apple announces changes to iOS, Safari, and the App Store in the European Union - Apple (archive.is), Apple, 25 January 2024 ↩
7. Weirdly (or not, I suppose), a novel that gets much less attention than its sibling Snow Crash, perhaps because, for one, The Diamond Age was not written as a graphic novel. ↩