s14e12: Biblically Accurate AI, or, Hey ChatGPT, Bury This News For Me
0.0 Context Setting
It’s Monday, January 30, 2023 in Portland, Oregon and it is cold and sunny, which means it is a stereotypically crisp winter morning and there are fine traceries of ice outside.
1.0 Some Things That Caught My Attention
1.1 Biblically Accurate AI
Today’s thing that caught my attention is the confluence of four recent things:
- Russell Davies on ChatGPT, first on what happens if you ask ChatGPT to write a blog post on 10 ways to be interesting [1];
- Then, the more interesting follow-up observation that the opposite of interesting is not boring, it’s ChatGPT [2];
- And lastly Florian Fangohr’s piece on AI [3], through which I also read Oliver Reichenstein’s iA Writer team’s blog post tying ChatGPT to bullshit jobs and thinking [4].
Wait, five. Five recent things. The other one is Simon Willison’s recent experiments with GitHub Copilot [5].
Here is a stupendously simplistic explanation of what ChatGPT and other large language models are: you give them a whole bunch of text, like, a whole bunch, and then you essentially ask them to find whatever patterns are in it, and then, when you give one some text, it continues that text section by section, word by word, choosing the next most probable word each time. It is in some ways like an excitable person with ADHD who’s very into completing your sentences for you.
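If you want that stupendously simplistic explanation made runnable, here’s a toy sketch in Python: a word-level bigram model that just counts which word follows which, then greedily picks the most probable next word. To be loudly clear, this is my illustrative assumption of the loop shape, not how ChatGPT is actually built; the real thing is a neural network operating on tokens that samples from a probability distribution rather than always taking the top word, and the tiny corpus and function names here are invented for the example.

```python
# Toy "choose the next most probable word" loop. Illustrative only:
# a word-level bigram counter, nothing like a real large language model.
from collections import Counter, defaultdict

# A comically tiny "training corpus".
corpus = (
    "the opposite of interesting is not boring it is chatgpt "
    "the opposite of boring is interesting "
    "chatgpt is very into completing your sentences for you"
).split()

# "Training": count which word tends to follow which word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(prompt: str, length: int = 8) -> str:
    """Extend the prompt word by word, always taking the most probable next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # nothing in the corpus ever followed this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the opposite of"))
# -> "the opposite of interesting is not boring it is not boring"
```

Note that because this version always takes the single most probable word, it loops and repeats itself quickly, which is roughly why real models sample with a bit of randomness instead.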
I understand that if you experience this a lot, it might be a little irritating.
Anyway, today’s title, Biblically Accurate AI, is riffing off the Biblically Accurate Angels meme [6] that evolved from angels not looking like your stereotypical white-robed, winged, haloed reply guys and gals, but instead being burning wheels in the sky stuffed full of eyes. The particular neighborhood of subcultures that I inhabit means that the funniest version of this meme for me is Biblically Accurate Clippy [7].
The only connection I’m riffing off here, really, is that a) Clippy is a pseudo-chatbot AI “assistant” that “helps” you “write” “things”, much like people talk about the promise of ChatGPT, and b) the idea that the tons of eyes represent facets of what ChatGPT can do.
Here are some of the things that different people are excited about ChatGPT doing and have used ChatGPT to do, which, while I’ll say they’re all language, are also distinct jobs to be done, and trust me, I don’t like using that JTBD phrase so early in the morning either:
- producing code or code-related text (comments, documentation, tests, actual code), in whatever language, that ranges from “was a simple thing and correct” to “was a simple thing and incorrect”, with everything in between, including “was a complicated thing and not exactly incorrect, but also fundamentally incorrect”
- resumes, cover letters, and performance reviews
- letters to get out of mistakenly issued parking tickets
- “all high school and college essays”
- “getting a solid grade for the MBA course I teach”
- a children’s illustrated storybook
These are many things! Like I said above, language is, um, a tool, and people use it to do many different things in the service of different goals. If you agree that ChatGPT does only as well as its training corpus, or the environment/text it’s exposed to, then it’s going to be better at some things than others, which means some people have silly ideas about what might happen if you train it on procurement and acquisition texts [8].
I suppose what I am saying is this. There are many eyes (ha! Biblically accurate!) to ChatGPT. Are we good at talking about the range of those eyes?
ChatGPT is stunningly good at creating, as Davies noticed, text that’s
so smooth that it’s hard to read properly. We just bounce off it. [2]
Which is making the same point in the same area as the iA Writer team, when they say:
instead of braining up Word memos that no one will read, you can give your computer a couple of approximate orders and the machine processes the text for you [4]
This feels like the slippery-slidey human type of communication that evolved because [handwaves, citation needed, do not read this uncritically] we’re a bunch of chemically driven, embodied, hormonal mammals in a physical world that could destroy us at any moment, plus one where there are rivals and other things that could eat or injure us at any moment. In general, enough of us meat bags (flesh transporters for our brains) apparently need all of this phatic communication to maintain some sort of facade of civil, cooperative society, instead of simply issuing imperatives like the unemotional robots of crap science fiction.
But then sure, why not outsource it all to writing that effectively simulates the sort of phatic communication that makes your eyes glaze over? It’s not that it’s wrong, and it’s not that it’s not valuable, but it certainly isn’t interesting, and it certainly is, I think, predictable.
These large language models would seem fine, I think, if they were sufficiently accurate and trustworthy and so on for non-human communication, like writing code and tests. Close to that, they may even be fine for writing specifications.
But I do not think they are okay for writing and substituting for human-to-human communication, in part because they focus on some combination of expediency and efficiency. If AI-enhanced writing offers you expediency and efficiency, then what are we doing with the gained time? Are we being more intentional? Are we taking more care over the prompts used? Is taking more care over the prompt actually being more intentional? I’d argue not, because your tweaking is still a step removed, and you’re focussed on tweaking rather than, I think, focussing on the end goal and whether that’s the right goal in the first place. But then, maybe all the tweaking and experimentation means you’re able to narrow down your goal area? Perhaps? But then again, maybe you just don’t have the time for that, and it’s on to the next thing.
In a discussion about a startup that offers ChatGPT tools to write performance reviews (performance reviews and school reports already have software assistance that lets writers assemble reports out of a library of phrases), someone pointed out that, please, for the love of god, don’t use this for performance reviews: if you can’t do a good performance review, then don’t do one at all. The counter was that the software requires the report writer to enter bullet points anyway, in which case, honestly, I’m on the complainant’s side: why not just communicate the bullet points? It’s as if, for some reason, it’s the thought that matters, and whether it’s more humane to provide a human-typed list, and it gets worse the further you go down a sliding spectrum of tool assistance that starts at “spelling”, nears “grammar”, and then gets through to “another way of saying this would be”, along with “make this criticism more palatable by”. I mean, some of that advice is good management. But again, shouldn’t we be, um, teaching good management and making sure we manage well, rather than attempting to stick a plaster on the symptom: text generated as a result of bad management?
I suppose the thing that caught my attention was the recognition that the large language model cat is out of the datacenter now and that it’s going to be used. There will be areas where it’s most useful, and areas where it will be most dangerous and least useful.
I would almost say (ha, this finally feels like getting to a good point) that a danger of ChatGPT is that it can be used to hide terrible things behind anodyne, gloss-over language. Beyond use for outright manipulation, beyond influence campaigns and smearing and impersonation (and its usage in an impersonation pipeline that, say, is used to generate audio or video), couldn’t it also be used for hiding Challenger-type presentation data?
Hey ChatGPT. Bury this news for me.
That’s it, it’s Monday. How was your weekend, and how are you doing?
Best,
Dan
1. I am not a robot, Russell Davies, 27 January 2023
2. The opposite of interesting is not boring it’s ChatGPT, Russell Davies, 28 January 2023
3. a.i.: heaven or hell? this isn’t the time to fafo., Florian Fangohr, 26 January 2023
4. The End of Writing, iA Writer, 25 January 2023
5. For example, Writing tests with Copilot, Simon Willison, 14 November 2022
6. Biblically Accurate Angels / Be Not Afraid, Know Your Meme, 2021
7. Biblically Accurate Clippy: original concept by Hagai Palevsky (Twitter, 21 April 2022), coloured and alternate version by Evangeline Gallagher (Twitter, 24 April 2022)
8. By some people, I mean me, of course: “In three months, Microsoft’s Chat-GPT powered office suite becomes the largest supplier of government procurement systems. All federal procurement offices are replaced with Office 365 AI solicitation systems, becoming fully automated. The procurement offices of all 50 states shortly follow.” Me, on Mastodon, 29 January 2023