s19e03: Kill Your Gods
This episode was written on Monday October 14 and finished on Wednesday October 16. I think you can tell, unfortunately, that it was interrupted. I don’t like not being able to finish these things in one go.
It’s Monday October 14 in Portland, Oregon, where it continues to be unseasonably warm. Tomorrow is bin day!
0.1 Events: Hallway Track, and Pulling the Cord
Hallway Track 010: Let’s Fix Government Procurement (in the U.S.) was on Wednesday 16 October, at 10am Pacific, 1pm Eastern.
Our guests were:
- Kathrin Frauscher, co-founder and Deputy Executive Director of the Open Contracting Partnership.
- Ryan Ko, freelance consultant, co-author of this report, principal of RKO Consulting LLC, and former Chief of Staff at Code for America; he works with a portfolio of governments and non-profits to build a more human-centered government.
- Gennie Nguyen, Social Equity Performance Manager, Procurement Services, at the City of Portland.
Here's what the blurb was:
Nobody is happy, and everyone is making do. That’s how a whole bunch of people feel about how procurement works with governments in the U.S.
Procurement is big money (over $4.4 trillion spent every year!). It’s also democracy in action, because procurement is how governments buy the goods and services that are part of our social contract.
It’s also a big mess that could be much, much better. It works – but only just, and despite the best efforts of everyone involved.
Come to this Hallway Track to hear about Bringing a human-centered approach to government procurement technology, a policy memo co-written by Dan, Ryan, and Kathrin about topics like:
- what the authors learned over the past 6 months about how procurement tech works in the U.S.;
- why things are broken and ideas about how the entire situation might be improved;
- the familiar and true refrain that it’s always a people problem, not a technology problem;
- why standards matter, even when that xkcd cartoon is true; and
- group therapy, to be honest.
What’s Hallway Track, if you’ve forgotten or I never told you?
Hallway Track is a series of free, ad-hoc gatherings where we pretend to be in the hallway chatting with each other after a great conference session. It’s for small groups of only 25 people, so it’s not so big that people can’t talk and not so small that there’s dead air; they run for 90 minutes; and they’re not recorded, to encourage free conversation.
If you’re subscribed to this newsletter, you’ll be among the first to know about the next one.
Pulling the Cord, my workshop about diverting government technology projects, is now open for registration:
Pulling the Cord for the Private Sector is now open for registration for Tuesday 29 October, at 10am Pacific, 1pm Eastern. 15 spaces only, at $360 each.
Pulling the Cord for Government and Non-Profits is now open for registration for Friday 1 November, at 10am Pacific, 1pm Eastern. 15 spaces only, at $260 each.
Find out more about Pulling the Cord, the best plain-speaking guide to stopping traditional technology procurement, before it leaves the station.
1.0 Some Things That Caught My Attention
1.1 Kill Your Gods
Six parts to this, which is more than I thought it was going to be. You really can’t shut me up once I get going.
1.1.1 Aurora are pretty
Last week, on Friday 11 October, much of the northern hemisphere was treated to a stunning display of aurora borealis. We’re at the height of a solar maximum, so be prepared to have your social media feeds periodically stuffed full of absolutely gorgeous pride sky ribbons for, potentially, the next nine years?
Anyway, the most important bit is the very pretty pictures of aurora that were posted on Threads, Meta née Facebook’s Absolutely Not Twitter social network.
1.1.2 Meta AI
The second bit of background is that Meta has been gradually rolling out “AI” features across its products and platforms (which, you know, is smart, rather than trying to create a new thing for people to use).
Their generative AI chatbot caught what I’d describe, in my British understatement style, as “a bit of flak” for pretending it had a gifted, disabled child1, and again for summarizing comment sections2, to give just two examples.
Meta also has something for you if you want to generate images3, and soon probably video too4, no matter how many oceans it’ll boil.
1.1.3 We Are So Not Ready For This
Back in August, Sarah Jeong wrote No One’s Ready For This5, a fantastic piece about Magic Editor and Reimagine, the machine learning-powered image editing and image generation built into Google’s latest phone, the Pixel 9. Go read Jeong’s piece; it’s a brilliant summary, catchup, and extrapolation of what’s going to happen with realistic image editing and generating at scale.
Yes, yes, “photographs can be faked” and “photographs have been manipulated ever since they existed” and “have you heard of Photoshop”, but again, the point here is the easy, at-hand ability to create imagery at scale in hardly any time.
Yes, yes, “you can get people to believe things with just a headline and some spammy text, who needs images” and I don’t think anyone would disagree that images would help, too.
Yes, yes, “we had a dis-and-misinformation problem before image generation became widely available”, but see above. Easy, at-hand, quick image generation and editing at scale just makes misinformation so much more compelling.
There’s a reason why advertising is art and copy. Advertising, that thing crafted to persuade you to do something, right?
1.1.4 We Can Remember It For You Wholesale
Shortly after Threads was flooded (in a nice way!) with pictures of aurora from many of my friends and strangers, Meta’s social account posted this:
@meta POV: you missed the northern lights IRL, so you made your own with MetaAI6
Let me angrily break that down for you.
- You did not see a thing
- You wish you had seen that thing
- You want to pretend that you had seen that thing
As a treat, you could probably add:
- You want other people to think that you saw the thing, too
I get it. I mean, I don’t like it, but I understand the behavior from the social team here. Spot a behavior on a network, find a way to tie it to a product or a feature that you’re marketing, and off you go: your mother’s trans man brother’s name is Robert.
Now, one way for a responsible half-trillion-dollar corporation to deal with helping people distinguish generated images from “real” images would be to label them! That way, as platform owner, you get the benefit of all of that sweet new generated content (and another text entry box for people to contribute their darkest, deepest desires to your profile of them) and you get people to keep posting their regular images too!
That might be hard, though, and you might accidentally start labelling real photos as ‘Made by AI’7, which, let’s be clear, was always bound to happen because something will always slip through, but it might also be a bit embarrassing8 when professional photographers have their art and work hit with the label.
That said, you shouldn’t anthropomorphize a half-trillion-dollar corporation, because a corporation doesn’t get embarrassed. What it does is facilitate genocide9, and what you should remember is that it’s a corporation run by people who make decisions.
Of course, if your label is annoying people because it’s false, you can always move the goalposts a bit and change the label’s name to ‘AI Info’10, which would include the use of generative AI tools in editing11.
(Look, I’m not saying it’s a coincidence that Threads and Instagram are both headed by Meta’s Adam Mosseri; I’m just pointing out a fact.)
1.1.5 Literally Don’t Do That
Alex Blechman, in my internet circles12 at least, is famous for this stupendous satire of Californian Ideology technology companies:
Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale.
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create the Torment Nexus13
Which is a much better and catchier version of Neal Stephenson’s Snow Crash Wasn’t A Manual.
People14 have been saying for a long time that image generation alone would be a shitshow if we didn’t slow down and figure out what kinds of tradeoffs we’re prepared to accept as societies that are allowed to regulate themselves.
For example! The CEO of Microsoft was “alarmed” about explicit fakes of Taylor Swift back at the beginning of 202415 and said that societies should have “guardrails” but, you know, there’s nothing to be done, and in the meantime technology marches onward. The arc of the moral universe is long, but it bends towards discovering more profitable products and services!
(Pretty much exactly 7 months later, the former U.S. President would post generated images of Swift supporting his campaign16.)
So now, pretty much 11 months to the day after it was reported that Meta had disbanded its Responsible17 AI team18, you have Meta’s social team, on their own network, encouraging people to produce and post generated imagery of an event they did not personally witness.
I mean, what else are they supposed to do? Not promote their latest products? What else are you going to use generative AI for if not to create images of things that didn’t happen or don’t exist? Wouldn’t people do exactly that to create relevant content [sic] for current events?
I’m not even getting into the fact that the generated aurora imagery looks like shit and worse than what people were capturing with their phones.
Meta literally did the thing. Out loud. But again, what else could we expect?
1.1.6 Kill Your Gods
I wrote about this back in 201819.
The short version: there are a lot of truths the internet industry likes to tell itself to feel good about itself.
One of them is Moore’s law20, which isn’t. A law, that is. It was a rule of thumb: that the number of transistors you could cram into a given area would double about every two years. Even when it was clear that this law wouldn’t hold because physics, a bunch of people still thought that human ingenuity would win out and we’d still figure out ways to cram more transistors into a given area.
We did not.
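If it helps to see the rule of thumb as arithmetic, here’s a toy sketch. The starting figure and the comparison are mine, picked for illustration, not anything from Moore:

```python
# Toy illustration of Moore's rule of thumb: transistor counts
# doubling roughly every two years. Not a law, just compounding.
def moores_rule(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years`, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# 2,300 transistors (roughly the 1971 Intel 4004), projected 30 years out:
print(f"{moores_rule(2_300, 30):,.0f}")  # 75,366,400 -- vs ~42 million in a real 2001 Pentium 4
```

The spooky part is how long the compounding tracked reality before physics called time.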
Another one, the one I’m going to lay into, is Metcalfe’s law. Metcalfe’s law is this:
Metcalfe's law states that the financial value or influence of a telecommunications network is proportional to the square of the number of connected users of the system21 (thanks, Wikipedia)
Another expression I’ve seen of Metcalfe’s law is just that the value of a network is proportional to the square of the number of connected users, not the more specific financial value, because the latter gives the game away.
A Lot Of People Are Saying [citation needed] that AI-edited and/or AI-generated images aren’t that big a deal because, see all the reasons I wrote above.
But, like I said above, the difference is scale.
Metcalfe’s law is all about the scale.
I genuinely don’t see how people don’t understand the difference between possible and at hand for over 2.1 billion people every day. Like, in the pockets of over a billion people. Just like that. That’s not “yeah, anyone can Photoshop something”. That’s different.
And so Metcalfe’s law presupposes and gets excited about the value promised by larger and larger networks. A law (a guess! a prediction! a wish) that encourages networks to get bigger and bigger because yay, financialization. And there’s the lie in the Wikipedia formulation of the law: the greater financial value.
But what about harm?
I mean, at the very least the harm scales linearly with the number of connections?
I wouldn’t believe that harm is necessarily even diluted by an increasing number of users of a network, because while some harms may reduce, if history has taught us anything, it’s that it only takes a small number of especially creative humans to invent new ways of harming other humans. And they don’t even have to be radical inventions! Sometimes all that changes is a tweak to the harm, or the harmful behavior, so that it rides on the utility provided by such a large network.
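If it helps to see the shape of that argument, here’s a toy sketch. The constants are made up by me; only the exponents carry the point:

```python
# Toy comparison: Metcalfe-style "value" grows with the square of users,
# while we charitably assume harm grows only linearly with users.
# The constants k and c are invented; only the growth rates matter.
def metcalfe_value(users: int, k: float = 1e-6) -> float:
    return k * users ** 2  # value proportional to n^2

def charitable_harm(users: int, c: float = 1e-3) -> float:
    return c * users  # harm proportional to n -- the most generous assumption

for n in (10_000, 1_000_000, 2_100_000_000):  # up to roughly Meta scale
    v, h = metcalfe_value(n), charitable_harm(n)
    print(f"n={n:>13,}  value={v:,.0f}  harm={h:,.0f}  value/harm={v / h:,.0f}")
```

Under these made-up numbers the value-to-harm ratio flatters the network as it grows, and that’s the seduction: the harm column is absolute, and it doesn’t stop being harm just because the value column grew faster.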
Metcalfe’s law is like believing in a universe that expands forever, where there’s always more growth to be discovered, to be colonized, to be captured, to be converted, to be inexpertly Mullenwegged.
Scale is a real thing. I don’t think potential harm increases only linearly. I don’t know whether many/most/any people design systems thinking about how harm scales. There are people I know who do, though.
Anyway. Kill your gods. Or at least question them.
OK, that’s it. Over 2,500 words motivated by some anger that is only residual now that it’s Wednesday.
We got a dog. She’s a Good Girl. I’ve been going out for walks. The seasons are turning, autumn is here, the leaves are brown, the ground soggy.
How are you? I’m... doing better.
Best,
Dan
How you can support Things That Caught My Attention
Things That Caught My Attention is a free newsletter, and if you like it and find it useful, please consider becoming a paid supporter.
Let my boss pay!
Do you have an expense account or a training/research materials budget? Let your boss pay: $25/month or $270/year, $35/month or $380/year, or $50/month or $500/year.
Paid supporters get a free copy of Things That Caught My Attention, Volume 1, collecting the best essays from the first 50 episodes, and free subscribers get a 20% discount.
1. Facebook’s AI Told Parents Group It Has a Gifted, Disabled Child (archive.is), Jason Koebler, 404 Media, 17 April 2024 ↩
2. Meta’s AI is summarizing some bizarre Facebook comment sections (archive.is), Emma Roth, The Verge, 31 May 2024 ↩
3. Generate images using Meta AI at www.meta.ai (archive.is), Meta Store ↩
5. No one’s ready for this (archive.is), Sarah Jeong, The Verge, 22 August 2024 ↩
6. @meta • POV: you missed the northern lights IRL, so you made your own with MetaAI (archive.is), “Meta”, Threads, “3 days ago”, or 11 October 2024 ↩
7. Meta is incorrectly marking real photos as ‘Made by AI’ (archive.is), Sheena Vasani, The Verge, 24 June 2024 ↩
8. Instagram Photos Are Being Labeled ‘Made With AI’ When They’re Not (archive.is), Matt Growcoot, PetaPixel, 28 May 2024 ↩
9. Meta in Myanmar (full series) (archive.is), Erin Kissane, Erin Kissane’s small internet website, October 2023 ↩
10. Instagram’s ‘Made with AI’ label swapped out for ‘AI info’ after photographers’ complaints (archive.is), Richard Lawler, The Verge, 1 July 2024 ↩
11. This is What Makes Instagram Flag Your Photo as ‘Made With AI’ (archive.is), Matt Growcoot, PetaPixel, 25 June 2024 ↩
12. RIP Google+ ↩
13. Torment Nexus (archive.is), Alex Blechman, Know Your Meme ↩
14. i.e. all the people who have pointed out the many, many ways in which all of this could fuck people up, and then, when it has fucked people up, cited those exact examples of how and when and where people were fucked up, and then suggested ways to mitigate, or even said “hang on, it might not be a good idea to do this” ↩
15. Satya Nadella says explicit Taylor Swift AI fakes are ‘alarming and terrible’ (archive.is), Adi Robertson, The Verge, 26 January 2024 ↩
16. Trump posts fake AI images of Taylor Swift and Swifties, falsely suggesting he has the singer’s support (archive.is), Elizabeth Wagmeister and Kate Sullivan, CNN, 28 August 2024 ↩
17. “Responsible” here doing the kind of stellar work that “less-lethal” and “non-lethal” do when describing weapons ↩
18. Meta disbanded its Responsible AI team (archive.is), Wes Davis, The Verge, 18 November 2023 ↩
19. No one’s coming. It’s up to us. (archive.is), Me, Medium, 9 February 2018 ↩