s16e19: Dark; Forward-Looking Barn Door Statements
0.0 Context Setting
Friday, 3 November, 2023 in Portland, Oregon, where it is cold but I suspect that's because I'm in the basement and haven't put socks on.
A very short one today.
0.1 Hallway Track 005: The New Luddites: Seizing the Means of Computation
You can now register for Hallway Track 005: The New Luddites: Seizing the Means of Computation.
You know the drill now: a small chat, free on Zoom, for about an hour and a half, where we pretend we've just come out of Brian and Cory's panel.
1.0 Some Things That Caught My Attention
1.1 Dark
Here's another way of thinking about government service delivery:
Dark patterns in the private sector are pretty familiar by now, one regular example being the "make it hard to unsubscribe" pattern.
(Dark patterns are also known as deceptive patterns1, now. Hi, Harry!)
For those in government, it might be interesting or helpful to ask what dark patterns exist in service delivery:
Where might what's thought of as a dark pattern exist in a process like determining eligibility?
What would it mean to call user-hostile patterns in eligibility determination or claim application dark patterns, given what people understand dark patterns to be?
If a leader or program manager understands a dark pattern as "when Amazon tries to stop me from doing something that's in my interest, but not in Amazon's interest", what might look like that or feel like that in delivery of a public sector service?
1.2 Forward-Looking Barn Door Statements
First, I have not read the Biden Administration's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence2.
What I have read, ish, are some of the reactions to the executive order, namely from Ben Stratechery3 and Steven Microsoft4.
(I should mention that I really enjoy Sinofsky's writing, and this is one of the first times his point of view has been challenging to me; I suspect that's because it's less about the practice of delivering software and more about, well, public policy.)
Here's what jumped out at me about the criticism of the order from these two very influential people in technology:
The first is that they appear to be bristling at the mere thought of government regulating "technology" in the first place, so we're just at the ideological position of the freedom to innovate, yadda yadda yadda. Sinofsky talks about the need to, essentially, do things first and then tech your way out of it, a sort of "tech the tech". I think it's clear now that there's increasing understanding in more areas of society that maybe we should improve things somewhat from the existing approach.
Just the concept of regulation of a nascent technology, and one that's acknowledged to be as hazy and undefined as these new methods of automation (I remain hesitant to call them AI, but it's not as if terminology has been that effective at influencing positions at this stage), is, from their position, hostile. Anti-competitive. Anti-innovation.
But it strikes me that many of the regulations and reasons cited for regulation -- at least, those pointed out by Thompson and Sinofsky -- are aimed at minimizing existing demonstrated harms. So while yes they're being applied to "AI", they're also a reflection as much of a previous reluctance -- in the U.S., at least -- to regulate "tech" at all. That there isn't, for example, Federal privacy legislation.
That's the Barn Door bit, for me. If I were being opportunistic in the policy space, I would absolutely be using a perceived crisis in AI and the political will there to also include, in the best way possible, regulation that should have existed beforehand.
The best time to regulate was in the past; the next best time is now.
I really think the arguments for a laissez-faire approach -- that we can't predict the benefits, and that we can rely on tech to tech our way out of this -- don't hold up as well as they used to in our current government and economic environment.
What's also weird -- and again, what put me off about Sinofsky this time -- is the bleeding-through of the Californian Ideology. That the government is, shock-horror, deigning to regulate what the people are able to do, when, look, it's the government that is the duly elected representative of the people. That they're not doing what a bunch of innovation-focused, tech-will-tech-our-way-out ideologists want is absolutely true! And that's okay, that's the way the system is supposed to work! A public policy from a democratically elected5 government is still valid, even when you disagree with it.
OK, that's it for today! Told you it was going to be a short one.
How are you? How's your week been?
Best,
Dan
How you can support Things That Caught My Attention
Things That Caught My Attention is a free newsletter, and if you like it and find it useful, please consider becoming a paid supporter.
Let my boss pay!
Do you have an expense account or a training/research materials budget? Let your boss pay: $25/month or $270/year, $35/month or $380/year, or $50/month or $500/year.
Paid supporters get a free copy of Things That Caught My Attention, Volume 1, collecting the best essays from the first 50 episodes, and free subscribers get a 20% discount.
- FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence | The White House (archive.is) ↩
- Attenuating Innovation (AI) – Stratechery by Ben Thompson (archive.is), Ben Thompson, 1 November, 2023, Stratechery ↩
- Regulating AI by Executive Order is the Real AI Risk (archive.is), Steven Sinofsky, 1 November, 2023, Hardcore Software ↩
- yes, I know, "democratically elected" is a load-bearing phrase here ↩