s17e12: Justice in Forensic Algorithms; The One With Retroactive Bereavement Fare Claims; Bits and Pieces
0.0 Context Setting
Friday, 16 February, 2024 in Portland, Oregon and it is cold.
I wanted to make this one short and I could not. I am sorry. Believe me, for this one especially, I am just as disappointed as you, but probably not very surprised.
1.0 Some Things That Caught My Attention
Going to try to make today a quicker, shorter one.
1.1 Justice in Forensic Algorithms
A couple of Democrats in the U.S. House of Representatives have this bill they’re trying to pass, the Justice in Forensic Algorithms Act[1][2][3]:
Reps. Mark Takano (D-CA) and Dwight Evans (D-PA) reintroduced the Justice in Forensic Algorithms Act on Thursday, which would allow defendants to access the source code of software used to analyze evidence in their criminal proceedings. It would also require the National Institute of Standards and Technology (NIST) to create testing standards for forensic algorithms, which software used by federal enforcers would need to meet. [2]
This is good! I have no idea if it will pass. It brings to mind the Post Office Horizon mess in the UK, where innocent people were accused of fraud, at first due to a faulty accounting system, and ultimately due to callous and reprehensible management.
Some notes:
NIST gets to set standards! That’s... also good! I hope NIST gets the resources it needs.
This is just for criminal cases, not for civil cases.
And: honestly this feels like the sort of slow-coming but eventually-arriving societal reckoning to do with how integral “algorithms” are in today’s societies. It would be much better if requirements like these had been dealt with earlier and we were able to avoid countless injustices. But this is also the way change is supposed to happen, with democratically (ha) elected (ha) representatives (ha) introducing legislation and (ha) getting it enacted (ha) and put in place by a functional (ha) government.
If I were a law student again, or, well, a law anything, I think another question I’d be asking would be this: we’re taught that law should move slowly and deliberately, and that law made in haste is invariably (we wouldn’t say always) bad law. I’m old, so my example from student days is something like the Dangerous Dogs Act or whatever in England and Wales.
Anyway. Yes, I get it. Don’t make reactionary law. Don’t make reactionary law when a thing like a kid being mauled or killed by a dog has happened, in the heat of the moment. Got it. Understand the reasons there.
But what about don’t make law when technologies are being introduced and spread? It’s all about thresholds. How long should you wait? How much harm are you prepared to witness (and then, remember, you should provide restitution, if that’s even possible)? Actually, never mind witness. How much harm are you willing to enact before you have a regulatory framework or law that makes sense to deal with the situation?
This is, I suppose, just another way of saying “ugh, governments take too long to react”, which is not a new observation at all. The observation that “things happen more and faster now” is not new either; in many cases it’s seen as a positive and something to be encouraged. That if the rate of increase itself isn’t increasing, that if the line going up and to the right isn’t going upper more and righter more, then things aren’t good and we’ll descend into some sort of primitive (ha) chaos.
So if I were a government that were serious about making decisions at some point, and willing to say that that point might change, then it’s more about understanding the threshold. Sure, I get it. Many thresholds depend on context. I have to admit that my brain was all “yeah no they don’t, water boils at...” before I remembered: hahahaha no it doesn’t, that’s only at standard pressure! Anyway.
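(A quick aside, to make the threshold point concrete. Here’s a tiny sketch using the Antoine equation, an empirical vapor-pressure approximation; the constants are the standard published ones for water between roughly 1°C and 100°C, and the Denver and Everest pressures are my ballpark figures, so treat the output as illustrative.)

```python
# A threshold "everyone knows" (water boils at 100°C) moves with context.
# Antoine equation for water: log10(P) = A - B / (C + T), P in mmHg, T in °C.
import math

A, B, C = 8.07131, 1730.63, 233.426  # published constants, valid ~1-100°C

def boiling_point_c(pressure_mmhg: float) -> float:
    """Temperature at which water's vapor pressure equals ambient pressure."""
    return B / (A - math.log10(pressure_mmhg)) - C

print(f"Sea level, 760 mmHg: {boiling_point_c(760):.1f}°C")  # ~100.0°C
print(f"Denver,   ~630 mmHg: {boiling_point_c(630):.1f}°C")  # ~94.8°C
print(f"Everest,  ~250 mmHg: {boiling_point_c(250):.1f}°C")  # ~71.6°C
```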
You’d want to count.
How many people are harmed.
How many miscarriages of justice.
You’d want to be data-informed.
But! It’s not like that would make much of a difference. It’s not like knowing how many people are killed by cops has forced any major, substantive policy change in the U.S.
But. We should know. And I like to believe that it’d be at the least just that little bit easier to make changes with data.
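(If you wanted to be annoyingly literal about it, the counting part is trivial; it’s everything around it that’s hard. A purely illustrative sketch, where every harm category, number, and threshold is invented by me and not taken from any real bill or policy:)

```python
# Purely illustrative: a data-informed "enough harm has happened" trigger.
# Every category, count, and threshold below is invented for the example.
from dataclasses import dataclass

@dataclass
class HarmTally:
    wrongful_convictions: int = 0
    miscarriages_of_justice: int = 0
    people_harmed: int = 0

# Hypothetical thresholds a legislature might set (and revisit over time,
# because thresholds depend on context; see above).
THRESHOLDS = {
    "wrongful_convictions": 1,  # arguably one is already too many
    "miscarriages_of_justice": 10,
    "people_harmed": 100,
}

def regulation_warranted(tally: HarmTally) -> bool:
    """True once any counted harm crosses its threshold."""
    return any(
        getattr(tally, field) >= limit for field, limit in THRESHOLDS.items()
    )

print(regulation_warranted(HarmTally(wrongful_convictions=1)))  # True
```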
My point here is that for these two representatives, “enough harm has happened” for them to, for whatever reason, use their political power and capital to introduce this legislation.
This also appears to be another reason to re-read Seeing Like A State.
1.2 The One With Retroactive Bereavement Fare Claims
Moffatt v. Air Canada, 2024 BCCRT 149[4] is probably not a case you’ve heard of, but if I say it’s The One With The Airline Pretending It’s Not Liable For What Its Chatbot Said then you might already know what I’m talking about[5].
Short version: grandchild used a chatbot on Air Canada’s site to find out about bereavement fares and was told that a claim for the reduced fare could be made within 90 days of travel. Grandchild -- sneaky, been-here-before grandchild -- took a screenshot of that assurance. Air Canada, unsurprisingly, said “nah”.
It’s a short judgment, only 47 paragraphs in total[4]; you can totally read it.
Ugh, today is just going to make me relive writing essays at university.
Here’s the good bits, which is mainly me doing a précis (sorry: organic, sentient LLM summarization) of paras 24-32 of the judgment:
The judge helpfully assumes that the claimant is alleging negligent misrepresentation: that Air Canada did not exercise reasonable care to ensure its representations were accurate and not misleading.
First, the judge finds that Air Canada totally owed Moffatt a duty of care: they had a commercial relationship as a service provider and consumer. That means Air Canada’s duty would be to take reasonable care to ensure its representations were accurate and not misleading.
The best best bestest bit is when the judge does that thing in the best of the English legal traditions of very politely saying “are you fucking nuts”.
Because Air Canada says, essentially: “nah, we’re not liable, because we’re not liable for any information provided by one of our agents, servants, or representatives which btw includes that chatbot.”
The judge summarizes this position as Air Canada suggesting “the chatbot is a separate legal entity that is responsible for its own actions.”
Judge says: “This is a remarkable submission,” which, like I said above, is the very polite way of saying “are you fucking nuts”.
This judge isn’t having any of it. He says:
- yeah, a chatbot’s interactive, but it’s just a part of Air Canada’s website;
- “It should be obvious to Air Canada that it is responsible for all the information on its website”; and
- “It makes no difference whether the information comes from a static page or a chatbot”
I love it.
The judge also addresses the inconsistency: the chatbot also pointed to the “Bereavement travel” page, which didn’t say anything about the post-travel claim policy, but Air Canada didn’t explain why the claimant should’ve trusted the webpage over the chatbot. Oops, Air Canada.
Like, I understand the point here: Air Canada is acting as if the chatbot were, say, a third-party travel agent selling an Air Canada ticket, and that third-party agent told Moffatt about an inaccurate bereavement policy. Sure, that works; Air Canada wouldn’t be liable for that. But the argument Air Canada were actually making (weirdly? I mean, I don’t think they had good lawyers here?) was that the chatbot, which was on their own site, was “someone over there making shit up, and we can’t be liable for what that bot is saying, don’t know who they are”.
Some notes and questions!
- Jurisdiction etc aside, how is this different from that time the Chevy Dealership incorporated an OpenAI ChatGPT-powered bot on its site that was tricked into making offers?
- Did this fail purely because Air Canada didn’t argue which provisions should prevail in case of conflicting information?
- What’s reasonable effort in terms of ensuring that the representations an entity makes are consistent and non-contradictory? Are there different thresholds of reasonableness between, say, static content and “interactive” content, like a bot? (There’s a crude sketch of what this might look like after this list.)
- Can a company reasonably be required to make reasonable efforts to verify its generative representations?
- If a company is required to make reasonable efforts to verify generative representations, should it just try to outright disclaim all of them?
- For those disclaimers to be effective, how visible should they be? Would it be okay for them to be hidden in the site’s general TOS, or should they be brought to the fore?
- How is this different from disclaiming liability from call center staff?
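(On the consistency question above, here’s a minimal sketch of what “reasonable effort” might look like mechanically: have the bot quote policy from one canonical store instead of generating it, and fail closed when it can’t. Everything here, the policy key, the wording, the fallback, is hypothetical, and a real system would be vastly more involved.)

```python
# Hypothetical sketch: gate a bot's policy claims behind one canonical source,
# so the bot and the static "Bereavement travel" page can't contradict each other.
CANONICAL_POLICY = {
    # The single source of truth both the webpage and the bot render from.
    "bereavement_retroactive_claims": "Not permitted after travel is completed.",
}

def render_policy_claim(topic: str) -> str:
    """Only state policy by quoting the canonical record, never by generating it."""
    try:
        return CANONICAL_POLICY[topic]
    except KeyError:
        # Fail closed: better to hand off to a human than to improvise policy.
        return "I can't confirm that policy; please contact an agent."

print(render_policy_claim("bereavement_retroactive_claims"))
print(render_policy_claim("unlisted_topic"))
```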
I really did not wake up thinking today’s episode was going to be The One With The Interesting And Traumatic Flashback To University.
1.3 Bits and pieces
- I read through all the documentation I could find on Flamework, the Flickr-style PHP application framework. Caught my attention because the blog Building Slack[6], about, uh, building Slack, just launched.
- Nemawashi, the “Japanese business informal process of quietly laying the foundation for some proposed change or project by talking to the people concerned and gathering support and feedback before a formal announcement”, which I am mortally offended to have only learned about two days ago, not least because it describes exactly what I do, when honestly, I thought that was just me. This was... incredibly validating? And, as I gratefully mentioned to the people who brought it to my attention, possibly the most consequential bit of professional development for me this entire year.
Phooooey. But hey, it’s Friday!
Thank you for the notes that said “hi!” from the people who sent notes that said “hi!”
I like getting notes, even when they just say “hi!”
How are you?
Best,
Dan
How you can support Things That Caught My Attention
Things That Caught My Attention is a free newsletter, and if you like it and find it useful, please consider becoming a paid supporter.
Let my boss pay!
Do you have an expense account or a training/research materials budget? Let your boss pay: $25/month or $270/year, $35/month or $380/year, or $50/month or $500/year.
Paid supporters get a free copy of Things That Caught My Attention, Volume 1, collecting the best essays from the first 50 episodes, and free subscribers get a 20% discount.
1. Text - H.R.7394 - 118th Congress (2023-2024): To prohibit the use of trade secrets privileges to prevent defense access to evidence in criminal proceedings, provide for the establishment of Computational Forensic Algorithm Testing Standards and a Computational Forensic Algorithm Testing Program, and for other purposes. | Congress.gov | Library of Congress (archive.is)
2. New bill would let defendants inspect algorithms used against them in court - The Verge (archive.is), Lauren Feiner, 15 February 2024, The Verge
3. Black Box Algorithms’ Use in Criminal Justice System Tackled by Bill Reintroduced by Reps. Takano and Evans | U.S. Congressman Mark Takano of California's 39th District (archive.is), 15 February 2024, Office of Mark Takano
4. 2024 BCCRT 149 (CanLII) | Moffatt v. Air Canada | CanLII (archive.is)
5. Air Canada found liable for chatbot's bad advice on plane tickets | CBC News (archive.is), Jason Proctor, 15 February 2024, CBC News
6. Building Slack (archive.is), Ali Rayl and Johnny Rodgers