s12e08: AI but, you know, not bad; Liminal
0.0 Context Setting
It's a sunny and cloudy Friday, May 20 in Portland, Oregon, and you did not experience a time skip: I did not send a newsletter episode yesterday because I was sick, still.
I am marginally less sick now. I have to admit I feel somewhat terrible about the entire thing, because my whole schtick when coming back with season 11 was "I'm going to write every weekday for 15 minutes". And yesterday was the first day I missed in 57 consecutive weekdays.
But it wasn't the end of the world. The point was to set up a habit and make it a bit easier to get back on if -- wait, not if. When. When I fell off, the point was to make it easier to get back on. For some of us, that's hard. Just missing one can make you feel terrible. It's hard practicing kindness for yourself that way. I mean, I'm 42 and it's still hard.
Listening to: look, this is going to be offensive but I genuinely have the Free Guy Jodie Comer Dark Epic cover of Mariah Carey's Fantasy on single track repeat in the background.
Boss Level
Here's the "I derive benefit from this newsletter in my professional capacity and have the opportunity to support this writing through my employer":
... and a few more dates have opened up for 1:1 consulting time with me.
1.0 Some Things That Caught My Attention
AI but, you know, not bad
If I were thinking, apropos of nothing, about carrot-and-stick-like methods to encourage the not-crap use of AI and machine learning in government, I might think about things like this:
First, gosh, buying things like technology in government is difficult! I mean, buying things in general is difficult. It’s all very complicated, and with technology especially (the kind of thing that costs Lots of Money) there’s just so much to keep track of. You might have lots of lists and requirements and so on, and then the people who’re giving you the money to buy that technology might have lots of requirements as well! Before you know it, you’re drowning in lists.
What I’m saying is this: there might be the temptation to say hey, before you go off gallivanting to the AI store to pick out the latest Machine Learning Decision Making Algorithms to add to your sparkly outfit, could you just make sure you do all this other stuff? And while I understand the impulse here, on some level I think we need to be aware that the internal skill, experience, and expertise to procure competently may not… be reliably there?
Training data transparency
Shortcut version: the thing about machine learning is that there's a whole bunch of bias that can creep in, and I'm going to charitably say that the bias creeps in through training data even when you're trying your best because hey, we're not perfect and there's always room for things that we might miss. But if I were funding governments buying AI, I'd like to know what training data they used, and I'd like to be able to audit it. I'd say this would be a pretty (ha) easy requirement that people might get upset about, but then I'd say: well, that's interesting, because it gets into two (yeah, more than two) issues:
First: let's just say that as much of your secret sauce is in your implementation of the ML method, so your "algorithm" is protected that way. More on that later. I maybe don't care about your "algorithm" because honestly, it's just a more opaque version of my existing convoluted business rules. Actually, here's the more on it later part: what I do care about is bias in the results. I care about what your results are more, I think, than how you got to them. Or at least I care about that just as much, and if I'm not caring about it, well, I should be.
Second: if you're using a model in government then I kind of do want to know what dataset you used to train on? And if you're going to say "well, there's value in that dataset, we spent a lot of time curating that", then I also kind of want to be able to verify the curation you did because god forbid you're lying and you just, I don't know, grabbed a bunch of shit from reddit (I am grossly oversimplifying). In fact, if there is value in curating the training data and it's going to be used for public benefit, then maybe... it's in our interest to have a public, curated dataset for training? For the benefit of everyone? Huh. What about that. A shitty implementation of this would require machine learning systems in government to use a shitty dataset instead of a state-of-the-art one, or would even prevent you from improving upon the state-of-the-art.
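One way to make "verify the curation you did" concrete is a published dataset manifest: hash every training record and the set as a whole, so an auditor can check that the data a vendor trained on matches the data they claim to have curated, without the records themselves necessarily being public. A minimal sketch, with made-up record fields (nothing here is from any real procurement or standard):

```python
# Hypothetical sketch: a manifest of a curated training dataset, publishable
# for audit without publishing the (possibly sensitive) records themselves.
import hashlib
import json

def dataset_manifest(records):
    """Hash each record and the whole set; order-independent for the set."""
    record_hashes = [
        hashlib.sha256(json.dumps(r, sort_keys=True).encode()).hexdigest()
        for r in records
    ]
    # Sorting before combining makes the dataset hash independent of
    # record order, so shuffled copies of the same data verify identically.
    overall = hashlib.sha256("".join(sorted(record_hashes)).encode()).hexdigest()
    return {
        "record_count": len(records),
        "record_hashes": record_hashes,
        "dataset_hash": overall,
    }

# Illustrative, entirely invented training records.
training_data = [
    {"text": "example record", "label": "eligible"},
    {"text": "another record", "label": "ineligible"},
]
manifest = dataset_manifest(training_data)
```

An auditor handed the real records can recompute the manifest and compare hashes; a vendor who swapped in a bunch of stuff from reddit after the fact would not match.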
But then what I start thinking about is, sure: you've got a training dataset, but what I really want are replicable tests. What I want, maybe, is a set of data of suitably scrubbed People Who Apply For Benefits, and I want all of you bidders to run your model against that and show what your automated decisions would be. And I want that in a standard format. And I want to be able to check those and compare the results of those models against each other. In other words: I want a bake-off that's as close to production data as I can get. And I want those results to be public so they can be audited by third parties as well.
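The bake-off above could be as simple as: every bidder's model runs over the same scrubbed benchmark cases, and the decisions come out in one standard, comparable, publishable format. A minimal sketch, where the vendor models, field names, and thresholds are all invented for illustration (real bidders would plug in their actual systems):

```python
# Hypothetical sketch: a procurement "bake-off" harness that runs each
# bidder's decision function over the same scrubbed benchmark cases and
# emits results in a standard, auditable format.
import json

# Scrubbed benchmark: de-identified benefits applications (illustrative fields).
BENCHMARK = [
    {"case_id": "A-001", "household_size": 3, "monthly_income": 1400},
    {"case_id": "A-002", "household_size": 1, "monthly_income": 2600},
]

def vendor_x_model(case):
    # Stand-in for one bidder's model: approve below an income threshold.
    return "approve" if case["monthly_income"] < 2000 else "deny"

def vendor_y_model(case):
    # A second bidder, with a different threshold, so decisions can diverge.
    return "approve" if case["monthly_income"] < 3000 else "deny"

def run_bakeoff(models, benchmark):
    """Run every model over every benchmark case; return comparable records."""
    results = []
    for name, model in models.items():
        for case in benchmark:
            results.append({
                "model": name,
                "case_id": case["case_id"],
                "decision": model(case),
            })
    return results

results = run_bakeoff(
    {"vendor_x": vendor_x_model, "vendor_y": vendor_y_model}, BENCHMARK
)
print(json.dumps(results, indent=2))  # publishable for third-party audit
```

Because every record carries the model name, case id, and decision in the same shape, anyone (not just the purchasing agency) can diff where the models disagree.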
(There is a whole side point here about requiring additional checks and regulations in the procurement of AI/ML systems that just means organizations that don't have that expertise in house (which there isn't much of in the first place in the private sector) are going to need it, and they don't have the money in the first place. So then you give them the money and they go hire... vendors. Who... well, you get the idea. It's not a great situation, the result of decades of outsourcing, is it. But I guess it worked out great for vendors.)
Oh, the last thought: when I was chatting with Aaron Snow about this, he mentioned the difference between AI/ML decision-making and AI/ML "recommendations", where your first step is not making the decision, it's just providing an alternative so you can double check. Or a second opinion. Look, I get that you want to skip to the good part of cutting out humans from making decisions (just like, er, Skynet did?) so you can save money or burn, as it were, through more transactions, but let's just hold our horses a little first, okay, and see what the difference is?
But then if you are asking to see recommendations in-line with decision-making by a human, then perhaps there's opportunity for some sort of standard design pattern for implementing recommendations alongside decision-making in, e.g., case management systems. How would you show recommendations? How would you disclose them? Are there standard, best-practices patterns that could be used? Now that would be interesting, because then you've potentially got an implementation-agnostic layer on which to provide ML/AI-assisted decision-making.
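One guess at what that implementation-agnostic layer might look like: a recommendation record that any case management system could render next to the human's decision, carrying its own disclosure, and logging whether the human agreed (which is exactly what you'd want to audit later). All the field names here are invented; this is a sketch of the pattern, not any real system's schema:

```python
# Hypothetical sketch: a recommendation shown alongside (not instead of)
# a human decision, with disclosure and an audit trail of agreement.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    model_id: str            # which model produced this
    suggested_action: str    # e.g. "approve", "deny", "refer"
    confidence: float        # the model's own confidence, 0..1
    rationale: str           # human-readable disclosure of why

@dataclass
class CaseDecision:
    case_id: str
    recommendation: Optional[Recommendation] = None
    human_decision: Optional[str] = None
    agrees_with_recommendation: Optional[bool] = None

    def record_human_decision(self, decision: str) -> None:
        """The human decides; log whether they agreed, for later audit."""
        self.human_decision = decision
        if self.recommendation is not None:
            self.agrees_with_recommendation = (
                decision == self.recommendation.suggested_action
            )
```

The useful property is that the recommendation never becomes the decision by default: the human's choice is recorded separately, and divergence between the two is captured as data rather than lost.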
Just some thoughts on something that caught my attention.
Liminal
Ha, liminal. Anyway, I just remembered my personal experience of being in the Oregon Trail generation, the one that was old enough to come of age during the period in which the internet became a thing.
Here's one: look, I get the need to memorize things. Sure, it's useful. I rely on things I've memorized! Sometimes I have even memorized them without wanting to when it's painful or irritating to do so. Sometimes I have memorized them through sheer brute force repeated usage, and it's more like a muscle memory than an intentional directed rote memorization. I haven't even tried spaced repetition.
But there was a time in the mid-90s when you could've been in secondary school and you could've been in class being told you needed to memorize, I don't know, the names of a bunch of rivers. And you would have to square that with your privileged position of having an expensive computer and a modem at home and knowing that just out there, if you were sat with your computer, you wouldn't need to know what all the rivers were, you could just find them online. And then someone would inevitably say, ah: but you're not going to have your giant computer and modem line next to you at all times, are you?
And then dear young precocious you, already besotted with the promise of a Moore's Law, ubiquitous-cheap-computing, wireless-information existence, you'd say: duh, it'd be in my pocket. And it'll be there by the time I grow up and need to get a job.
And you know what? It was.
Didn't stop you from being in trouble for not memorizing all those rivers though, did it.
That's it for today. I'm still getting back into the swing of things and, it feels, my thoughts are similarly messy.
It's Friday -- how are you?
Best,
Dan