s4e11: We need to talk about algorithms
0.0 Station Ident
7:56am on Wednesday 12 April 2017, and I've made a mistake - I am over an hour and a half early for an appointment. This is less than ideal for reasons of family and household logistics, but I suppose I'm here and there's not much I can do about it now. Now's probably as good a time as any to mention that I have another newsletter where I send out the occasional affirmation[0]; I imagine a good one for today would be that everyone makes mistakes.
[0] I can by Dan Hon
1.0 We need to talk about algorithms
On Sunday night, United Airlines removed a man from a flight that was overbooked. I say "overbooked" - it doesn't sound like there were more tickets sold for the flight than there were seats; instead, the airline claimed that the flight was overbooked because it needed to use four seats to transport United crew. The airline went through their usual process to find volunteers by offering compensation - vouchers that could be applied to another flight - but could not find any. In the end, four passengers were selected at random by computer[0] and, when one of them refused, United staff called security, who forcibly removed the passenger, dragging him off the plane, bloodied.
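For what it's worth, that "selected at random by computer" step needn't be anything exotic. Here's a minimal sketch, in Python, of what pure random selection might look like - United's actual system and its inputs aren't public, so the names and the uniform-randomness assumption here are entirely mine:

    import random

    def select_for_removal(passengers, seats_needed):
        # With no volunteers, pick seats_needed passengers uniformly at random.
        return random.sample(passengers, seats_needed)

    manifest = ["passenger %d" % n for n in range(1, 71)]
    print(select_for_removal(manifest, 4))

Five lines of the most mundane code imaginable, and yet.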
It goes without saying that there's a lot wrong with what happened. It isn't clear the flight was overbooked, because paying passengers were removed ("re-accommodated", in the words of Oscar Munoz, United CEO and PR Week's Communicator of the Year) in favour of crew who needed to be transported - arguably a failure of the airline's own scheduling and logistics (of course, this is only a failure if you believe that airlines should prioritize seating paying passengers over their own crew logistics needs). But what many people have picked up on is how something like this happened: how multiple United staff - regular human beings, who presumably did not set out that morning to enact a chain of events that would result in assault - didn't feel like there was anything they could do once the events were set in motion.
I wrote a tongue-in-cheek set of tweets about this[1], which went a bit like this:
United CEO To Discipline Computer Algorithm That Resulted In Passenger Removal
The CEO of United Airlines, Oscar Munoz, 57, said in a statement that the algorithm involved in Sunday's removal of a passenger from a United flight had made a 'bad decision' and would be placed on administrative leave during the airline's internal investigation.
The algorithm had selected passenger David Dao and three other passengers for what Munoz called "involuntary denial of the boarding process". Munoz confirmed that the wide-ranging investigation would also look into previous decisions made by the algorithm.
Industry software engineers were quick to jump to the algorithm's defense. Aldous Maximillion, an artificial intelligence researcher at internet portal Yahoo!, said that "the actions of one algorithm should not tar all algorithms."
The algorithm in question is not the only one working at the airline. In its last 10-K filing, United disclosed that over 200 algorithms are employed at the airline, making many varied decisions throughout the day. A computer programmer at United commented off the record that many algorithms at the airline had been overworked recently.
Interviewed at Chicago airport where Sunday's incident occurred, many United staff remarked positively about their interactions with the airline's algorithms.
Tracy S., 29, a United gate attendant who was present at the incident on Sunday, spoke fondly of the resource scheduling and passenger seat assignment algorithm that made Sunday night's decision.
"[The algorithm] always communicates politely," she said, gesturing at the flat screen monitor, where she explained instructions from the algorithm would appear.
Tracy said that the algorithm was much more reliable than the human scheduler it had replaced, and added that the algorithm had not made any untoward sexual advances.
When the algorithm's instructions came through, none of the attendants thought anything of it, said Tracy. "Nobody could have predicted this," she said.
Rival companies American Airlines, Delta and Southwest were asked to comment on their employment of algorithms, but none had responded by press time.
Okay, so it's funny (well, *I* think so) if you treat algorithms like people, but I frequently find that I have to explain the more serious points that I'm making, albeit pretty obliquely. Which are:
First: sure, we can have a discussion about how algorithms-are-just-rules, but I think that denies the folk, common understanding that the way we use the word algorithm now is to mean a computer program. I mean, yes: a recipe in a cookbook is an algorithm (thank you, secondary school computing lessons), but these days I think it's safe to assume that unless we explicitly say otherwise, when someone says algorithm we *mean* "a program running on a computer beep boop".
Second is the view pointed out by Josh Centers: that United is a good example of how dehumanized we've let computers make us[2]. My reaction at the time was pretty hot-headed (emotions and tempers have been running high on birdsite recently), and I said: "Bullshit. It's the perfect example of how dehumanized we've chosen to be. Computers have nothing to do with it."[3] - for which I owe Josh an apology, because computers have *something* to do with it, and it's, well, never a good idea to say never.
On preview, Josh is right: he says that it's an example of how dehumanized we've *let* computers make us become, and the use of the word 'let' implies that we still had a choice. To use a popular analogy, we've outsourced deciding things, and computers - through their ability to diligently enact policy, rules and procedures (surprise! algorithms!) - give us a get-out-of-jail-free card that we're all too happy to play.
In one reading, my reaction is to deny the environment that we make decisions in, partly because *we* created that environment. None of these rules come into existence ab initio: the algorithm that randomly selected passengers for "involuntary denial of the boarding process" (which in any other year would be a candidate for worst euphemism, but here we are in 2017) was conceived of and put into use *by a human who made a choice*.
At the end of the day, we're the ones who have the final decision, *but* it's only compassionate (and realistic) to recognize the environment that leads to the decisions that we make.
A brief detour into cognitive psychology and (surprise) my personal experience with mental health: it is hard to make decisions. Most of the time we don't. Most of the time we're just reacting, and we're not, well, mindfully considering what it is that we want. The argument that we're fully in control of all of our actions at all times is turning out to be either not true, or not as true as we had thought - and, in any case, unhelpful. In our best moments of clarity, in our best moments of being present, we're able to slow down and thoughtfully consider. Most of the time we're operating on Kahneman's System 1 of evolutionarily acquired heuristics and rough rules that mostly work. It's only with deliberate effort that we can shift over to System 2.
My understanding - my folk, pop-sci reading of this - is that System 2 simply requires more energy, which is another way of saying that biological systems are inherently lazy: why spend *more* energy to do something if you could spend *less*? If your mostly automatic, intuitive, non-conscious System 1 can do something more quickly and get it roughly right, then why not use it most of the time? It's *hard work* to think.
And, to Centers' point, we've offloaded our thinking.
I don't think computers should necessarily shoulder the brunt of this from an academic point of view, but from a practical point of view it seems perfectly pragmatic to let them. Tools have always been about making things easier for us. Heuristics, policies, rules - formal or informal - exist in part to reduce the decision-making burden. How can we make things *easier* for ourselves? We've always wanted to outsource: in theory it's a positive adaptive move, because it means you have more energy for... something else, like reproducing and making sure you've got lots of children so that the selfish replicators continue to get what they want.
This, I feel, is starting to get to the root of the issue.
I got into an argument - and am happy that I just decided to walk away instead of engaging further - with someone on Mastodon about opinionated software and the idea that Mastodon could incorporate something like a poison pill in its standard terms and conditions, such that user data would be automatically deleted upon acquisition by a third party. On the one hand, hard cases like "how do we preserve user privacy in the event of corporate acquisition" can easily result in bad law; on the tridextrous hand, this quickly turned into me hearing for the nth time that "software is just a tool" and "shouldn't impose values".
To which the short answer is: tools *do* impose values, and in the same way that manners maketh man, in 2017 software maketh humanity. Some very general-purpose, elementary tools might impose a minimal set of values, but user-facing software these days *requires* developers to make hundreds if not thousands of decisions, each an opinion-made-real about how things should work. 140 characters or 500? Should you have to add your voice to something while spreading it, or spread it without adding your own commentary? Should you allow people to hide content behind an arbitrary warning or not? Decisions in software turn opinions into reality through usage. If software *wasn't* used then yes - it wouldn't impose values. But tools do, and complex, opaque tools like "software" certainly do.
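To make that concrete, here's a deliberately tiny, hypothetical sketch - the names are mine, not any real platform's code - of how a single constant turns an opinion into a rule that every user lives within:

    # Hypothetical: one constant encodes an opinion about how people should
    # communicate; the validator enforces that opinion on every user.
    MAX_POST_LENGTH = 140  # or 500 - either number is a value judgment

    def validate_post(text):
        # Rejecting longer posts isn't a law of nature; it's a design decision.
        return len(text) <= MAX_POST_LENGTH

    print(validate_post("x" * 141))  # -> False: the opinion, enforced

Change the constant and you change, a little, how millions of people talk to each other.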
I've been slowly and gradually learning that the way my head works is by trying to connect everything together. Sometimes this works well, where I've been lucky to find rewarding and meaningful work that benefits from seeing the connections between things, making conclusions and suggesting how things might work differently. Other times this works terribly, where everything becomes a mutually reinforcing web that can drag me down into debilitating depression.
At the moment, this is what I'm seeing, and there are nascent connections forming, being reinforced and being culled between all of these nodes:
- in the same way that society functions well when people can find out and understand the rules of society[citation needed], does the same principle apply to rules implemented in software?
- given our understandable and apparently unavoidable tendency to outsource decisions thanks to our inherited cognitive architecture, can we assume that we'll *continue* to outsource decision-making and rule-processing? And at an accelerating rate, too?
- "enterprise" businesses do this already with their predilection for things like "rules engines," which I always feel are some sort of unobtainable golden rule for "if only we didn't need any humans and a computer could just make all of our decisions for us and instead of implementing those rules programatically, what if we had a somewhat higher-level language and interface for doing so?"
- decision fatigue is real
- when was the last time you asked someone to decide something for you?
- how far off are we from the algorithmic equivalent of "This area contains naturally occurring and synthetic chemicals that are known to cause cancer or birth defects or other reproductive harm"? A sort of "This product contains algorithms that are known to affect your ability to make conscious decisions of your own volition"?
- To the above, Peter Watts would say: "Where have you fucking been? You've never been in control. We'd have to slap a warning sign on *everything*."
- How long until the next management fad is to include mindfulness training and regular mindfulness sessions *throughout the day* for employees to help them make decisions more consciously? You can still make decisions in response to your emotions, but at least you'll be aware of them.
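On the "rules engines" point above, here's a toy, entirely hypothetical sketch of the dream being sold - rules as data, evaluated by a generic engine, with no human anywhere in the loop. None of these names come from any real product:

    # A toy rules engine: each rule is a (condition, action) pair, and the
    # engine enacts the first rule whose condition matches the situation.
    rules = [
        (lambda s: s["volunteers"] >= s["seats_needed"],
         lambda s: "compensate volunteers"),
        (lambda s: True,
         lambda s: "select %d passengers at random" % s["seats_needed"]),
    ]

    def decide(situation):
        # No person decides anything here; the engine diligently enacts policy.
        for condition, action in rules:
            if condition(situation):
                return action(situation)

    print(decide({"volunteers": 0, "seats_needed": 4}))
    # -> select 4 passengers at random

The appeal is obvious: the rules sit in one place, legible and editable. The catch is equally obvious: the fallback rule always fires, whatever is standing at the gate.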
I feel this is less a question of what technology wants. This is more a question of: we are tool-builders. Perhaps building tools - especially tools that make decisions - is in large part an evolutionary imperative to reduce the energy burden of one of the most expensive organs in our body. Do we end up with a brain that just wants to experience qualia without having to *decide* anything?
In the meantime, though: our decisions have impact in the real world, and the real world contains other people. Outsourcing decisions is something we do because it's easier - and less stressful for us - for something else to make a decision. It's nice to have something to blame, an algorithm to point to, instead of taking responsibility. We can say: the algorithm was fallible. We tried our best, but in the end, what could we do?
[Twenty years ago, someone like me would be an undergrad at university and deeply unimpressed by the groundbreaking brain-in-a-vat philosophy espoused in The Matrix. But on the other hand: what if seeing the underlying code of the world meant seeing not just the software, but *all* of the underlying rules? Seeing the complete System of the World?]
So. To tie all this together.
Algorithms make decisions, and we implement them in software. The easy way out is to design them in such a way as to remove the human from the loop. A perfect system. But there is no such thing. The universe is complicated, and Things Happen. While software *can* deal with that, in our more sober moments - our System 2 moments - we can take a step back and say: that is not the outcome we want. It is not the outcome that conscious beings who experience suffering deserve. We can do better.
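One hedged sketch of the alternative - again, my names, not anyone's real system - where the algorithm proposes but a human has to accept responsibility before anything irreversible happens:

    import random

    def propose_removal(passengers, seats_needed):
        # The computer still does the tedious part: generating a proposal.
        return random.sample(passengers, seats_needed)

    def remove_passengers(passengers, seats_needed, human_confirms):
        proposal = propose_removal(passengers, seats_needed)
        # The loop stays open: a person can reject the computed outcome.
        if human_confirms(proposal):
            return proposal
        return []  # no action unless a human takes responsibility for it

The difference is small in code and large in practice: someone, somewhere, has to say yes.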
United are merely the latest and most visible example of a collection of humans who have decided to remove themselves from the loop and to blame rules - that they came up with - rather than to accept responsibility and to do the hard work.
When we design systems and algorithms, we must make a decision: are these to help us provide *better* outcomes for humans who think and feel, or are they a way for us to avoid responsibility and to merely make things easier for us? It means choosing to do the hard thing over the easy thing.
And, in a trite way (but, as someone pointed out to me on Twitter, sometimes the trite things are true), perhaps it just starts with stopping to think.
More on this later, I expect.
[0] Officer Who Dragged Man Off United Flight Gets Suspended
[1] Dan Hon on Twitter: "“United CEO to discipline computer algorithm that resulted in passenger removal.”"
[2] Josh Centers on Twitter: "The United story is the perfect example of how dehumanized we've let computers make us."
[3] Dan Hon on Twitter: "Bullshit. It's the perfect example of how dehumanized we've chosen to be. Computers have nothing to do with it. https://t.co/zZxonSu0Am"
--
As ever, I love to get notes from you, even if they're just saying 'hi', and I do my best to not get freaked out if you disagree with me! So - feel free to send a note. Even if it's just a single emoji.
Best,
Dan