Things That Have Caught My Attention

Dan Hon's Weekday Newsletter

s4e14: The City Was Connected 

0.0 Station Ident

I am, as they say, “between things” at the moment, which is its own set of opportunities and, uh, the opposite of what an opportunity is.

Because I am between things and because I like to type things and start things that may or may not be finished or concluded in any way whatsoever, I’ll make this one simple because I’m dealing with a bunch of head stuff. Here’s something that I prepared earlier:

1.0 The City Was Connected

Everything was connected, and I was fucked.

I was late paying the water bill, so the parking meter refused service until I coughed up.

The meter said I had 30 seconds to pay the water bill before I had to move my car, and… I just froze. Then the meter attendant came. She said she was just doing her job as she booted my car, then looked down at her phone. Reminded me I hadn’t taken out my recycling.

This wasn’t turning out to be a good day.

She told me I was on my second strike: one more, and I’d lose streetlight privileges. I’d heard about that: a social shaming punishment. Streetlights would create a cone of darkness around just you. It sounded horrible. I shrugged. I didn’t care anymore. Of course, the next level was suspension of physical mail delivery and a karma dock on Nextdoor.

Fuck Nextdoor. Everything had gone to shit when they’d come in as the ‘social fabric platform’ when IBM connected the city. It’s not like the streetlights worked, anyway. In theory the full-spectrum, full-color LEDs were super smart, but they were IoT-dumb.

Some joker had leaked another cache of NSA zero-days for Windows Embedded last month and the lights had been useless ever since. At least the kids at the high school were having fun. The lights outside Elspeth High kept flashing ‘Mr. Franklin Is A Kiddy Fiddler’. There was no chance the school admins or police could figure out who did it and besides, they had other problems. Not least of which: I’d heard the receipt printers for school dinners were drawing a cock on each receipt now, with Franklin’s address.

Anyway, using streetlights to create a cone of darkness for social shaming if you hadn’t paid a water bill? Which idiot thought of that?

I didn’t have it as bad as the single parents, though. An irritation for me, a complicated connected hell for single mothers.

Everything was connected, so if you had a fidgety kid at school that day, you got an automatic fractional WIC deduction as parenting punishment. We had DeVos to thank for that one.

It was all easily fixed, though.

Send some bitcoins to the right address and someone – probably in China – would fix everything for you. Only for 24 hours, mind. Not a good business model, otherwise. Same as it ever was: just figure out who to pay to grease the tubes. In the meantime there was easy if boring work if you wanted it. All those smartlights needed a hard reboot thanks to a daily buffer overflow. Essentially you’d get paid to take a walk and say hello to all the lampposts. A morning connected constitutional.

I felt sorry for the kids as I walked past the park. All the play structures were smart now, promising to add NikeFuel to each kid’s score. They’d single out the pudgier kids and hawk NikeFuel points to them, blinking their names in a discouraging display. NIKE FUEL BONUS FOR JASON. They got hacked too.

“JASON ATE ALL THE PIES” it would flash, later.

Pity the single dad who took his kids to the park on an access day though. If he hadn’t paid his support, all the play structures locked up:

“PLAYGROUND DISABLED. SUPPORT YOUR FAMILY, BRIAN. $632 OUTSTANDING. PAY WITH APPLE PAY OR SQUARE NOW.”

The other parents and kids scowled at Brian. He just turned around and took his kids with him. Ten seconds after they left, the playground unlocked.

So yeah, our connected city’s great. We’ve never been happier.

If you’d like more, the internet-short-fiction-stuff will continue whenever my brain chemistry allows at http://tinyletter.com/umbra where the tiny fanbase for the other thing I wrote can slowly accumulate like operations on a useless blockchain.

Cheers until something has caught my attention,

Dan

s4e13: A future of working 

0.0 Station Ident

12:40pm on Friday during a lunch break, in the middle of doing some sort of writing. This particular set of writing – typing for coins, as some of my friends have once described it – has as its goal changing the way a group of people do things from one way (unsurprisingly, given my current work, that way is: traditional, waterfall, planning-driven software development) to another way (iterative, multi-disciplinary digital service teams). I am under no illusion that there is a magic spell that can be typed out and, when it’s read by someone else, change the way that they do things. That comes with practice. But, it’s an interesting exercise because it also (more or less) follows on from what I was last writing about. That traditional method of software development is plan-and-process heavy. The assumption is that casting a sort of documentation spell – writing down *how things should be done* – is, in the best case, the most reliable way to assure a certain outcome, and in the worst case, the minimally viable way to ensure that outcome in most cases.

I suppose one way of looking at agile/iterative versus waterfall is as the failure of the industrialization of software development. The invention of the organization and the development of process made *some things* repeatable, but they came with the realization that “figuring out the right software to build and then building it” couldn’t be industrialized to the extent that the people implementing the process could do so blindly.

In other words, and to paraphrase Teen Talk Barbie, software is hard and you can’t produce good product by rote. Plans being followed produce the illusion of decisions having been made through consideration and comprehension, but the output needs to be validated. And these types of plans don’t really tell you *how* to make the decisions, they merely say: a decision should be made here.

Or, the plans remove any ability for consideration of context. Software is too hard, too difficult, too… multi-variate, for a recipe to be followed.

This is less a station ident and more an entire newsletter on its own, when I had designs on writing something else. So, I’ll hold this, and go write The Other Thing.

1.0 A future of working

I have a New Yorker subscription because they had an offer on for some ridiculous amount that I think was actually losing them money (seriously, it was something on the order of less than $10 for three months’ worth of issues). I decided to take it because this year I’m spending more money on, as they say, Supporting Quality Journalism, and after getting my Washington Post and New York Times subscriptions, I figured the very least I could do was to throw some money toward the organization behind the best LinkedIn Cartoon Ever[0]. Anyway. I have this subscription and now bits of the New Yorker arrive in my house, a bit like how the New Yorker currently arrives at my house (that is to say one method is: through the air, chopped up into bits and reconstituted onto an LCD matrix; the second method is: through a small slot, tightly bound together, and printed at a very high resolution). In both cases, the New Yorker arrives, eventually, in the bathroom, where many things are read or, in the New Yorker’s case, just piled up near the toilet to make it look like our household is populated by respectable, smart, well-informed middle-class adults with a high-fiber diet.

It was in the bathroom that I ended up starting to read one of the stories in the magazine’s Innovator’s Issue (a title that already causes some sort of eye-roll) and in particular, Nathan Heller’s piece, “Is The Gig Economy Working?”[1]

Heller’s piece was, for me at least, something that I felt I could skip over because it followed a familiar format. Here’s someone who’s not really involved in the Gig Economy. But, they are involved in the Gig Economy because they can buy services through it! By using services provided by the Gig Economy (in this case, by someone who comes and hangs some pictures and does other odd DIY jobs in Heller’s apartment), Heller muses about the nature of The Gig Economy. It felt like about 75% (a scientific number, I’ll have you know) was Tropes About The Gig Economy:

– The work isn’t particularly fulfilling, but can be profitable for some people
– Some companies don’t treat their employees (who aren’t employees, they’re contract workers) very well
– TaskRabbit
– Uber
– Young People, Specifically, Millennials
– Airbnb
– Airbnb management companies

Once we’d gotten through all of that, then, I thought, the interesting part. If the social safety net is broken, then what do we need to replace it? Is it time to get rid of the distinction (in America, at least) between 1099 contractors and W-2 employees and the benefits and rights that go to one category (healthcare, vacation, certain job protections and the prospect of a pension) and not to the other? Heller gets to talk to Tom Perez, who’s the current chairman of the DNC, and the two go off to visit Hello Alfred which – excuse me – is literally an affordable, democratized home help butler service, whose innovation is that workers are treated like employees, and are W-2 employees.

This has been a long recap to get to, I feel, the interesting part.

Much of what has been written, it seems, about the way work is changing has been about the gigging part. The schedule-free work that can be squeezed either into gaps around existing employment (the Lyft drivers I meet who’re working two jobs) or that fills gaps encountered by boomers who’ve cashed out, have their final salary pensions and mortgages paid off and are literally working to stave off boredom, a sort of mental health wellness prescription that also happens to earn them money.

How interesting for the gig economy to cater both to the bottom of Maslow’s hierarchy, as a way for those struggling to exchange time for the cash needed to afford rent and food, and to the middle, to meet the psychological need for contact with other people, any people, instead of living out days alone with the television.

That’s the gig economy. There’s a whole other part of how work is changing that doesn’t get covered as much, and a friend reminded me of it over breakfast. That piece of conversation has stuck together with a bunch of other tiny little data points and now they feel like they’re blaring at me a little bit.

Patreon pulls together a bunch of threads, theories and assumptions that internet old timers had been thinking about, puts them into practice and now, four years old, appears to be unrepentantly and aggressively validating them with cold hard recurring cash.

The shape that Patreon takes in my head is something of a mish-mash of 1,000 True Fans[2], universal basic income[3], moat-building, genuine platform creation and the introduction of a (for now) benevolent dictator.

Forget the transaction-based “I need a task done and a human body to do it” work that is first against the automation wall when the robots come, gradually and then quickly. Patreon covers what (naively?) feels like intrinsically human creative work: it’s a way for people who create culture that’s valuable to other people to be consistently, regularly supported by their audience.

Patreon is the way that people can monetize their 1,000 true fans not on a one-off transaction basis (who could possibly live that way?) but on a recurring monthly subscription basis. Or, in other words, ask their 1,000 true fans for a monthly salary.

The numbers, I’ve heard, go only up and to the right. They may not be going quickly, but they do so steadily.

From Patreon’s point of view, they’ve got a pretty interesting product/platform: if you’ve got an audience, you can use Patreon to get paid by it. Patreon can sit in the middle – they’re doing valuable work, because it’s hard to collect recurring revenue on your own, so the infrastructure is helpful – but they’ve also got a significant moat. The billing relationship is between subscriber and Patreon, and (full disclosure, I haven’t actually looked into this) the switching costs in financial billing relationships are high! If, say, I’m a successful creator on Patreon who’s pulling down a few thousand dollars a month and a competitor comes along who’ll offer a better rate, or services that might suit me better, how easy will it be for me to persuade all my patrons to switch to the new provider? It’s hard enough getting people from one mailing list to another these days.

Making it easy for creators to move from one Patreon-like service to another would be the ethical thing to do, but it wouldn’t be good business sense. You can hear the drip of VC saliva at the gate: what better platform lock-in incentive than *your monthly paycheck*?

There’s a lot of good that Patreon could do too, though. At some point, they start to feel more like a quasi-employer or staffing agency, albeit one that provides staff to hundreds or thousands of customers at once. For those high-performing creators, Patreon could offer benefits like health insurance. My naive understanding of the market is that they could even do so and make money: present Patreon creators as a pool, offer them to Hiscox as people needing commercial insurance, say, and the customer acquisition fee that Hiscox or whoever would pay out could go any which way: to Patreon completely, back to the creator, split, or used to reduce premiums.

I’m not completely certain that the divide between what’s traditionally thought of as the gig economy (TaskRabbit, Uber, Lyft, all of the failed home cleaning services) and Patreon is necessarily just one of “stuff that can’t be automated easily”, but it feels like a useful, correct-more-often-than-not heuristic.

Perhaps a better distinguishing feature is that the gig economy is tasks that are commodities and can be competed on in a marketplace. Patreon – and the non-Patreon-type work that we’re seeing, like the analysts and consultants who go independent and start charging direct subscriptions for their daily or weekly analysis – is different. What’s being bought here isn’t strictly substitutable, in my dim remembering of competition law. For creative or intellectual output, the value for certain consumers is in *Ben Thompson’s* analysis, not getting analysis in general. In comics, it’s getting *Lucy Bellwood’s* creative output, not “a series of pictures with text commentary that tell a story”.

In other words, services like Patreon allow for the possibility for people to get value for what makes them *them* – the thing that no-one else can copy or reproduce*.

* I say, “no-one else can”, because the obvious counter-example is a) made-to-order art reproductions from a photograph in the style of your favourite painter from China and b) a recurrent neural network doing exactly the same thing.

This is perhaps one of the reasons why I tried to train an RNN on my entire newsletter corpus to see if it could do what I do, and to my relief, the answer is that it (hopefully) makes less sense and is less useful than I am.
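If you’re curious what that experiment looks like, here’s a minimal sketch of a character-level RNN of the kind I mean – PyTorch is an assumption here, not a record of what I actually ran, and “newsletters.txt” is a hypothetical stand-in for the corpus:

import torch
import torch.nn as nn

corpus = open("newsletters.txt").read()      # hypothetical corpus file
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}   # character -> integer id
itos = {i: c for c, i in stoi.items()}       # integer id -> character
data = torch.tensor([stoi[c] for c in corpus])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)
    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()
seq_len = 128

for _ in range(1000):
    # Pick a random window of text; the target is the same window shifted
    # by one character, i.e. "predict the next character".
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i : i + seq_len].unsqueeze(0)
    y = data[i + 1 : i + seq_len + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling: feed the network's own output back in, one character at a time.
x, state, out = data[:1].unsqueeze(0), None, []
for _ in range(500):
    logits, state = model(x, state)
    probs = torch.softmax(logits[0, -1], dim=0)
    x = torch.multinomial(probs, 1).unsqueeze(0)
    out.append(itos[x.item()])
print("".join(out))

You train it to predict the next character, then let it hallucinate five hundred of them; the results are reliably newsletter-shaped and reliably not a newsletter.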

I made a reference to this a while ago on Twitter: for the people who value Mary Meeker’s internet trend reports, there’s nothing *in principle* (other than violating copyright) stopping you from putting together a corpus of Meeker’s trend presentations, training a model on it and, well, having it try to make more output presentations that are Sufficiently Advanced As To Be Indistinguishable From Meeker.

What someone might say at this point is that, sure, but that doesn’t take into account how you’d give that deep learning system access to all of the data that Meeker uses to generate her trend reports, and the answer to that is: why, is that harder than all the shit we throw at a self-driving car? Can Meeker even really explain how she comes to her conclusions to a sufficient degree of acceptance?

Maybe Meeker’s too hard, maybe we could just start off with neural-net-generated xkcd comics.

At that point, I’d like to formally propose that we set up a deep learning system that is a) trained on the corpus of Randall Munroe’s output, b) made to generate new comics, c) scored by community feedback as a reinforcement learning signal and, crucially, d) given its own Patreon account, so that it needs to attract patrons who will reward it with money so it can buy more GPU capacity on Amazon to learn how to make better comics.

Anyway, I digress.

Forget a gigging future of work. That’s boring. Patreon is interesting because it’s a future of work that appears to allow enough space and flexibility for a *human* future of work.

[0] A New Universal ‘New Yorker’ Cartoon Caption: ‘I’d like to add you to my professional network on LinkedIn.’ – The Atlantic
[1] Is the Gig Economy Working? – The New Yorker
[2] The Technium: 1,000 True Fans

OK. Back to writing documentation. It was nice to be back, however briefly. And as always, send notes, even if it’s just to say hi. If it’s to say “Hello, I’m interested in your ideas and would like to subscribe to your newsletter” even *that* is okay. If it’s to say “Your ideas suck, and this thought in particular sucks because [citation needed]” then I suppose that’s fair enough, but maybe think of a more compassionate way to say it? I’m not one of those deep-learning systems that doesn’t have a system for generating plausible emotional reactions.

Best,

Dan

s4e12: End of Process 

0.0 Station ident

Thursday, 13 April 2017. Today is the day that I formally discovered the composer Max Richter, whose work I’d encountered in bits and pieces (the first Netflix season of Black Mirror, parts of Arrival) but it wasn’t until I was out having tea this morning that his album Infra hit me with all the subtlety of… well, today’s not the day to be talking about large things happening.

1.0 End of Process

This is the story of joining the dots from a CEO’s letter to shareholders[0] to machine learning[1] to human organizations and management[2] and process to automation to a post-scarcity universe where humans still have an important job to do because they’re Culture[3] Special Circumstances[4] agents.

So. Jeff Bezos writes a letter to shareholders and it’s very good and interesting[0]. It is interesting to me for two things, the first being that Bezos is saying he’s terrified that one day Amazon might stop being focussed on outcomes and instead become focussed on processes. Bezos says that one should:

Resist Proxies
As companies get larger and more complex, there’s a tendency to manage to proxies. This comes in many shapes and sizes, and it’s dangerous, subtle, and very Day 2.

A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp. It’s not that rare to hear a junior leader defend a bad outcome with something like, “Well, we followed the process.” A more experienced leader will use it as an opportunity to investigate and improve the process. The process is not the thing. It’s always worth asking, do we own the process or does the process own us? In a Day 2 company, you might find it’s the second.

“The process is not the thing”. I would write that down and make giant posters of it and scream it from rooftops because so many times, in so many places, the process is not the thing. For example, when I think about things like governments requiring vendors to provide customer references and certificates (e.g. Project Management Professional certification or Agile certification), my first impolitic instinct is that this is a crock-of-shit process way of assuring a desired outcome. What I mean is: presumably the desired outcome is “how do we prevent projects from being a flaming tire fire of a disaster”? Well, one way of doing that is by requiring people with experience. How do we know if people have experience? Well, one way is seeing if they’ve passed a test. What kind of a test? Well, one that results in credentialing people as Certified Systems Engineers, for example.

Another bad example of this would be one where you, say, have a third party who’s providing oversight of a project and trying to make sure that the project is, er, “less risky”. *One* way of doing this would be to say that the project staff should have a list of risks (let’s call it a register-of-risks) and that if it exists, then that means the project staff have thought of risks and therefore, the project is “less risky” than if the register did not exist. The process for this might involve someone looking for the existence of a risk register, and if they saw one, they would say “oh good, they’re mitigating risks”, and if they didn’t, they’d say “haul this lot in front of whichever committee and publicly embarrass them”.

The process – going through the motions to see if there’s a risk register – *does not actually decrease or identify any risks*.

(Of course, it’s more complicated than that. Checklists[5] can help get things right – if the right things are on the checklist. Just having a checklist doesn’t assure the desired outcome, and checklists also don’t help you figure out what you don’t know.)

My suspicion – especially if you’ve been following along – is that processes are alluring for a number of reasons. They relieve the burden of decision-making, which in general we like because thinking is hard and expends energy. They also relieve us of the burden of fault: us humans are a fickle bunch and it’s easy for something to make us feel guilty (when we haven’t lived up to our own standards), or ashamed (when we don’t live up to others’ standards) or angry (when we’re stopped from achieving an outcome that we desire). The existence of process can be an emotional shield that means that we don’t have to be responsible. In Bezos’ example above, the junior leader who defends a bad outcome with “Well, we followed the process” is also someone who is able to deflect an attack on their character and on their sense of self-worth, if that sense of self-worth is weighted to include the outcome of their actions.

Processes – rules – help us do more, more quickly. They mean that we can think about the outcome, design a set of processes and the processes promise that we can walk away, secure in the knowledge that the process will deliver the outcome. The world, naturally, doesn’t work like that, which is why Bezos wants to remind us that we should revisit the outcome every now and then.

Who wouldn’t want to have to make fewer decisions? Businesses attempt to automate their processes – a series of decisions to achieve a particular outcome – and the promise of computers was that at some point, they would make decisions for us. They don’t, at least not really. There’s a lot of work that goes into figuring out what business processes are and then, these days, trying to translate them into some sort of Business Rules Engine, which is exactly the kind of software that I thought I’d be terrified about. Business Rules Engines appear to be a sort of holy grail of enterprise software where anyone can just point-and-click to create an, um, “rule” about how the business should work, and then Decisions Get Made Automatically From Thereon In.

(My suspicion is that they’re not all they’re cracked up to be, but I suppose the promise was that a business rules engine would be *somewhat* better than hard-coding “the stuff that the business does” into, well, code, so now there’s an engine and you have another language in which you write your rules that maybe non-software people can understand but let’s just skip to the end and say that ha ha ha things are complicated and you wish they were that easy).
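To make that concrete, here’s a toy sketch of the shape of the thing – not any real product’s API, and every name in it is made up for illustration: rules live as data (a condition plus an action) that someone supposedly authors without writing “real” code, and an engine walks through them in order:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # a predicate over a "fact" record
    action: str                         # the decision to emit if it matches

# The point-and-click promise: these are data, not code, so "the business"
# can supposedly edit them without calling in the programmers.
RULES = [
    Rule("vip_fast_track", lambda o: o["customer_tier"] == "vip", "approve"),
    Rule("big_order_review", lambda o: o["total"] > 10_000, "manual_review"),
    Rule("default", lambda o: True, "approve"),
]

def decide(order: dict) -> str:
    # Walk the rules in priority order; first match wins.
    for rule in RULES:
        if rule.condition(order):
            return rule.action
    return "manual_review"  # unreachable with a catch-all rule, but safe

print(decide({"customer_tier": "standard", "total": 25_000}))  # manual_review

The promise is that changing how the business works means editing RULES instead of shipping new software; the catch, as above, is that the rules are still code, just wearing a point-and-click disguise.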

But, here’s the next bit, where Bezos talks about machine learning:

Over the past decades computers have broadly automated tasks that programmers could describe with clear rules and algorithms. Modern machine learning techniques now allow us to do the same for tasks where describing the precise rules is much harder.

Machine learning techniques – most recently and commonly, neural networks[1] – are getting pretty unreasonably good at achieving outcomes opaquely. In that: we really wouldn’t know where to start in terms of prescribing and describing the precise rules that would allow you to distinguish a cat from a dog. But it turns out that neural networks are unreasonably effective at doing these kinds of things (unless you seed their input images with noise that’s invisible to a human, for example[6]). We haven’t gotten anywhere near what modern neural networks can do, I think, with expert systems, which are (superficially) the equivalent of a bunch of if/then/else rules that humans would have to sit down and describe.

No, with modern machine learning, we just throw data at the network and tell it what we want it to see or pick out. We train it. And then, it just… somehow… does that? I mean, it’s difficult for a human to explain to another human exactly how decisions are made when you get down to it. The answer to “Why are you sure that’s a car?” can get pretty involved pretty quickly.

Instead, now we’re at the stage where we can throw a bunch of images at a network, label the ones that contain cars, and then *magic happens* and we suddenly get a thing that can recognize cars. Or, if you’re Google, “optimize the heating and cooling of a data center”, because a car’s pretty much the same thing as toggling outputs.
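For the avoidance of magic, here’s roughly what “throw labeled images at a network” looks like in practice – PyTorch by assumption again, with random tensors standing in for the labeled photos, so this is the shape of the training loop rather than something that would actually recognize cars:

import torch
import torch.nn as nn

class TinyCarClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),  # two classes: car / not-car
        )
    def forward(self, x):
        return self.net(x)

model = TinyCarClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    # Stand-in for a real DataLoader over labeled photos: batches of
    # random 64x64 "images" with random car/not-car labels.
    images = torch.randn(8, 3, 64, 64)
    labels = torch.randint(0, 2, (8,))
    loss = loss_fn(model(images), labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Nobody wrote down a rule for what a car looks like; whatever "process"
# there is now lives in model.parameters() - the weights.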

If my intuition’s right, this means that the promise of machine learning is something like this: for any *process* you can think of where there are a bunch of rules and humans make *decisions*, substitute a machine learning API. It means that I now think – I think? – that machine learning doesn’t necessarily threaten jobs like “write a contract between two parties that accomplishes x, y and z” but instead threatens jobs where management people make decisions.

As I understand it, one of the arguments of AI-is-coming-to-steal-all-our-jobs is that automation happened and it’s not a good idea to be paid to do something that a robot could do. Or, it’s not a great long-term plan for your career to be based on something that any other sack of thinking meat could do, like “drive a car”. But! In our drive to do more faster, we’ve tried (cf United Airlines) to systematize how we do things, which relegates a whole bunch of *other* people into “sack of meat that doesn’t even really need to think”. If most of our automation right now is about automation of *information* but still involves a human in the loop, then all those humans might just be ready for replacement by a neural network.

These networks that are unreasonably effective are the opposite of how we do things right now – we think about outcome and then we try to come up with a process that a bunch of thinking sacks of meat can follow because we still think that a human needs to be involved in the loop and because those sacks of meat do still have something to do with making a decision.

But the neural networks work the other way around: we tell them the outcome and then they say, “forget about the process!”. There doesn’t need to be one. The process is *inside* the network, encoded in the weights of connections between neurons. It’s a unit that can be cloned, repeated and so on that just *does* the job of “should this insurance claim be approved”.

If we don’t have to worry about process anymore, then that lets us concentrate on the outcome. Does this mean that the promise of machine learning is that, with sufficient data, all we have to do is tell it what outcome we want? Look, here’s a bunch of foster applications. *If* we have all of this data, then what should the decision be? Yes or no?

The corollary there – and where the Special Circumstances[4] agent comes in – is that humans might still have a role where there’s not enough data to train a network. Maybe the event that we’re interested in doesn’t have a large enough n. Maybe it’s a completely novel situation. Now I’m thinking about how a human might get *better* at being useful for such situations.

So. Outcome over process. The architecture of neural networks both enables and requires focussing on the outcome, because the process is opaque – disappeared into the internal architecture of the network, where we have nothing to do with it.

How unreasonably effective will such networks be at business processes, then?

[0] EX-99.1
[1] The Unreasonable Effectiveness of Recurrent Neural Networks
[2] ribbonfarm – experiments in refactored perception (oh, just go and read all of Ribbonfarm)
[3] A Few Notes on the Culture, by Iain M Banks
[4] Special Circumstances – Wikipedia
[5] A Life-Saving Checklist – The New Yorker
[6] Attacking Machine Learning with Adversarial Examples

OK. Time for bed. As always, I appreciate any and all notes from you.

Best,

Dan

s4e11: We need to talk about algorithms 

0.0 Station Ident

7:56am on Wednesday 12 April 2017, and I’ve made a mistake – I am over an hour and a half early for an appointment. This is less than ideal for reasons of family and household logistics, but I suppose I’m here and there’s not much I can do about it now. Now’s probably as good a time as any to mention that I have another newsletter where I send out the occasional affirmation[0]; I imagine a good one for today would be that everyone makes mistakes.

[0] I can by Dan Hon

1.0 We need to talk about algorithms

On Sunday night, United Airlines removed a man from a flight that was overbooked. I say “overbooked” – it doesn’t sound like there were more tickets sold for the flight than there were seats, instead the airline claimed that the flight was overbooked because it needed to use four seats to transport United crew. The airline went through their usual process to find volunteers by offering compensation – vouchers that could be applied to another flight – but could not find any. In the end, four passengers were selected at random by computer[0] and when one of them refused, United staff called security, who forcibly removed the passenger, dragging him off the plane, bloodied and bleeding.

It goes without saying that there’s a lot wrong with what happened. It isn’t clear the flight was overbooked, because paying passengers were removed (“re-accommodated”, in the words of United CEO Oscar Munoz, and PR Week’s Communicator of the Year) in favour of crew who needed to be transported – arguably a failure of the airline’s own scheduling and logistics (of course, this is only a failure if you believe that airlines should prioritize seating paying passengers over their own crew logistics needs). But what many people have picked up on is how something like this happened: where multiple United staff – regular human beings, who presumably do not set out in the morning to enact a chain of events that will result in assault – didn’t feel like there was anything they could do once the events were set in motion.

I wrote a tongue-in-cheek set of tweets about this[1], which went a bit like this:

United CEO To Discipline Computer Algorithm That Resulted In Passenger Removal

The CEO of United Airlines, Oscar Munoz, 57, said in a statement that the algorithm involved in Sunday’s removal of a passenger from a United flight had made a ‘bad decision’ and would be placed on administrative leave during the airline’s internal investigation.

The algorithm had selected passenger David Dao and three other passengers for what Munoz called “involuntary denial of the boarding process”. Munoz confirmed that the wide-ranging investigation would also look into previous decisions made by the algorithm.

Industry software engineers were quick to jump to the algorithm’s defense. Aldous Maximillion, an artificial intelligence researcher at internet portal Yahoo! said that “the actions of one algorithm should not tar all algorithms.”

The algorithm in question is not the only one working at the airline. In its last 10-K filing, United disclosed that over 200 algorithms are employed at the airline, making many varied decisions throughout the day. A computer programmer at United commented off the record that many algorithms at the airline had been overworked recently.

Interviewed at Chicago airport where Sunday’s incident occurred, many United staff remarked positively about their interactions with the airline’s algorithms.

Tracy S., 29, a United gate attendant who was present at the incident on Sunday, spoke fondly of the resource scheduling and passenger seat assignment algorithm that made Sunday night’s decision.

“[The algorithm] always communicates politely,” she said, gesturing at the flat screen monitor, where she explained instructions from the algorithm would appear.

Tracy said that the algorithm was much more reliable than the human scheduler it had replaced, and added that the algorithm had not made any untoward sexual advances.

When the algorithm’s instructions came through, none of the attendants thought anything of it, said Tracy. “Nobody could have predicted this,” she said.

Rival companies American Airlines, Delta and Southwest were asked to comment on their employment of algorithms, but none had commented at time of press.

Okay, so it’s funny (well, *I* think so) if you treat algorithms like people but I frequently find that I have to explain the more serious points that I’m making, albeit pretty obtusely. Which are:

First: sure, we can have a discussion about how algorithms-are-just-rules, but I think that denies the folk and common understanding that the way that we use the word algorithm now is as a computer program. I mean yes: a recipe in a cookbook is an algorithm (thank you, secondary school computer teaching) but these days I think it’s safe to assume that unless we explicitly say otherwise, when someone says algorithm we *mean* “a program running on a computer beep boop”.

Second is the view, pointed out by Josh Centers, that United is a good example of how dehumanized we’ve let computers make us[2]. My reaction at the time was pretty hot-headed (emotions and tempers run high on birdsite, recently), and I said: “Bullshit. It’s the perfect example of how dehumanized we’ve chosen to be. Computers have nothing to do with it.”[3] For that I owe Josh an apology, because computers have *something* to do with it, and it’s, well, never a good idea to say never.

On preview, Josh is right: he says that it’s an example of how dehumanized we’ve *let* computers make us become, and the use of the word ‘let’ implies that we still had a choice. To use a popular analogy, we’ve outsourced deciding things, and computers – through their ability to diligently enact policy, rules and procedures (surprise! algorithms!) give us a get out of jail free card that we’re all too happy to employ.

In one reading, my reaction is to deny the environment that we make decisions in, partly because *we* created the environment. None of these rules come into existence ab initio; the algorithm that randomly selected passengers for “involuntary denial of the boarding process” (which in any other year would be a candidate for worst euphemism, but here we are in 2017) was conceived of and put into use *by a human who made a choice*.
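To be concrete about how human those choices are, here’s an entirely hypothetical toy version – United’s real system isn’t public, and every rule below is invented – of what “the computer selected four passengers at random” might look like on the inside:

import random

def select_for_removal(passengers: list, seats_needed: int) -> list:
    # A human chose to exempt elite flyers; a human chose to rank by fare
    # paid; a human chose randomness to break ties among the cheap seats.
    eligible = [p for p in passengers if p["status"] != "elite"]
    eligible.sort(key=lambda p: p["fare_paid"])   # cheapest fares first
    pool = eligible[: seats_needed * 2]           # candidate pool
    return random.sample(pool, k=min(seats_needed, len(pool)))

passengers = [
    {"name": "A", "status": "none", "fare_paid": 120},
    {"name": "B", "status": "elite", "fare_paid": 95},
    {"name": "C", "status": "none", "fare_paid": 480},
    {"name": "D", "status": "none", "fare_paid": 210},
]
print(select_for_removal(passengers, seats_needed=2))

Every line of that is a policy decision somebody made; the random.sample() at the end is the only part that’s “the computer deciding” anything.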

At the end of the day, we’re the ones who have the final decision, *but* it’s only compassionate (and realistic) to recognize the environment that leads to the decisions that we make.

A brief detour into cognitive psychology and (surprise) my personal experience with mental health: it is hard to make decisions. Most of the time we don’t. Most of the time we’re just reacting and we’re not, well, mindfully considering what it is that we want. The argument that we’re fully in control of all of our actions at all times is turning out to be either not true, or not as true as we had thought, and in any case, unhelpful. In our best moments of clarity, in our best moments of being present, we’re able to thoughtfully consider and slow down. Most of the time we’re operating on Kahneman’s System 1 of evolutionarily acquired heuristics and rough rules that mostly work. It’s only with deliberate effort that we can shift over to System 2.

My understanding – my folk, pop-sci reading version of this – is that System 2 just requires more energy, which is another way of saying that biological systems are inherently lazy and why bother spending *more* energy to do something if you could spend *less* energy doing something. If your mostly automatic, intuitive, non-conscious System 1 can do something more quickly and get it roughly right, then why not use it most of the time? It’s *hard work* to think.

And, to Centers’ point, we’ve offloaded our thinking.

I don’t think computers should necessarily shoulder the brunt of this from an academic point of view, but from a practical point of view it seems perfectly pragmatic to let them. Tools have always been about making things easier for us. Heuristics, policy, rules – whether formal or informal – exist in part to reduce the decision-making burden. How can we make things *easier* for ourselves? We’ve always wanted to outsource: in theory it’s a positive adaptive move because it means that you have more energy for… something else, like reproducing and making sure you’ve got lots of children so that the selfish replicators continue to get what they want.

This, I feel, is starting to get to the root of the issue.

I got into an argument – and am happy that I just decided to walk away instead of further engaging – with someone on Mastodon about opinionated software and the idea that Mastodon could incorporate something like a poison pill in its standard terms and conditions such that user data would be automatically deleted upon acquisition by a third party. On the one hand, hard cases like “how do we preserve user privacy in the event of corporate acquisition” can easily result in bad law, on the tridextrous hand, this quickly turned into me hearing for the nth time that “software is just a tool” and “shouldn’t impose values”.

To which the short answer is: tools *do* impose values, and in the same way that manners maketh man, in 2017 so software maketh humanity. Some – very general-purpose and elementary – tools might impose a minimal set of values, but user-facing software these days *requires* the developers to make hundreds if not thousands of decisions which act as opinions-made-real as to how things should work. 140 characters or 500? Should you add your voice to something while spreading it, or just spread it without adding your own commentary? Should you allow people to hide content behind an arbitrary warning or not? Decisions in software seek to turn opinions into reality through usage. If software *wasn’t* used then yes – it wouldn’t impose values. But tools do, and complex and opaque tools like “software” certainly do.

I’ve been slowly and gradually learning that the way my head works is by trying to connect everything together. Sometimes this works well, where I’ve been lucky to find rewarding and meaningful work that benefits from seeing the connections between things, making conclusions and suggesting how things might work differently. Other times this works terribly, where everything becomes a mutually reinforcing web that can drag me down into debilitating depression.

At the moment, this is what I’m seeing, and there are nascent connections forming, being reinforced and culled between all of these nodes:

– in the same way that society functions well when people can find out and understand the rules of society[citation needed], does the same principle apply to rules implemented in software?
– given our understandable and apparently unavoidable tendency to outsource decisions thanks to our inherited cognitive architecture, can we assume that we’ll *continue* to outsource decision-making and rule-processing? And at an accelerating rate, too?
– “enterprise” businesses do this already with their predilection for things like “rules engines,” which I always feel are some sort of unobtainable holy grail of “if only we didn’t need any humans and a computer could just make all of our decisions for us and, instead of implementing those rules programmatically, what if we had a somewhat higher-level language and interface for doing so?”
– decision fatigue is real
– when was the last time you asked someone to decide something for you?
– how far off are we from the algorithmic equivalent of “This area contains naturally occurring and synthetic chemicals that are known to cause cancer or birth defects or other reproductive harm”? A sort of “This product contains algorithms that are known to affect your ability to make conscious decisions of your own volition”?
– To the above, Peter Watts would say: “Where have you fucking been? You’ve never been in control. We’d have to slap a warning sign on *everything*.”
– How long until the next management fad is to include mindfulness training and regular mindfulness sessions *throughout the day* for employees to help them make decisions more consciously? You can still make decisions in response to your emotions, but at least you’ll be aware of them.

I feel this is less a question of what technology wants. This is more a question of: we are tool-builders. Perhaps building tools – especially tools that make decisions – is in large part an evolutionary imperative to reduce the energy burden of one of the most expensive organs in our body? Do we end up with a brain that just wants to experience qualia without having to *decide* anything?

In the meantime, though. Our decisions have impact in the real world and the real world contains other people. Outsourcing decisions is something we do, because it’s easier – and less stressful for us – for something else to make a decision. It’s nice to have something to blame, an algorithm to point to, instead of taking responsibility. We can say: the algorithm was fallible. We tried to do our best, but in the end, what could we do?

[Twenty years ago, someone like me would be an undergrad at university and deeply unimpressed at the groundbreaking brain-in-a-vat philosophy espoused in The Matrix. But on the other hand: what if seeing the underlying code of the world wasn’t the software, but seeing *all* of the underlying rules? Seeing the complete System of the World?]

So. To tie all this together.

Algorithms make decisions and we implement them in software. The easy way out is to design them in such a way as to remove the human from the loop. A perfect system. But, there is no such thing. The universe is complicated, and Things Happen. While software *can* deal with that, in our more sober moments, in our System 2 moments, we can take a step back and say: that is not the outcome we want. It is not the outcome that conscious beings that experience suffering deserve. We can do better.

United are merely the latest and most visible example of a collection of humans who have decided to remove themselves from the loop and to blame rules – that they came up with – rather than to accept responsibility and to do the hard work.

When we design systems and algorithms, we must make a decision: are these to help us provide *better* outcomes for humans who think and feel, or are they a way for us to avoid responsibility and to merely make things easier for us? It means choosing to do the hard thing over the easy thing.

And, in a trite way (but, as someone pointed out to me on Twitter, sometimes the trite things are true), perhaps it just starts with stopping to think.

More on this later, I expect.

[0] Officer Who Dragged Man Off United Flight Gets Suspended
[1] Dan Hon on Twitter: ““United CEO to discipline computer algorithm that resulted in passenger removal.””
[2] Josh Centers on Twitter: “The United story is the perfect example of how dehumanized we’ve let computers make us.”
[3] Dan Hon on Twitter: “Bullshit. It’s the perfect example of how dehumanized we’ve chosen to be. Computers have nothing to do with it. https://t.co/zZxonSu0Am”

As ever, I love to get notes from you, even if they’re just saying ‘hi’, and I do my best to not get freaked out if you disagree with me! So – feel free to send a note. Even if it’s just a single emoji.

Best,

Dan

s4e10: Stages of Transformation 

0.0 Station Ident

Writing this on Saturday, 8 April 2017 in the afternoon and it’s not a normal day to be writing a newsletter. I’ve spent some time in Sacramento this week having some meetings with the right people at the right time where you can make some little decisions that will change big things. Well, decisions are easy*: the work will be hard.

* Counter-argument: no, they’re not easy at all. It’d be more accurate to say that decisions can be hard and the work will be hard. Otherwise the latter wouldn’t be work?

1.0 Stages of Transformation

OK, so Niamh Webster tweeted a photo earlier this week[0] and in a rare break with tradition, to show that I’m *flexible*, here’s an embedded tweet so you can see the image:

Ugh, which totally didn’t work, so *here’s* an inline image:

Someone presenting a slide at a conference that says: “If you apply digital to a thing that’s broken, you’ll have a broken digital thing.”

to which I said: “This is true. Most of my work is in helping people change things. No-one likes being told their thing is broken.”[1]

which is actually two completely different things: a) changing things that are “broken”, and b) an observation about things that are “broken” and how you can move from something that isn’t as good as it could or should be (ie: broken) to something that *is* as good as it could or should be (ie: shows understanding of and meets user needs).

You could call a pithy version of this (which isn’t pithy! It just *sounds* pithy; it’s only pithy if you treat it pithily) The Three Stages of Digital Transformation[2]:

1. Denial: our thing is not broken
2. Anger: we hate you for telling us
3. Acceptance: holy crap our thing is broken

or, if you’re following a more established model and you have more space because you’re not quoting another tweet[3]:

1. Denial and isolation (our thing is not broken, we’re the only ones with this problem)
2. Anger (we are angry that we’ve been told our thing is broken)
3. Bargaining (it’s not that broken, look we’re already fixing it)
4. Depression (it’s too hard, we’ll never change)
5. Acceptance (things are broken, fault doesn’t matter, what can we do now)
6. Transformation (changing things a bit at a time and understanding that there’ll always be relapse)

So, here’s where I get to talk about the fundamental interconnectedness of all things, pattern matching, and how dealing with mental health has a lot to do with digital transformation.

The first thing is: assume everyone is doing their best. No-one *wants* to do a bad job. There might be reasons why something isn’t as good as you might think it should be – but you don’t know if anyone else thinks it should be better, either! Frequently enough for it to be almost-always-true, you can probably assume that people are doing the best job they can in the circumstances they’re in. Those circumstances may be largely environmental, but they might include personal circumstances as well: people may not have the tools, knowledge or *practice* to do a better job, either. But that doesn’t mean they’re not trying.

The second is radical acceptance, which is difficult not to laugh at because for some people (me included) the phrase has a certain wafty odour of new-agey thinky self-help book, but at the root of it what I managed to agree with was this: unless you completely accept the circumstances and the facts of the situation, you’re not going to get very far in the long run. Accepting those facts and the situation may well be painful and may cause anger or shame or guilt because, say, it turns out that you need to accept that you didn’t do as good a job as you intended to, or as the standard you hold yourself to demands. And for “you” you can substitute both individuals and organizations.

I said my reaction to the statement “If you apply digital to a thing that’s broken, you’ll have a broken digital thing” had two parts:

1) Digital on its own isn’t better
2) You can’t get to better without accepting and understanding where you are

When I work with people on digital transformation, one of the examples that I use is that “simply turning all our forms into e-PDFs and putting them on our website” would count as “digital”, but wouldn’t be any “better” than what they have right now. Most people agree with that. We then get to have a conversation about what “better” would actually mean. At some point this inevitably gets down to a conversation about what exactly it is that we’re trying to achieve. What process, or what outcome, is being subjected to digital transformation? How can we be sure that it will be *better*? What does better even mean?

Some people in the group then work out that for something to be better, of course, that means that the existing thing has to be worse.

There are two clear implications here that I try to deal with clearly and without any ambiguity:

1) It doesn’t matter *how* we got here, all that matters is that we’re *here*. For those people who were involved in getting to our current location – maybe they had written policy, maybe they had been involved with the design or management of certain processes – this isn’t a judgment on them or their work. We just have to accept that we’re here now, without blame or judgment.

2) Now that we’re here, what are we going to do?

The first point above isn’t something you can just say once and forget. Unless you work at it and unless you practice it, in our brains the past has the tendency to haunt the present when, most of the time *it doesn’t actually matter*, and if it *is* haunting the present, it doesn’t matter as much as you think it does.

There will be people who will feel a combination of angry and ashamed and guilty. Angry because they were on a path and it’s being closed off or being diverted and things aren’t working out the way they thought. Ashamed because now they feel their work wasn’t up to what “everyone else” expected. Guilty because they now might see how things could be different and they’re holding themselves up to new standards.

What I try to remember in times like this is that I’m going to transformation war with the army that I – that the organization – has. No-one is coming to fix it. The team has to own it. But also that *they did the best in the environment they had*.

Here’s a present case: in California, I’m working with the state to spin up more demonstrator projects after Child Welfare Digital Services. It is unfair to hold any other department, team or project up to the same standard of practice that CWDS is going through. On the one hand, it’s *true* that any particular project’s plan for a monolithic, waterfall procurement and deployment would probably fail, or wouldn’t be successful on any combination of time, money and getting-the-actual-job-done. On the other hand it’s *also true* that the environment *requires* them to put together a monolithic, waterfall procurement and deployment. They don’t know any better. They were doing the best they could, in the environment they were in.

The conversation and the opportunity is to say: unbeknownst to you, while you were working on this, the rules changed. There is a chance to do things differently now. The best way of moving forward is to *accept* that *despite* doing things the best you could under the old environment, the old environment only allowed for a very narrow kind of success, and that you had nothing to do with the shape of the old environment. It’d be like criticizing extremophiles living in a low-energy environment for not having gigantic body plans and a rich predator/prey ecosystem when *there’s just not that much energy around*, or criticizing people living in a rain shadow for not having a vibrant seaside sunbathing tourist industry. They didn’t choose the environment. They’re just there, doing the best they can.

[Note to self: do not make any sort of analogy to Ian Malcolm’s ‘life finds a way’ remark from Jurassic Park in relation to large technology products in government/large organization environments.]

All of this is to say: yes, things are broken. It doesn’t have to be anyone’s *fault*. And it might not be the done thing to tone-police and say, “well, can we not say broken? It would make people feel bad” but hey, it *will* make people feel bad. It’s also true that these things aren’t necessarily broken: they are certainly *doing something*. They may not be doing something as well as they could be – but I’ve gotten in trouble for this before. I’ve called a 30-year-old COBOL-based mainframe a piece of junk that’s broken and immediately been taken to task for it because it’s demonstrably *not* – it’s processing transactions and getting them done.

Coming in and saying that something people have spent time on, where people are trying to do their best, is broken isn’t necessarily the most respectful thing and it won’t necessarily help you turn the army you have into an army that’s fit for a different fight. It just means that you might have to spend even more time than you would otherwise in building compassion and sympathy for the current situation and environment.

I get where calling things broken comes from. Like I said, I’ve done it. It’s dramatic and it gets attention and for some people it might be the right thing to say when you need to persuade people about the need for change. But for others, it just might not be the best way to start.

There are always people who are just trying to do their best. In the end, they’re the ones who will see the potential for better than what they have right now, and they’re also the ones who will fight for it.

And with that, read this thread[4] from Meg Pickard about how digital transformation is a bit like being a midwife, complete with a terrible pun from yours truly.

[0] Niamh Webster on Twitter: “❤️ this. Digital (online/tech/whatever you want to call it) should just complement what you’re doing already – offline! https://t.co/XzUbJWDE7M”
[1] Dan Hon on Twitter: “This is true. Most of my work is in helping people change things. No-one likes being told their thing is broken. https://t.co/Kj3aE5dJVy”
[2] Dan Hon on Twitter: “digital transformation: 1. denial: our thing is not broken 2. anger: we hate you for telling us 3. acceptance: holy crap this is broken https://t.co/Kj3aE5dJVy”
[3] Dan Hon on Twitter: “the 6 stages of digital transformation are: 1/ denial & isolation 2/ anger 3/ bargaining 4/ depression 5/ acceptance 6/ transformation”
[4] Meg Pickard on Twitter: “@hondanhon A lot of the time I think my role is a bit like a midwife: you’ve got yourself into this situation, but you’re resistant to the next bit. >”

OK, lots of other stuff in my head at the moment:

* “algorithms”
* culture
* networked software as material, so what are the material properties of Mastodon?

which I’ll hopefully splurge out sometime.

Best,

Dan