s4e12: End of Process
0.0 Station ident
Thursday, 13 April 2017. Today is the day that I formally discovered the composer Max Richter. I'd encountered his work in bits and pieces (the first Netflix season of Black Mirror, parts of Arrival), but it wasn't until I was out having tea this morning that his album Infra hit me with all the subtlety of... well, today's not the day to be talking about large things happening.
1.0 End of Process
This is the story of joining the dots from a CEO's letter to shareholders[0] to machine learning[1] to human organizations and management[2] and process to automation to a post-scarcity universe where humans still have an important job to do because they're Culture[3] Special Circumstances[4] agents.
So. Jeff Bezos writes a letter to shareholders and it's very good and interesting[0]. It is interesting to me for two things. The first is that Bezos is saying he's terrified that one day Amazon might stop being focussed on outcomes and instead become focussed on processes. Bezos says that one should
Resist Proxies
As companies get larger and more complex, there’s a tendency to manage to proxies. This comes in many shapes and sizes, and it’s dangerous, subtle, and very Day 2.
A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp. It’s not that rare to hear a junior leader defend a bad outcome with something like, “Well, we followed the process.” A more experienced leader will use it as an opportunity to investigate and improve the process. The process is not the thing. It’s always worth asking, do we own the process or does the process own us? In a Day 2 company, you might find it’s the second.
"The process is not the thing". I would write that down and make giant posters of it and scream it from rooftops because so many times, in so many places, the process is not the thing. For example, when I think about things like governments requiring vendors to provide customer references and the certificates (e.g. Project Management Processional certification or Agile certification), my first impolitic instinct is that this is a crock-of-shit process way of assuring a desired outcome. What I mean is: presumably the desired outcome is "how do we prevent projects from being a flaming tire fire of a disaster"? Well, one way of doing that is by requiring people with experience. How do we know if people have experience? Well, one way is seeing if they've passed a test. What kind of a test? Well, one that results in credentialing people as Certified Systems Engineers, for example.
Another bad example of this would be one where you, say, have a third party who's providing oversight of a project and trying to make sure that the project is, er, "less risky". *One* way of doing this would be to say that the project staff should have a list of risks (let's call it a register-of-risks) and that if it exists, then that means the project staff have thought of risks and that therefore the project is "less risky" than if the register did not exist. The process for this might involve someone looking for the existence of a risk register, and if they saw one, they would say "oh good, they're mitigating risks", and if they didn't, they'd say "haul this lot in front of whichever committee and publicly embarrass them".
The process - going through the motions to see if there's a risk register - *does not actually decrease or identify any risks*.
(Of course, it's more complicated than that. Checklists[5] can help get things right - if the right things are on the checklist. Just having a checklist doesn't assure the desired outcome, and checklists also don't help you figure out what you don't know.)
My suspicion - especially if you've been following along - is that processes are alluring for a number of reasons. They relieve the burden of decision-making, which in general we like because thinking is hard and expends energy. They also relieve us of the burden of fault: us humans are a fickle bunch and it's easy for something to make us feel guilty (when we haven't lived up to our own standards), or ashamed (when we don't live up to others' standards), or angry (when we're stopped from achieving an outcome that we desire). The existence of process can be an emotional shield that means that we don't have to be responsible. In Bezos' example above, the junior leader who defends a bad outcome with "Well, we followed the process" is also someone who is able to defend against an attack on their character, and on their sense of self-worth, if that sense of self-worth is weighted to include the outcome of their actions.
Processes - rules - help us do more, more quickly. They mean that we can think about the outcome and design a set of processes, and the processes promise that we can walk away, secure in the knowledge that the process will deliver the outcome. The world, naturally, doesn't work like that, which is why Bezos wants to remind us that we should revisit the outcome every now and then.
Who wouldn't want to have to make fewer decisions? Businesses attempt to automate their processes - a series of decisions to achieve a particular outcome - and the promise of computers was that at some point, they would make decisions for us. They don't, at least not really. There's a lot of work that goes into figuring out what business processes are and then, these days, trying to translate them into some sort of Business Rules Engine, which is exactly the kind of software I thought I'd be terrified of. Business Rules Engines appear to be a sort of holy grail of enterprise software where anyone can just point-and-click to create an, um, "rule" about how the business should work, and then Decisions Get Made Automatically From Thereon In.
(My suspicion is that they're not all they're cracked up to be, but I suppose the promise was that a business rules engine would be *somewhat* better than hard-coding "the stuff that the business does" into, well, code. So now there's an engine, and you have another language in which you write your rules, one that maybe non-software people can understand. But let's just skip to the end and say that ha ha ha, things are complicated and you wish they were that easy.)
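To make that concrete, here's a toy sketch in Python of what a rules engine boils down to once you strip away the point-and-click. Every rule, name and field below is made up for illustration; real engines are bigger and worse.

    # A toy business rules engine: the rules live as data rather than
    # as code, which is the whole pitch - someone non-technical could,
    # in theory, edit the list. All rules and fields are hypothetical.
    RULES = [
        # (rule name, condition over a claim dict, decision if it holds)
        ("auto-approve small claims", lambda c: c["amount"] < 500, "approve"),
        ("flag new customers", lambda c: c["customer_age_days"] < 30, "review"),
        ("reject over policy limit", lambda c: c["amount"] > c["policy_limit"], "reject"),
    ]

    def decide(claim):
        """Run the claim through the rules in order; first match wins."""
        for name, condition, decision in RULES:
            if condition(claim):
                return decision, name
        return "review", "no rule matched"  # a human gets it by default

    print(decide({"amount": 120, "customer_age_days": 400, "policy_limit": 10000}))
    # -> ('approve', 'auto-approve small claims')

Note what had to happen first: a human who understood the business sat down and articulated every single branch. The engine just evaluates them in order.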
But here's the second thing, where Bezos talks about machine learning:
Over the past decades computers have broadly automated tasks that programmers could describe with clear rules and algorithms. Modern machine learning techniques now allow us to do the same for tasks where describing the precise rules is much harder.
Machine learning techniques - most recently and commonly, neural networks[1] - are getting pretty unreasonably good at achieving outcomes opaquely. In that: we really wouldn't know where to start in terms of prescribing and describing the precise rules that would allow you to distinguish a cat from a dog. But it turns out that neural networks are unreasonably effective at doing these kinds of things (unless you seed their input images with noise that's invisible to a human, for example[6]). Expert systems - which are (superficially) the equivalent of a bunch of if/then/else rules that humans would have to sit down and describe - haven't gotten, I think, anywhere near what modern neural networks can do.
No, with modern machine learning, we just throw data at the network and tell it what we want it to see or pick out. We train it. And then, it just... somehow... does that? I mean, it's difficult for a human to explain to another human exactly how decisions are made when you get down to it. The answer to "Why are you sure that's a car?" can get pretty involved pretty quickly.
Instead, now we're at the stage where we can throw a bunch of images at a network, tell it which ones have cars in them, and then *magic happens* and we suddenly get a thing that can recognize cars. Or, if you're Google, "optimize the heating and cooling of a data center", because as far as the network's concerned, recognizing a car is pretty much the same thing as toggling outputs.
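Here's roughly what "throw data at it" looks like at toy scale: a logistic regression trained by gradient descent, in NumPy. The data is synthetic and a real image network is vastly bigger, but the shape of the thing is the same - nowhere below do we write down the rule for what makes something a "car".

    # A minimal "train it on labelled examples" sketch, using NumPy.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))              # 200 toy "images", 2 features each
    y = (X[:, 0] + X[:, 1] > 0).astype(float)  # the hidden rule we never tell it

    w, b = np.zeros(2), 0.0
    for _ in range(1000):
        p = 1 / (1 + np.exp(-(X @ w + b)))     # predicted probability of "car"
        w -= 0.5 * (X.T @ (p - y)) / len(y)    # gradient step on the log loss
        b -= 0.5 * (p - y).mean()

    print(w, b)  # the learned "process" lives here, as two weights and a bias

We supplied examples and answers; the decision rule came out the other end as weights.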
If my intuition's right, this means that the promise of machine learning is something like this: for any *process* you can think of where there are a bunch of rules and humans make *decisions*, substitute a machine learning API. It means that I now think - I think? - that machine learning doesn't necessarily threaten jobs like "write a contract between two parties that accomplishes x, y and z" but instead threatens jobs where management people make decisions.
As I understand it, one of the AI-is-coming-to-steal-all-our-jobs arguments is that automation happened and it's not a good idea to be paid to do something that a robot could do. Or, it's not a great long-term plan for your career to be based on something that any other sack of thinking meat could do, like "drive a car". But! In our drive to do more faster, we've tried (cf United Airlines) to systematize how we do things, which relegates a whole bunch of *other* people into "sack of meat that doesn't even really need to think". If most of our automation right now is about automation of *information* but still involves a human in the loop, then all those humans might just be ready for replacement by a neural network.
These unreasonably effective networks are the opposite of how we do things right now. Right now, we think about the outcome and then we try to come up with a process that a bunch of thinking sacks of meat can follow, because we still think that a human needs to be involved in the loop and because those sacks of meat do still have something to do with making a decision.
But the neural networks work the other way around: we tell them the outcome and then they say "forget about the process!" There doesn't need to be one. The process is *inside* the network, encoded in the weights of connections between neurons. It's a unit that can be cloned, repeated and so on - a unit that just *does* the job of "should this insurance claim be approved".
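A hedged sketch of what that unit looks like once training is done - the weights and claim features below are placeholders, not a real model, but the point is how little is left:

    # Pretend these numbers fell out of a training run like the one
    # sketched earlier; they're placeholders, not a real approvals model.
    import numpy as np

    w = np.array([0.8, -1.2, 0.3])
    b = -0.1

    def should_approve(claim_features):
        """The entire approvals 'process', reduced to one dot product."""
        p = 1 / (1 + np.exp(-(np.dot(claim_features, w) + b)))
        return p > 0.5

    print(should_approve(np.array([1.0, 0.2, 0.5])))  # True - no committee involved

Cloning the process is copying an array of numbers.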
If we don't have to worry about process anymore, then that lets us concentrate on the outcome. Does this mean that the promise of machine learning is that, with sufficient data, all we have to do is tell it what outcome we want? Look, here's a bunch of foster applications. *If* we have all of this data, then what should the decision be? Yes or no?
The corollary there - and where the Special Circumstances[4] agent comes in - is that humans might still have a role where there's not enough data to train a network. Maybe the event that we're interested in doesn't have a large enough n. Maybe it's a completely novel situation. Now I'm thinking about how a human might get *better* at being useful for such situations.
So. Outcome over process. The architecture of neural networks requires focussing on outcome, because the process is opaque: it has disappeared into the internal architecture of the network, which we have nothing to do with.
How unreasonably effective will such networks be at business processes, then?
[0] EX-99.1 (Amazon 2016 Letter to Shareholders)
[1] The Unreasonable Effectiveness of Recurrent Neural Networks
[2] ribbonfarm – experiments in refactored perception (oh, just go and read all of Ribbonfarm)
[3] A Few Notes on the Culture, by Iain M Banks
[4] Special Circumstances - Wikipedia
[5] A Life-Saving Checklist - The New Yorker
[6] Attacking Machine Learning with Adversarial Examples
--
OK. Time for bed. As always, I appreciate any and all notes from you.
Best,
Dan