Things That Have Caught My Attention

Dan Hon's Newsletter

s5e01: Fully Connected, Feed Forward

0.0 Station Ident

Last episode looked like as good a place as any to do a season break (even though it might not have been planned). So here we are at the beginning-ish of a new academic year, within spitting distance of Halloween and your usual Western pagan ceremonies.

1.0 Fully Connected, Feed Forward

This is just a dump of the things that are in my head at the moment, with a few notes. Your interest in the following may vary; I make no representation whatsoever about how useful these observations or noticings might be to you.

Mr. Codex of Slate Star Codex has written about Predictive Processing[0], a model that seeks to explain how our brains work. I’ve been interested in how our brains work for a long time, so if you’re into stuff like Steven Pinker, Blank Slateism, “yeah, but Sapir-Whorf is *kind* of true, right?” and a whole host of other things because, *if you think about it*, the fact that you’re conscious is *really really weird*, then take a look at that link. The super-high-level overview of Predictive Processing is that your brain essentially does two things: it constantly generates predictions about the world (and all of those predictions are interrelated), and it checks those predictions against your sensorimotor input at every single layer of cognition. So you’ve got things like predictions about right angles in nature (probably not) all the way up to IF there is a law enforcement officer in this area THEN certain other agents visible in the area may be more likely to act in a certain way. These predictions like to be correct (the more correct they are, the more we understand the world and can affect it), but sometimes they override our raw sensorimotor input. If you’re following, this is why optical illusions exist and work: because we have top-down processing or “priors” that kind of cook the books and alter what we perceive, contradicting “unnatural” raw input. For example: faces do not suddenly go concave, so you’re probably seeing another face, not an inside-out face.

In this way, predictive processing can be abstracted away to a maths problem: how do you generate better predictions about the world based on historical input data, and what do you do when your input data appears to suggest a problem? One way of doing this is by using a piece of maths called a Kalman filter[1].

Remember Beagle 2[2]? It was an ESA Mars probe that, disappointingly for everyone involved, failed to land properly on Mars. One of the (armchair, amateur, internet) discussions I saw about what might have happened to the probe – after an ESA investigation revealed that (I think) a parachute was released too early – suggested that the problem was with sensor fusion.

You’re not controlling Beagle 2’s lander remotely. It’s too far away. It has to decide, by itself, when to release the parachute. We cannot accurately simulate or predict ahead of time what the conditions are going to be. So, what does the lander do? It’s got all these sensors – rotation, altitude, acceleration – and uses them to figure out where it is, where it’s going and what’s going to happen next. We can use this prediction about what will happen next to figure out when the parachute needs to fire to arrest the probe’s speed on landing.

But! What happens if, say, your altimeter inaccurately reports that you’re suddenly 100 meters from the surface instead of, say, 20,000 meters from the surface? *All* of your other data is telling you that it *looks* just like you’re 20,000 meters from the surface. Do you cook the books and pretend you’re still 20km up? Do you ignore the altimeter data? As a piece of math that helps with synthesizing and increasing the accuracy of predictions, a Kalman filter would help you figure out that *if* the (broken) altimeter reading is true, *then* our prediction of the lander’s proprioception would look like x (ie: what do we *expect* to see if we’re really at 100m). If data then comes back that makes it look like it’s *more likely* that we’re at 20km than at 100m, then we can decide to throw away the altimeter data. That may affect the quality of our prediction about the world, because one of our inputs has gone screwy – but *at least we know to ignore it now*.
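
To make that concrete, here’s a minimal one-dimensional sketch in Python – entirely illustrative, and nothing to do with Beagle 2’s actual flight software: predict the altitude forward, compare the prediction with the new reading, and gate out readings that are wildly more surprising than the prediction allows.

```python
# Toy 1D Kalman-style filter for a descending lander (illustrative only).
def kalman_step(est, var, measurement, meas_var, descent_rate, dt, process_var):
    # Predict: where do we expect to be one step later, and how uncertain are we?
    predicted = est - descent_rate * dt
    predicted_var = var + process_var

    # Innovation: how far is the new altimeter reading from the prediction?
    innovation = measurement - predicted
    innovation_var = predicted_var + meas_var

    # Gate: a reading many standard deviations from the prediction is treated as broken.
    if innovation ** 2 > 9 * innovation_var:      # roughly a 3-sigma test
        return predicted, predicted_var           # ignore the altimeter this step

    # Update: blend prediction and measurement, weighted by their confidence.
    gain = predicted_var / innovation_var
    return predicted + gain * innovation, (1 - gain) * predicted_var

# Descending at ~50 m/s from 20,000 m; the altimeter suddenly claims 100 m.
est, var = 20_000.0, 100.0
est, var = kalman_step(est, var, measurement=100.0, meas_var=400.0,
                       descent_rate=50.0, dt=1.0, process_var=25.0)
print(est)  # stays near 19,950 m – the implausible reading gets gated out
```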

Back to predictive processing. The theory is that your brain acts like everything is normal when your higher-level prediction (this is a policeman) agrees with your lower-level data (it turns out that you can object-recognize an insignia, or something, or you can see a stick or a holster). When that doesn’t happen (e.g. you predict a policeman because there are boots and the right uniform and a hat and a radio in the right place, but instead of a gun in a holster there’s a plush blue Elmo), then that layer freaks out and things fire in a state of surprisal, which (as Mr. Codex notes) is an excellent neuroscience term for what happens when, er, a neuron fires because something happens that does not usually happen (ie: it is surprised).
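
Surprisal has a tidy information-theoretic reading, too: the less probable an event, the more surprising it is, measured in bits. That framing is my gloss rather than anything from the review, but a toy version looks like this:

```python
import math

def surprisal(p):
    # Information-theoretic surprisal in bits: the rarer the event, the bigger the number.
    return -math.log2(p)

print(surprisal(0.5))     # 1.0 bit  – a fair coin flip
print(surprisal(0.0001))  # ~13 bits – a plush blue Elmo where the gun should be
```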

Now back to mental illness and psychology and cognitive behavioral therapies. One of the trendy ideas right now that’s being taught to patients is that you don’t really have any control over your first set of thoughts. Your brain’s job is to just generate new thoughts. And I’d go further than that: one of the ways that it generates new thoughts is just by noticing associations. Once those associations get high-level enough, they’re less associations and more like “Pizza”, which very quickly might turn into an “I want pizza”.

The idea is your brain does this anyway. We know priming works. In a way, your brain *should* work this way. Stimulus comes in and your brain *reacts* to it. It *reacts* to stimulus by firing (anywhere from heavily to not-at-all) on anything, uh, related to the stimulus.

Thoughts aren’t real. They are just reactions to stimulus.

Now, a quick disclaimer. Over 15 years after first being introduced to it, I’m tentatively opening a copy of The Artist’s Way, which I’m reliably informed *can be* read as totally cult-like – something you might just want to ignore, because it turns out you can, and focus instead on the results (which are intended to be an unblocking of creativity).

… and I would say that maybe creativity can also be expressed as concepts that are novel. That result in surprisal. That do not fit into the probability distribution of regular, expected events.

Julia Cameron’s position (she’s the author of The Artist’s Way) is that creativity is always there, *and* she’s assembled a bunch of quotes supporting the idea that creativity comes from a higher power (ie: ‘the music of Madam Butterfly was dictated to me by God’). Let’s just ignore that (she will accept ‘flow’), but my theory at this point is that creative concepts genuinely feel like they come from *somewhere else* because they’re not consciously summoned. Subconsciously, creative concepts can be ones generated by your brain *because that’s what your brain’s job is – to react to stimuli and offer up associations* – but we become blocked when, at an extreme, an openness to absurd creativity is disappeared because it doesn’t predict the world. Creativity generates surprisal, and you have to want to be OK with that.

This might be a roundabout way of explaining the folklore that children are preternaturally creative and that we beat it out of them: because we teach them to discount and ignore the surprisal value of dissonant thoughts and concepts.

That’s enough of that for now, anyway.

In completely the other direction, a bunch of random thoughts (ha, did you not read the above?)

* OAuth is your future: a collection of things I designed that didn’t exist in 2012, but probably exist now, albeit for completely different reasons [3]. These are mocked-up screenshots showing what an OAuth permissions interface might look like for potentially useful (or weakly dystopic) services. They range from US DHS asking for Foursquare history to help with a customs form (2017 called, HA, it says); the London Met Police asking to update your Twitter account (I can’t remember why?); Cigna asking for permission to connect to your Foursquare account to get better information on your habits (and a reminder that you might not have a choice, due to your employer) (2017: HA again); and UK HMRC (the tax authority) asking for access to LinkedIn to better fill in your employment history on your tax return.

A few thoughts: OAuth presupposes that someone would want to *actively* ask for permission and be explicit about it, rather than the (now-current) dark pattern of hiding the fact that the data is exfiltrated behind a click-through acceptance of the TOS. Mobile apps became a Trojan horse for personal data (and TOS): you don’t need to ask for permission if it’s in the TOS, so just build something that gives you access to personal data (hi, Android and iOS app permissions), and then treat it like it’s your (corporate) own.
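
For reference, the explicit asking is not much more than a URL: a standard OAuth 2.0 authorization request where the scopes spell out what’s being handed over. Every identifier below (client ID, endpoints, scope names) is invented for illustration – none of these integrations exist.

```python
# Sketch of an explicit OAuth 2.0 authorization request: the user is sent to the
# provider with the requested scopes spelled out. All names and URLs here are
# hypothetical – there is no real DHS/Foursquare integration.
from urllib.parse import urlencode

params = {
    "response_type": "code",
    "client_id": "dhs-customs-declaration",                    # hypothetical client
    "redirect_uri": "https://customs.example.gov/oauth/callback",
    "scope": "checkins:read venue_history:read",               # hypothetical scopes
    "state": "random-anti-csrf-token",
}
print("https://foursquare.example/oauth2/authenticate?" + urlencode(params))
```

The point being: the scope line *is* the request for permission, sitting right where a user (or a regulator) can read it.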

The other point, of course, is that while it would be *efficient* for governments to programmatically import third-party data into their own systems, they don’t; they just ask for your username and password when you’re waiting to re-enter the country. Or they suborn Google’s network. You know. Apparently those two things are easier to do. (Ha, no, they’re not *easier* to do, but the (externalized and internal) costs of doing so may be higher than doing it the other way).

* Equifax is a complete tire fire, to the extent that there are now think-pieces belatedly realising that the entire US credit report system and infrastructure might not be fit for purpose. This ignores the fact that it is “good enough” for now because, patently, *nothing bad enough has happened yet* to force the system to be replaced. The systemic cost of replacement is too high to bear (too big to fail) for external parties, and legacy/entrenched interests are never excited about having to do any more work than necessary, or about having their rent-providing positions in society removed.

OK, so: at what point does a small autonomous group successfully execute a sort of Fight Club maneuver? The Hollywood movie-plot version of this is setting off an EMP in financial districts, but I think, given what we’ve seen of the *legacy technology that underpins a lot of western society*, this strategy is super difficult to pull off and super high-risk, right? I mean, a) you have to find or build one and then b) physically deliver it. You might get caught! Why not just find a zero day in a piece of commercial infrastructure? I mean, it probably doesn’t even have to be a zero day! Equifax’s PINs for managing freezes of credit accounts are reportedly derived from a month-day-hour-minute *timestamp*, and have been for over ten years.
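
Just to underline how weak that is, here’s the back-of-the-envelope arithmetic. The timestamp detail is from press reports; the sums are mine.

```python
# If a freeze PIN is really just a MMDDhhmm timestamp, the keyspace is tiny
# compared with a random 8-digit PIN. (Reported behaviour; arithmetic is mine.)
keyspace = 12 * 31 * 24 * 60      # months x days x hours x minutes, generously
print(keyspace)                   # 535,680 possible PINs
print(f"{keyspace / 10**8:.2%}")  # ~0.54% of the 100,000,000 random 8-digit PINs
```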

So if you don’t have to do that, how long until someone breaks? And it doesn’t have to be a lot of people, it can be a *small* number of people. You don’t even need to think about a Person of Interest-style plot about releasing a “virus” to “delete” data. All you have to do is reduce confidence or increase churn, right?

So a few questions: at what point are things so bad that sufficiently motivated individuals will act? What are the precipitating events or conditions for a pseudo Boston Tea Party event “but for personal data”?

It would be *more* difficult for this to happen to a digital-native company (e.g. Facebook, Google, Amazon), but I think orders of magnitude *easier* for it to happen to a legacy, entrenched company. Like Equifax or Experian. Maybe even Tesco or Walmart. That is, a legacy company that has critical data but isn’t smart enough about exposing it securely. Like what may have happened with Equifax and Apache Struts.

* A workshop about algorithms in society and the myriad issues involved has put out a call for papers[4]. I mean, I guess it’s better to think about this at all, but it’s somewhat a case of the horse having bolted from the barn and you now living in a world ruled by horses. I hope you like serving your horse masters.

* I think my default position on UBI now is that it isn’t an easier/better way to do social services, and that it’s nothing more than an easier/better way to do economic stimulus. I am far too worried that without concentrating on the quality (availability, etc.) of the social services that may be purchased with said universal basic income, you’re just jerking around, pretending to solve one problem *without actually making things better*. (Note: things might be a bit better, but the presumption seems to be that *removing all government-provided social services and replacing them with just $$$* results in better outcomes, and I’m like: supplied by whom?)

* I saw a good piece (can’t find the link, sorry) whose basic position was a) “Well yes, *in the long run* mechanization, automation and industrialisation improved things for people” but also b) “We did have a couple of world wars, hundreds of millions of people died, etc. etc.” The argument that “we always find new jobs” is one that works in the abstract but not in the personal (ie: telling me there *will* be a job in the future for me to do when automation takes away my current job is not great, because it implies there will be a gap). (There is always a gap.) Unless you explicitly design for no gap. I don’t see anyone designing for no gap.

* An encouraging – maybe – piece in the New York Times about a former warehouse worker who used to do boring, repetitive, physical work that is now done by something that is (more suited?) to doing that work; i.e. a robot [5]. This is not your usual story because (surprisal!) the former warehouse worker who used to do physical labour is now, like, a robot manager? She is slowly moving up beyond the API layer. I mean, this kind of makes sense? Someone who knows what the robot needs to do can now, kind of, look after what the robot should do when the robot throws an exception. Ms. Scott, former physical labourer, now “robot manager”, is happy to provide an economics-reporting money quote by saying “For me, it’s the most mentally challenging thing we have here. It’s not repetitive.” This is good, right? Now we just need a few hundred million, if not billions, of these.

[0] Book Review: Surfing Uncertainty | Slate Star Codex
[1] Kalman filter – Wikipedia
[2] Beagle 2 – Wikipedia
[3] OAuth is your future | Flickr
[4] Workshop on Trustworthy Algorithmic Decision-Making
[5] As Amazon Pushes Forward With Robots, Workers Find New Roles – The New York Times

OK. That was some writing. See you for the next episode. Maybe Netflix will pick up a full season this time or something, or we won’t have all the production difficulties that plagued last season.

Dan

s4e14: The City Was Connected 

0.0 Station Ident

I am, as they say, “between things” at the moment, which is its own set of opportunities and, uh, the opposite of what an opportunity is.

Because I am between things and because I like to type things and start things that may or may not be finished or concluded in any way whatsoever, I’ll make this one simple because I’m dealing with a bunch of head stuff. Here’s something that I prepared earlier:

1.0 The City Was Connected

Everything was connected, and I was fucked.

I was late paying the water bill, so the parking meter refused service until I coughed up.

The meter said I had 30 seconds to pay the water bill until I had to move my car, and… I just froze. Then the meter attendant came. She said she was just doing her job as she booted my car, then looked down at her phone. Reminded me I hadn’t taken out my recycling.

This wasn’t turning out to be a good day.

She told me I was on my second strike: one more, and I’d lose streetlight privileges. I’d heard about that: a social shaming punishment. Streetlights would create a cone of darkness around just you. It sounded horrible. I shrugged. I didn’t care anymore. Of course, the next level was suspension of physical mail delivery and a karma dock on Nextdoor.

Fuck Nextdoor. Everything had gone to shit when they’d come in as the ‘social fabric platform’ when IBM connected the city. It’s not like the streetlights worked, anyway. In theory the full-spectrum, full-color LEDs were super smart, but they were IoT-dumb.

Some joker had leaked another cache of NSA zero-days for Windows Embedded last month and the lights had been useless ever since. At least the kids at the high school were having fun. The lights outside Elspeth High kept flashing ‘Mr. Franklin Is A Kiddy Fiddler’. There was no chance the school admins or police could figure out who did it and besides, they had other problems. Not least of which: I’d heard that the receipt printers for school dinners were drawing a cock on each receipt now, with Franklin’s address.

Anyway, using streetlights to create a cone of darkness for social shaming if you hadn’t paid a water bill? Which idiot thought of that?

I didn’t have it as bad as the single parents, though. An irritation for me, a complicated connected hell for single mothers.

Everything was connected, so if you had a fidgety kid at school that day, you got an automatic fractional WIC deduction as parenting punishment. We had DeVos to thank for that one.

It was all easily fixed, though.

Send some bitcoins to the right address and someone – probably in China – would fix everything for you. Only for 24 hours, mind. Not a good business model, otherwise. Same as it ever was: just figure out who to pay to grease the tubes. In the meantime there was easy if boring work if you wanted it. All those smartlights needed a hard reboot thanks to a daily buffer overflow. Essentially you’d get paid to take a walk and say hello to all the lampposts. A morning connected constitutional.

I felt sorry for the kids as I walked past the park. All the play structures were smart now, promising to add NikeFuel to each kid’s score. They’d single out the pudgier kids and hawk NikeFuel points to them, blinking their names in a discouraging display. NIKE FUEL BONUS FOR JASON. They got hacked too.

“JASON ATE ALL THE PIES” it would flash, later.

Pity the single dad who took his kids to the park on an access day though. If he hadn’t paid his support, all the play structures locked up:

“PLAYGROUND DISABLED. SUPPORT YOUR FAMILY, BRIAN. $632 OUTSTANDING. PAY WITH  PAY OR SQUARE NOW.”

The other parents and kids scowled at Brian. He just turned around and took his kids with him. Ten seconds after they left, the playground unlocked.

So yeah, our connected city’s great. We’ve never been happier.

If you’d like more, the internet-short-fiction-stuff will continue whenever my brain chemistry allows at http://tinyletter.com/umbra where the tiny fanbase for the other thing I wrote can slowly accumulate like operations on a useless blockchain.

Cheers until something has caught my attention,

Dan

s4e13: A future of working 

0.0 Station Ident

12:40pm on Friday during a lunch break, in the middle of doing some sort of writing. This particular set of writing – typing for coins, as some of my friends have once described it – has as its goal changing the way a group of people do things from one way (unsurprisingly, given my current work, that way is: traditional, waterfall, planning-driven software development) to another way (iterative, multi-disciplinary digital service teams).

I am under no illusion that there is a magic spell that can be typed out and, when it’s read by someone else, change the way that they do things. That comes with practice. But it’s an interesting exercise because it also (more or less) follows on from what I was last writing about. That traditional method of software development is plan-and-process heavy. The assumption is that casting a sort of documentation spell – writing down *how things should be done* – is, in the best case, the most reliable way to assure a certain outcome and, in the worst case, the minimally viable way to ensure that outcome in most cases.

I suppose one way of looking at agile/iterative versus waterfall is as the failure of the industrialization of software development. The invention of the organization and the development of process made *some things* repeatable, but came with the realization that “figuring out the right software to build and then building it” couldn’t be industrialized to the extent that the people implementing the process could do so blindly.

In other words, and to paraphrase Teen Talk Barbie, software is hard and you can’t produce good product by rote. Plans being followed produce the illusion of decisions having been made through consideration and comprehension, but the output needs to be validated. And these types of plans don’t really tell you *how* to make the decisions; they merely say: a decision should be made here.

Or, the plans remove any ability for consideration of context. Software is too hard, too difficult, too… multi-variate, for a recipe to be followed.

This is less a station ident and more an entire newsletter on its own, when I had designs on writing something else. So, I’ll hold this, and go write The Other Thing.

1.0 A future of working

I have a New Yorker subscription because they had an offer on for some ridiculous amount that I think was actually losing them money (seriously, it was something on the order of less than $10 for three months’ worth of issues). I decided to take it because this year I’m spending more money on, as they say, Supporting Quality Journalism, and after getting my Washington Post and New York Times subscriptions, I figured the very least I could do was to throw some money toward the organization behind the best LinkedIn Cartoon Ever[0].

Anyway. I have this subscription and now bits of the New Yorker arrive in my house, a bit like how the New Yorker currently arrives at my house (that is to say, one method is: through the air, chopped up into bits and reconstituted onto an LCD matrix; the second method is: through a small slot, tightly bound together, and printed at a very high resolution). In both cases, the New Yorker arrives, eventually, in the bathroom, where many things are read or, in the New Yorker’s case, just piled up near the toilet to make it look like our household is populated by respectable, smart, well-informed middle-class adults with a high-fiber diet.

It was in the bathroom that I ended up starting to read one of the stories in the magazine’s Innovator’s Issue (a title that already causes some sort of eye-roll) and in particular, Nathan Heller’s piece, “Is The Gig Economy Working?”[1]

Heller’s piece was, for me at least, something that I felt I could skip over because it followed a familiar format. Here’s someone who’s not really involved in the Gig Economy. But, they are involved in the Gig Economy because they can buy services through it! By using services provided by the Gig Economy (in this case, by someone who comes and hangs some pictures and does other odd DIY jobs in Heller’s apartment), Heller muses about the nature of The Gig Economy. It felt like about 75% (a scientific number, I’ll have you know) was Tropes About The Gig Economy:

– The work isn’t particularly fulfilling, but can be profitable for some people
– Some companies don’t treat their employees (who aren’t employees, they’re contract workers) very well
– TaskRabbit
– Uber
– Young People, Specifically, Millennials
– Airbnb
– Airbnb management companies

Once we’d gotten through all of that, then, I thought, the interesting part. If the social safety net is broken, then what do we need to replace it? Is it time to get rid of the distinction (in America, at least) between 1099 contractors and W-2 employees and the benefits and rights that go to one category (healthcare, vacation, certain job protections and the prospect of a pension) and not to the other? Heller gets to talk to Tom Perez, who’s the current chairman of the DNC, and the two go off to visit Hello Alfred which – excuse me – is literally an affordable, democratized home help butler service, whose innovation is that workers are treated like employees, and are W-2 employees.

This has been a long recap to get to, I feel, the interesting part.

Much of what has been written, it seems, about the way work is changing has been about the gigging part. The schedule-free work that can be squeezed either into gaps around existing employment (the Lyft drivers I meet who’re working two jobs) or that fills gaps encountered by boomers who’ve cashed out, have their final salary pensions and mortgages paid off and are literally working to stave off boredom, a sort of mental health wellness prescription that also happens to earn them money.

How interesting for the gig economy to cater both to the bottom of Maslow’s hierarchy, as a way for those struggling to exchange time for the cash needed to afford rent and food, and also to the middle, meeting the psychological need for contact with other people, any people, instead of living out days alone with the television.

That’s the gig economy. There’s a whole other part of how work is changing that doesn’t get covered as much, and a friend reminded me of it over breakfast and that piece of conversation has stuck with a bunch of other tiny little data points and now they’re feeling like they’re blaring at me a little bit.

Patreon pulls together a bunch of threads, theories and assumptions that internet old timers had been thinking about, puts them into practice and now, four years old, appears to be unrepentantly and aggressively validating them with cold hard recurring cash.

The shape that Patreon takes in my head is something of a mish-mash of 1,000 True Fans[2], universal basic income[3], moat-building, genuine platform creation and the introduction of a (for now) benevolent dictator.

Forget the transaction-based “I need a task done and a human body to do it” work that is furthest toward the “first against the automation wall when the robots come, gradually and then quickly” end; Patreon covers what (naively?) feels like intrinsically human creative work: it’s a way for people who create culture that’s valuable to other people to be consistently, regularly supported by their audience.

Patreon is the way that people can monetize their 1,000 true fans, not on a one-off transaction basis (who could possibly live that way?) but on a recurring monthly subscription basis. Or, in other words, ask their 1,000 true fans for a monthly salary.

The numbers, I’ve heard, go only up and to the right. They may not be going quickly, but they do so steadily.

From Patreon’s point of view, they’ve got a pretty interesting product/platform: if you’ve got an audience, you can use Patreon to get paid by it. Patreon can sit in the middle – they’re doing valuable work; it’s difficult to collect recurring revenue on your own, so the infrastructure is helpful – and they’ve got a significant moat. The billing relationship is between subscriber and Patreon and (full disclosure, I haven’t actually looked into this) the switching costs in financial billing relationships are high! If, say, I’m a successful creator on Patreon who’s pulling down a few thousand dollars a month and a competitor comes along who’ll offer a better rate, or services that might suit me better, how easy will it be for me to persuade all my patrons to switch to the new provider? It’s hard enough getting people from one mailing list to another these days.

Making it easy for creators to move from one Patreon-like service to another would be the ethical thing to do, but it wouldn’t be good business sense. You can hear the drip of VC saliva at the gate: what better platform lock-in incentive than *your monthly paycheck*?

There’s a lot of good that Patreon could do too, though. At some point, they start to feel more like a quasi-employer or staffing agency, albeit one who provides a staff to hundreds or thousands of customers at once. For those high-performing creators, Patreon could offer benefits like health insurance. My naive understanding of the market is that they could even do so and make money: present Patreon creators as a pool, offer them to Hiscox as people needing commercial insurance, say, and the customer acquisition fee that Hiscox or whomever would pay out could go any which way: to Patreon completely, back to the creator, split, or used to reduce premiums.

I’m not completely certain that the divide between what’s traditionally thought of as the gig economy (TaskRabbit, Uber, Lyft, all of the failed home cleaning services) and Patreon is necessarily just one of “stuff that can’t be automated easily”, but it feels like a useful, correct-more-often-than-not heuristic.

Perhaps a better distinguishing feature is that the gig economy is tasks that are commodities and can be competed on in a marketplace. Patreon – and the non-Patreon-type work that we’re seeing, like the analysts and consultants who go independent and start charging direct subscriptions for their daily or weekly analysis – is different: what’s being bought here isn’t strictly substitutable, in my dim remembering of competition law. For creative or intellectual output work, the value for certain consumers is in *Ben Thompson’s* analysis, not getting analysis in general. In comics, it’s getting *Lucy Bellwood’s* creative output, not “a series of pictures with text commentary that tell a story”.

In other words, services like Patreon allow for the possibility of people getting value for what makes them *them* – something that no-one else can copy or reproduce*.

* I say, “no-one else can”, because the obvious counter-example is a) made-to-order art reproductions from a photograph in the style of your favourite painter from China and b) a recurrent neural network doing exactly the same thing.

This is perhaps one of the reasons why I tried to train an RNN on my entire newsletter corpus, to see if it could do what I do. To my relief, the answer is that it (hopefully) makes less sense and is less useful than I am.
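
For the curious, here’s a minimal sketch of the kind of experiment I mean – mine, not the actual one: train a character-level LSTM on a plain-text file of back issues (the filename is made up) and then sample from it. Assumes PyTorch.

```python
import torch
import torch.nn as nn

text = open("newsletters.txt").read()          # hypothetical corpus file
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
seq_len, batch = 128, 32

for step in range(1000):
    starts = torch.randint(0, len(data) - seq_len - 1, (batch,)).tolist()
    x = torch.stack([data[s:s + seq_len] for s in starts])
    y = torch.stack([data[s + 1:s + seq_len + 1] for s in starts])
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling: feed the model its own output and see whether it sounds like the author.
idx, state, out = data[:1].unsqueeze(0), None, []
for _ in range(500):
    logits, state = model(idx, state)
    idx = torch.multinomial(logits[:, -1].softmax(dim=-1), 1)
    out.append(chars[idx.item()])
print("".join(out))
```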

I made a reference to this a while ago on Twitter: for the people who value Mary Meeker’s internet trend reports, there’s nothing *in principle* (other than violating copyright) stopping you from putting together a corpus of Meeker’s trend presentations, training a model on it and, well, having it try to make more output presentations that are Sufficiently Advanced As To Be Indistinguishable From Meeker.

What someone might say at this point is that, sure, but that doesn’t take into account how you’d give that deep learning system access to all of the data that Meeker uses to generate her trend reports, and the answer to that is: why, is that harder than all the shit we throw at a self-driving car? Can Meeker even really explain how she comes to her conclusions to a sufficient degree of acceptance?

Maybe Meeker’s too hard, maybe we could just start off with neural-net-generated xkcd comics.

At that point, I’d like to formally propose that we set up a deep learning system that is a) trained on the corpus of Randall Munroe’s output, b) have it generate output, c) have that output be scored by community feedback as reinforcement learning and, crucially, d) set up a Patreon account for that neural network so it needs to get Patrons who will reward it with money so it can buy more GPU capacity on Amazon to learn how to make better comics.

Anyway, I digress.

Forget a gigging future of work. That’s boring. Patreon is interesting because it’s a future of work that appears to allow enough space and flexibility for a *human* future of work.

[0] A New Universal ‘New Yorker’ Cartoon Caption: ‘I’d like to add you to my professional network on LinkedIn.’ – The Atlantic
[1] Is the Gig Economy Working? – The New Yorker
[2] The Technium: 1,000 True Fans

OK. Back to writing documentation. It was nice to be back, however briefly. And as always, send notes, even if it’s just to say hi. If it’s to say “Hello, I’m interested in your ideas and would like to subscribe to your newsletter” even *that* is okay. If it’s to say “Your ideas suck, and this thought in particular sucks because [citation needed]” then I suppose that’s fair enough, but maybe think of a more compassionate way to say it? I’m not one of those deep-learning systems that doesn’t have a system for generating plausible emotional reactions.

Best,

Dan

s4e12: End of Process 

0.0 Station ident

Thursday, 13 April 2017. Today is the day that I formally discovered the composer Max Richter, whose work I’d encountered in bits and pieces (the first Netflix season of Black Mirror, parts of Arrival) but it wasn’t until I was out having tea this morning that his album Infra hit me with all the subtlety of… well, today’s not the day to be talking about large things happening.

1.0 End of Process

This is the story of joining the dots from a CEO’s letter to shareholders[0] to machine learning[1] to human organizations and management[2] and process to automation to a post-scarcity universe where humans still have an important job to do because they’re Culture[3] Special Circumstances[4] agents.

So. Jeff Bezos writes a letter to shareholders and it’s very good and interesting[0]. It is interesting to me for two things: the first is that Bezos is saying he’s terrified that one day Amazon might stop being focussed on outcomes and instead become focussed on processes. Bezos says that one should

Resist Proxies
As companies get larger and more complex, there’s a tendency to manage to proxies. This comes in many shapes and sizes, and it’s dangerous, subtle, and very Day 2.

A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp. It’s not that rare to hear a junior leader defend a bad outcome with something like, “Well, we followed the process.” A more experienced leader will use it as an opportunity to investigate and improve the process. The process is not the thing. It’s always worth asking, do we own the process or does the process own us? In a Day 2 company, you might find it’s the second.

“The process is not the thing”. I would write that down and make giant posters of it and scream it from rooftops because so many times, in so many places, the process is not the thing. For example, when I think about things like governments requiring vendors to provide customer references and certificates (e.g. Project Management Professional certification or Agile certification), my first impolitic instinct is that this is a crock-of-shit process way of assuring a desired outcome. What I mean is: presumably the desired outcome is “how do we prevent projects from being a flaming tire fire of a disaster”? Well, one way of doing that is by requiring people with experience. How do we know if people have experience? Well, one way is seeing if they’ve passed a test. What kind of a test? Well, one that results in credentialing people as Certified Systems Engineers, for example.

Another bad example of this would be one where you, say, have a third party who’s providing oversight of a project and trying to make sure that the project is, er, “less risky”. *One* way of doing this would be to say that the project staff should have a list of risks (let’s call it a register-of-risks) and that if it exists, then that means the project staff have thought of risks and therefore, the project is “less risky” than if the register did not exist. The process for this might involve someone looking for the existence of a risk register, and if they saw one, they would say “oh good, they’re mitigating risks”, and if they didn’t, they’d say “haul this lot in front of whichever committee and publicly embarrass them”.

The process – going through the motions to see if there’s a risk register – *does not actually decrease or identify any risks*.

(Of course, it’s more complicated than that. Checklists[5] can help get things right – if the right things are on the checklist. Just having a checklist doesn’t assure the desired outcome, and checklists also don’t help you figure out what you don’t know.)

My suspicion – especially if you’ve been following along – is that processes are alluring for a number of reasons. They relieve the burden of decision-making, which in general we like because thinking is hard and expends energy. They also relieve us of the burden of fault: us humans are a fickle bunch and it’s easy for something to make us feel guilty (when we haven’t lived up to our own standards), or ashamed (when we don’t live up to others’ standards) or angry (when we’re stopped from achieving an outcome that we desire). The existence of process can be an emotional shield that means that we don’t have to be responsible. In Bezos’ example above, the junior leader who defends a bad outcome with “Well, we followed the process” is also someone who is able to defend against an attack on their character, and on their sense of self-worth, if that sense of self-worth is weighted to include the outcome of their actions.

Processes – rules – help us do more, more quickly. They mean that we can think about the outcome, design a set of processes and the processes promise that we can walk away, secure in the knowledge that the process will deliver the outcome. The world, naturally, doesn’t work like that, which is why Bezos wants to remind us that we should revisit the outcome every now and then.

Who wouldn’t want to make fewer decisions? Businesses attempt to automate their processes – a series of decisions to achieve a particular outcome – and the promise of computers was that at some point, they would make decisions for us. They don’t, at least not really. There’s a lot of work that goes into figuring out what business processes are and then, these days, trying to translate them into some sort of Business Rules Engine, which is exactly the kind of software that I thought I’d be terrified about. Business Rules Engines appear to be a sort of holy grail of enterprise software where anyone can just point-and-click to create an, um, “rule” about how the business should work, and then Decisions Get Made Automatically From Thereon In.

(My suspicion is that they’re not all they’re cracked up to be, but I suppose the promise was that a business rules engine would be *somewhat* better than hard-coding “the stuff that the business does” into, well, code. So now there’s an engine, and you have another language in which you write your rules that maybe non-software people can understand, but let’s just skip to the end and say that ha ha ha, things are complicated and you wish they were that easy.)
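
If you’ve never seen one, the core idea of a rules engine is just this, stripped of the enterprise packaging: the rules live as data that a generic engine evaluates, instead of being baked into the code. This is a toy illustration of mine; real products are vastly more elaborate.

```python
# Toy "business rules engine": the rules are data, the engine is generic.
# (Entirely illustrative – not modelled on any real product.)
RULES = [
    {"if": lambda claim: claim["amount"] > 10_000, "then": "refer_to_human"},
    {"if": lambda claim: claim["policy_active"],   "then": "approve"},
    {"if": lambda claim: True,                     "then": "deny"},
]

def decide(claim):
    # First matching rule wins.
    for rule in RULES:
        if rule["if"](claim):
            return rule["then"]

print(decide({"amount": 120,    "policy_active": True}))  # approve
print(decide({"amount": 50_000, "policy_active": True}))  # refer_to_human
```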

But here’s the second thing, where Bezos talks about machine learning:

Over the past decades computers have broadly automated tasks that programmers could describe with clear rules and algorithms. Modern machine learning techniques now allow us to do the same for tasks where describing the precise rules is much harder.

Machine learning techniques – most recently and commonly, neural networks[1] – are getting pretty unreasonably good at achieving outcomes opaquely. In that: we really wouldn’t know where to start in terms of prescribing and describing the precise rules that would allow you to distinguish a cat from a dog. But it turns out that neural networks are unreasonably effective at doing these kinds of things (unless you seed their input images with noise that’s invisible to a human, for example[6]). Expert systems – which are (superficially) the equivalent of a bunch of if/then/else rules that humans would have to sit down and describe – never got anywhere near, I think, what modern neural networks can do.

No, with modern machine learning, we just throw data at the network and tell it what we want it to see or pick out. We train it. And then, it just… somehow… does that? I mean, it’s difficult for a human to explain to another human exactly how decisions are made when you get down to it. The answer to “Why are you sure that’s a car?” can get pretty involved pretty quickly.

Instead, now we’re at the stage where we can throw a bunch of images at a network, along with a bunch of images of cars, and then *magic happens* and we suddenly get a thing that can recognize cars. Or, if you’re Google, that can “optimize the heating and cooling of a data center”, because a car’s pretty much the same thing as toggling outputs.
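
This is the whole trade in one toy sketch (scikit-learn on a built-in dataset of handwritten digits; nothing to do with Google’s data centers): no hand-written rules for what an 8 looks like, just labelled examples and an opaque model that ends up making the decision.

```python
# Decision-by-labelled-examples instead of decision-by-rules (toy sketch).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Nobody writes down what distinguishes a 3 from an 8; the network works it out.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # ~0.95+ accuracy, process opaque to us
```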

If my intuition’s right, this means that the promise of machine learning is something like this: for any *process* you can think of where there are a bunch of rules and humans make *decisions*, substitute a machine learning API. It means that I now think – I think? – that machine learning doesn’t necessarily threaten jobs like “write a contract between two parties that accomplishes x, y and z” but instead threatens jobs where management people make decisions.

As I understand it, one of the arguments of the AI-is-coming-to-steal-all-our-jobs crowd is that automation happened and it’s not a good idea to be paid to do something that a robot could do. Or, it’s not a great long-term plan for your career to be based on something that any other sack of thinking meat could do, like “drive a car”. But! In our drive to do more, faster, we’ve tried (cf United Airlines) to systematize how we do things, which relegates a whole bunch of *other* people into “sacks of meat that don’t even really need to think”. If most of our automation right now is about the automation of *information* but still involves a human in the loop, then all those humans might just be ready for replacement by a neural network.

These networks that are unreasonably effective are the opposite of how we do things right now – we think about outcome and then we try to come up with a process that a bunch of thinking sacks of meat can follow because we still think that a human needs to be involved in the loop and because those sacks of meat do still have something to do with making a decision.

But the neural networks work the other way around: we tell them the outcome and then they say, “forget about the process!”. There doesn’t need to be one. The process is *inside* the network, encoded in the weights of connections between neurons. It’s a unit that can be cloned, repeated and so on that just *does* the job of “should this insurance claim be approved”.

If we don’t have to worry about process anymore, then that lets us concentrate on the outcome. Does this mean that the promise of machine learning is that, with sufficient data, all we have to do is tell it what outcome we want? Look, here’s a bunch of foster applications. *If* we have all of this data, then what should the decision be? Yes or no?

The corollary there – and where the Special Circumstances[4] agent comes in – is that humans might still have a role where there’s not enough data to train a network. Maybe the event that we’re interested in doesn’t have a large enough n. Maybe it’s a completely novel situation. Now I’m thinking about how a human might get *better* at being useful for such situations.

So. Outcome over process. The architecture of neural networks means – requires, even – focussing on the outcome, because the process is opaque, disappeared into the internal architecture of the network, and we have nothing to do with it.

How unreasonably effective will such networks be at business processes, then?

[0] EX-99.1
[1] The Unreasonable Effectiveness of Recurrent Neural Networks
[2] ribbonfarm – experiments in refactored perception (oh, just go and read all of Ribbonfarm)
[3] A Few Notes on the Culture, by Iain M Banks
[4] Special Circumstances – Wikipedia
[5] A Life-Saving Checklist – The New Yorker
[6] Attacking Machine Learning with Adversarial Examples

OK. Time for bed. As always, I appreciate any and all notes from you.

Best,

Dan

s4e11: We need to talk about algorithms 

0.0 Station Ident

7:56am on Wednesday 12 April 2017, and I’ve made a mistake – I am over an hour and a half early for an appointment. This is less than ideal for reasons of family and household logistics, but I suppose I’m here and there’s not much I can do about it now. Now’s probably as good a time as any to mention that I have another newsletter where I send out the occasional affirmation[0], I imagine a good one for today would be that everyone makes mistakes.

[0] I can by Dan Hon

1.0 We need to talk about algorithms

On Sunday night, United Airlines removed a man from a flight that was overbooked. I say “overbooked” – it doesn’t sound like there were more tickets sold for the flight than there were seats, instead the airline claimed that the flight was overbooked because it needed to use four seats to transport United crew. The airline went through their usual process to find volunteers by offering compensation – vouchers that could be applied to another flight – but could not find any. In the end, four passengers were selected at random by computer[0] and when one of them refused, United staff called security, who forcibly removed the passenger, dragging him off the plane, bloodied and bleeding.

It goes without saying that there’s a lot wrong with what happened. It isn’t clear the flight was overbooked, because paying passengers were removed (“re-accommodated”, in the words of United CEO Oscar Munoz, PR Week’s Communicator of the Year) in favour of crew who needed to be transported – arguably a failure of the airline’s own scheduling and logistics (of course, this is only a failure if you believe that airlines should prioritize seating paying passengers over their own crew logistics needs). But what many people have picked up on is how something like this happened: where multiple United staff – regular human beings, who presumably do not set out in the morning to enact a chain of events that will result in assault – didn’t feel like there was anything they could do once the events were set in motion.

I wrote a tongue-in-cheek set of tweets about this[1], which went a bit like this:

United CEO To Discipline Computer Algorithm That Resulted In Passenger Removal

The CEO of United Airlines, Oscar Munoz, 57, said in a statement that the algorithm involved in Sunday’s removal of a passenger from a United flight had made a ‘bad decision’ and would be placed on administrative leave during the airline’s internal investigation.

The algorithm had selected passenger David Dao and three other passengers for what Munoz called “involuntary denial of the boarding process”. Munoz confirmed that the wide-ranging investigation would also look into previous decisions made by the algorithm.

Industry software engineers were quick to jump to the algorithm’s defense. Aldous Maximillion, an artificial intelligence researcher at internet portal Yahoo! said that “the actions of one algorithm should not tar all algorithms.”

The algorithm in question is not the only one working at the airline. In its last 10-K filing, United disclosed that over 200 algorithms are employed at the airline, making many varied decisions throughout the day. A computer programmer at United commented off the record that many algorithms at the airline had been overworked recently.

Interviewed at Chicago airport where Sunday’s incident occurred, many United staff remarked positively about their interactions with the airline’s algorithms.

Tracy S., 29, a United gate attendant who was present at the incident on Sunday, spoke fondly of the resource scheduling and passenger seat assignment algorithm that made Sunday night’s decision.

“[The algorithm] always communicates politely,” she said, gesturing at the flat screen monitor, where she explained instructions from the algorithm would appear.

Tracy said that the algorithm was much more reliable than the human scheduler it had replaced, and added that the algorithm had not made any untoward sexual advances.

When the algorithm’s instructions came through, none of the attendants thought anything of it, said Tracy. “Nobody could have predicted this,” she said.

Rival companies American Airlines, Delta and Southwest were asked to comment on their employment of algorithms, but none had commented at time of press.

Okay, so it’s funny (well, *I* think so) if you treat algorithms like people but I frequently find that I have to explain the more serious points that I’m making, albeit pretty obtusely. Which are:

First: sure, we can have a discussion about how algorithms-are-just-rules, but I think that denies the folk and common understanding that the way we use the word algorithm now is as a computer program. I mean yes: a recipe in a cookbook is an algorithm (thank you, secondary school computer teaching) but these days I think it’s safe to assume that unless we explicitly say otherwise, when someone says algorithm we *mean* “a program running on a computer beep boop”.
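
And for a sense of how banal that program probably is in this case, here’s my guess at roughly its whole content – invented for illustration, emphatically not United’s actual selection code:

```python
import random

# A guess at the entire "algorithm": pick four names from the manifest of
# passengers who have already boarded. (Illustrative only – not United's code.)
manifest = [f"passenger_{n}" for n in range(1, 71)]   # a full regional jet
denied_boarding = random.sample(manifest, k=4)
print(denied_boarding)
```

The moral weight isn’t in those few lines; it’s in the humans who decided to hand the decision to them.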

Second is the view pointed out by Josh Centers that United is a good example of how dehumanized we’ve let computers make us[2]. My reaction at the time was pretty hot-headed (emotions and tempers run high on birdsite, recently), and I said: “Bullshit. It’s the perfect example of how dehumanized we’ve chosen to be. Computers have nothing to do with it.”[3] – to which I owe Josh an apology, because computers have *something* to do with it, and it’s, well, never a good idea to say never.

On preview, Josh is right: he says that it’s an example of how dehumanized we’ve *let* computers make us become, and the use of the word ‘let’ implies that we still had a choice. To use a popular analogy, we’ve outsourced deciding things, and computers – through their ability to diligently enact policy, rules and procedures (surprise! algorithms!) give us a get out of jail free card that we’re all too happy to employ.

In one reading, my reaction is to deny the environment that we make decisions in, partly because *we* created the environment. None of these rules come into existence ab initio, the algorithm that randomly selected passengers for “involuntary denial of the boarding process” (which in any other year would be a candidate for worst euphemism, but here we are in 2017) was conceived of and put into use *by a human who made a choice*.

At the end of the day, we’re the ones who have the final decision, *but* it’s only compassionate (and realistic) to recognize the environment that leads to the decisions that we make.

A brief detour into cognitive psychology and (surprise) my personal experience with mental health: it is hard to make decisions. Most of the time we don’t. Most of the time we’re just reacting and we’re not, well, mindfully considering what it is that we want. The argument that we’re fully in control of all of our actions at all times is turning out to be either not true, or not as true as we had thought, and in any case, unhelpful. In our best moments of clarity, in our best moments of being present, we’re able to thoughtfully consider and slow down. Most of the time we’re operating on Kahneman’s System 1 of evolutionarily acquired heuristics and rough rules that mostly work. It’s only with deliberate effort that we can shift over to System 2.

My understanding – my folk, pop-sci reading version of this – is that System 2 just requires more energy, which is another way of saying that biological systems are inherently lazy and why bother spending *more* energy to do something if you could spend *less* energy doing something. If your mostly automatic, intuitive, non-conscious System 1 can do something more quickly and get it roughly right, then why not use it most of the time? It’s *hard work* to think.

And, to Centers’ point, we’ve offloaded our thinking.

I don’t think computers should necessarily shoulder the brunt of this from an academic point of view, but from a practical point of view it seems perfectly pragmatic to. Tools have always been about making things easier for us. Heuristics, policy, rules – whether formal or informal – exist in part to reduce the decision-making burden. How can we make things *easier* for ourselves? We’ve always wanted to outsource: in theory it’s a positive adaptive move because it means that you have more energy for… something else, like reproducing and making sure you’ve got lots of children so that the selfish replicators continue to get what they want.

This, I feel, is starting to get to the root of the issue.

I got into an argument – and am happy that I just decided to walk away instead of further engaging – with someone on Mastodon about opinionated software and the idea that Mastodon could incorporate something like a poison pill in its standard terms and conditions such that user data would be automatically deleted upon acquisition by a third party. On the one hand, hard cases like “how do we preserve user privacy in the event of corporate acquisition” can easily result in bad law; on the tridextrous hand, this quickly turned into me hearing for the nth time that “software is just a tool” and “shouldn’t impose values”.

To which the short answer is: tools *do* impose values, and in the same way that manners maketh man, in 2017 so software maketh humanity. Some – very general-purpose and elementary – tools might impose a minimal set of values, but user-facing software these days *requires* the developers to make hundreds if not thousands of decisions which act as opinions-made-real about how things should work. 140 characters or 500? Should you add your voice to something while spreading it, or just spread it without adding your own commentary? Should you allow people to hide content behind an arbitrary warning or not? Decisions in software seek to turn opinions into reality through usage. If software *wasn’t* used then yes – it wouldn’t impose values. But tools do, and complex and opaque tools like “software” certainly do.

I’ve been slowly and gradually learning that the way my head works is by trying to connect everything together. Sometimes this works well, where I’ve been lucky to find rewarding and meaningful work that benefits from seeing the connections between things, making conclusions and suggesting how things might work differently. Other times this works terribly, where everything becomes a mutually reinforcing web that can drag me down into debilitating depression.

At the moment, this is what I’m seeing and there’s nascent connections forming, being reinforced and culled between all of these nodes:

– in the same way that society functions well when people can find out and understand the rules of society[citation needed], does the same principle apply to rules implemented in software?
– given our understandable and apparently unavoidable tendency to outsource decisions thanks to our inherited cognitive architecture, can we assume that we’ll *continue* to outsource decision-making and rule-processing? And at an accelerating rate, too?
– “enterprise” businesses do this already with their predilection for things like “rules engines,” which I always feel are some sort of unobtainable golden rule for “if only we didn’t need any humans and a computer could just make all of our decisions for us and instead of implementing those rules programatically, what if we had a somewhat higher-level language and interface for doing so?”
– decision fatigue is real
– when was the last time you asked someone to decide something for you?
– how far off are we from the algorithmic equivalent of “This area contains naturally occurring and synthetic chemicals that are known to cause cancer or birth defects or other reproductive harm”? A sort of “This product contains algorithms that are known to affect your ability to make conscious decisions of your own volition”?
– To the above, Peter Watts would say: “Where have you fucking been? You’ve never been in control. We’d have to slap a warning sign on *everything*.”
– How long until the next management fad is to include mindfulness training and regular mindfulness sessions *throughout the day* for employees to help them make decisions more consciously? You can still make decisions in response to your emotions, but at least you’ll be aware of them.

I feel this is less a question of what technology wants. This is more a question of: we are tool-builders. Perhaps building tools – especially tools that make decisions – is in large part an evolutionary imperative to reduce the energy burden of one of the most expensive organs in our body? Do we end up with a brain that just wants to experience qualia without having to *decide* anything?

In the meantime, though. Our decisions have impact in the real world and the real world contains other people. Outsourcing decisions is something we do, because it’s easier – and less stressful for us – for something else to make a decision. It’s nice to have something to blame, an algorithm to point to, instead of taking responsibility. We can say: the algorithm was fallible. We tried to do our best, but in the end, what could we do?

[Twenty years ago, someone like me would be an undergrad at university and deeply unimpressed at the groundbreaking brain-in-a-vat philosophy espoused in The Matrix. But on the other hand: what if seeing the underlying code of the world wasn’t the software, but seeing *all* of the underlying rules? Seeing the complete System of the World?]

So. To tie all this together.

Algorithms make decisions and we implement them in software. The easy way out is to design them in such a way as to remove the human from the loop. A perfect system. But there is no such thing. The universe is complicated, and Things Happen. While software *can* deal with that, in our more sober moments, in our System 2 moments, we can take a step back and say: that is not the outcome we want. It is not the outcome that conscious beings who experience suffering deserve. We can do better.

United are merely the latest and most visible example of a collection of humans who have decided to remove themselves from the loop and to blame rules – that they came up with – rather than to accept responsibility and to do the hard work.

When we design systems and algorithms, we must make a decision: are these to help us provide *better* outcomes for humans who think and feel, or are they a way for us to avoid responsibility and to merely make things easier for us? It means choosing to do the hard thing over the easy thing.

And, in a trite way (but, as someone pointed out to me on Twitter, sometimes the trite things are true), perhaps it just starts with stopping to think.

More on this later, I expect.

[0] Officer Who Dragged Man Off United Flight Gets Suspended
[1] Dan Hon on Twitter: ““United CEO to discipline computer algorithm that resulted in passenger removal.””
[2] Josh Centers on Twitter: “The United story is the perfect example of how dehumanized we’ve let computers make us.”
[3] Dan Hon on Twitter: “Bullshit. It’s the perfect example of how dehumanized we’ve chosen to be. Computers have nothing to do with it. https://t.co/zZxonSu0Am”

As ever, I love to get notes from you, even if they’re just saying ‘hi’, and I do my best to not get freaked out if you disagree with me! So – feel free to send a note. Even if it’s just a single emoji.

Best,

Dan