s5e06: Filtered for wide spread, low accuracy
0.0 Station Ident
Hello. I'm back. Let's not question why and just roll with it.
Here are some things that have caught my attention:
1.0 Dial-a-yield observations
First: The New York Times reports that the Airbus A380, a stupendously humongous plane built on a bet (I was going to say figuratively, but I guess both ways work) that air travel would remain more-or-less hub-and-spoke and airlines would want to find a more economical way of moving large numbers of people from a to b, might cease production if Emirates, the remaining customer, doesn't buy any more[0].
My throwaway imitation of a BBC Radio 4 panel game participant is to remark that we shouldn't shut down the A380 program because "we're going to need them to airlift survivors of extreme weather events over the next 200 years."[1] Now, if you think about it, this isn't really practical because one of the problems with the A380 is that it's so humongous that it required airports to make changes to their architecture in order to support it. So the chances that you're going to be able to land one with the appropriate logistical support in an area that's just experienced the euphemistic "extreme weather event" are probably low. (Cue a bunch of people trying to work out if it would be better, then, to pre-emptively invent a whole bunch of point-to-point short-hop cargo drone carriers that could carry survivors in dangling harnesses, a bit like a rollercoaster ride. Oh wait, what's that, Boeing? You have a prototype already?[2])
Second: OK, so there's some commentary floating around thanks to the tidbit in The Washington Post that the mistaken ballistic missile emergency alert broadcast in Hawaii over the weekend was, in part, due to the selection of the wrong item from a drop-down menu. Quote:
Around 8:05 a.m., the Hawaii emergency employee initiated the internal test, according to a timeline released by the state. From a drop-down menu on a computer program, he saw two options: “Test missile alert” and “Missile alert.” He was supposed to choose the former; as much of the world now knows, he chose the latter, an initiation of a real-life missile alert.
“In this case, the operator selected the wrong menu option,” HEMA spokesman Richard Rapoza told The Washington Post on Sunday. [3]
Rightly, a lot of people[4] are saying that the person who chose the wrong option in the drop-down should not be reassigned or reprimanded because the bigger problem, of course, is the design of the drop-down menu in the first place. And, for example, the fact that, because of the way the emergency alert system is architected, there wasn't a quick way (where quick is less than around 30 minutes, which... apparently is around the time it would take for the suspected ballistic missile to arrive) to issue a retraction through the same system[5].
According to Good Morning America (which in 2018 now sounds more like a command rather than a salutation), one of the bureaucracy's solutions to this problem of accidentally broadcasting an existentially terrifying alert is that "[from] now on it will take two buttons and two people to send that kind of alert again."[6]
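(For concreteness - and this is purely a hypothetical sketch of the general idea, not the actual HEMA or IPAWS software - a "two buttons and two people" rule is basically a two-person authorization gate: a live alert can't go out until a second, different operator confirms it, while a test needs only one person.)

```python
# Hypothetical sketch of a "two buttons, two people" gate for live alerts.
# Not the real HEMA/IPAWS software; it only illustrates the shape of the
# announced fix: a live alert needs a second, different operator to confirm.

from dataclasses import dataclass, field

@dataclass
class AlertRequest:
    kind: str                        # "TEST" or "LIVE"
    requested_by: str                # operator who selected the alert
    confirmed_by: set = field(default_factory=set)

    def confirm(self, operator: str) -> None:
        if operator == self.requested_by:
            raise ValueError("confirmation must come from a different person")
        self.confirmed_by.add(operator)

    def may_broadcast(self) -> bool:
        # A test goes out with one operator; a live alert needs a second person.
        return self.kind == "TEST" or len(self.confirmed_by) >= 1


request = AlertRequest(kind="LIVE", requested_by="operator_a")
assert not request.may_broadcast()    # one person alone can't send a live alert
request.confirm("operator_b")
assert request.may_broadcast()        # a second, different person has confirmed
```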
The "two buttons" part of this fix is part of the problem that created this mess in the first place. Cyd Harrell had the timeless observation that "the greatest trick the devil ever pulled was convincing the world that enterprise software complexity is properly addressed via training"[7], which is relevant here because the point is this kind of software suffers from a ridiculous lack of user research and relevant, useful testing.
Harrell (disclosure: former colleague) has a great thread[7a] on all the armchair "well, what clearly needs to be fixed is {x}" takes, and on the distinction between a few UI fixes and the more systemic problem in, say, the entirety of software design, development, deployment and procurement in the realm of government. If you're interested in this kind of thing, then you should read it.
On Twitter, @supersat dug up screenshots of a real EAS/WEA interface, which look like, well: if you're (quite) software literate, they look exactly how you'd expect "traditional browser-based enterprise software without modern user research" to look[8].
Third: a working list of things that we should rise up and seize the means of:
* rise up and seize the means of political boundary districting
* rise up and seize the means of online advertising targeting
* ... and dopamine loop engineering[9]
This mention of dopamine loop engineering is in reference to the allegation by Matt Mayberry of Dopamine Labs, "printed" by The Globe and Mail, that:
"Instagram "[exploits this craving for novelty bias] by strategically withholding "likes" from certain users. If the photo-sharing app decides you need to use the service more often, it'll show only a fraction of the likes you've received on a given post at first, hoping you'll be disappointed with your haul and check back again in a minute or two. "They're tying in to your greatest insecurities," Mr. Mayberry said.[9]
This allegation blew up[9a] on Twitter when it was excerpted via screenshot by @AndreaCoravos[10]. But the twist is that Casey Newton, a reporter for The Verge, reported Instagram co-founder and CTO Mike Krieger denying the allegation[11], saying that "likes don't appear immediately for technical reasons, but it's not an engagement hack."
Newton has an interesting observation, which starts with the fact that the (inaccurate, according to Instagram) rumor has spread faster than the denial, indicating that most people don't give Instagram (and by extension, its corporate parent, Facebook) the benefit of the doubt about anything.
This leads to my somewhat facetious quip, riffing on Arthur C Clarke, that "sufficiently advanced, non-transparent algorithms are indistinguishable from malice"[12], and to the suggestion that, following its stupendously successful "What Is Code?" issue, Bloomberg Businessweek devote an April 2018 issue to "What Is Like?", an in-depth exploration of everything that happens when someone likes your Instagram post[13].
The broader point here is that something like "instantly displaying a notification that someone has liked your Instagram post" is, if not incredibly difficult, then at the very least, incredibly *complex*. The fact that we're able to do these things is a testament to our ability to solve very complicated problems, but part of me wonders about the increasing gap between the ethos of "design is how it works" - and that we expect things to work simply and clearly - and "engineering is how the design doesn't fall over".
We see these things every now and then - Ford's "What Is Code?"[14] was a sprawling masterpiece amongst the middle class partly because it attempted to explain (with minimal judgment) the almost fractal complexity that's involved in modern computing systems. There are so many assumptions, rules of thumb and hidden pieces of knowledge involved in something as simple as "displaying an Instagram Like" that don't matter until you start asking questions like: are they hiding them from me? Or is it genuinely a complicated engineering problem? The "complicated engineering problem" answer is, well, complicated because everything we do is a series of trade-offs. The thing is, Instagram *could* just push out Like notifications as each one happened, but, well, therein lie scaling problems. My folk understanding is that Twitter isn't really "real-time" in the way that it used to be, when we had to contend with fail-whales instead of marauding bands of Nazis: there are significantly more layers of caching and batching so that Twitter preserves the *feeling* of being as real-time as it can be. Reading Ford's "Code", I think the hope is that the reader comes away understanding that, yes, in principle, people *could* be notified as soon as a Like comes in, but that there'd be... lots of work involved.
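(As a purely illustrative sketch of the general technique - my assumption about what a batching layer looks like, not Instagram's or Twitter's actual systems - the trade-off is roughly this: instead of fanning out a notification for every single Like the moment it happens, you buffer the events and periodically emit one rolled-up notification per post, which is far cheaper at scale but no longer strictly real-time.)

```python
# Illustrative sketch of coalescing Like events into batched notifications.
# This is an assumption about the general buffer-then-flush technique, not a
# description of how Instagram or Twitter actually do it.

import time
from collections import defaultdict


class LikeBatcher:
    def __init__(self, flush_interval_seconds: float = 60.0):
        self.flush_interval = flush_interval_seconds
        self.pending = defaultdict(list)      # post_id -> list of likers
        self.last_flush = time.monotonic()

    def record_like(self, post_id: str, liker: str) -> None:
        # Buffer the event rather than notifying immediately.
        self.pending[post_id].append(liker)

    def maybe_flush(self) -> list:
        # Called periodically; emits one rolled-up notification per post.
        if time.monotonic() - self.last_flush < self.flush_interval:
            return []
        notifications = []
        for post_id, likers in self.pending.items():
            if len(likers) == 1:
                notifications.append(f"{likers[0]} liked your post {post_id}")
            else:
                notifications.append(
                    f"{likers[0]} and {len(likers) - 1} others liked your post {post_id}"
                )
        self.pending.clear()
        self.last_flush = time.monotonic()
        return notifications
```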
Which comes back to the idea that sufficiently non-transparent advanced algorithms are indistinguishable from malice.
I'm thinking aloud here: the social compact involved in software before our current attention-based business models was pretty clear. I give you some money, and in return, I get some software that solves a problem for me. Removing the payment part, while great for "democratizing access", muddies issues somewhat: this is the now well-known "if you're not the one paying, you're the product" point of view. For free services subsidized by advertising (advertising made "effective" by a wealth of user data), there's no longer a direct monetary relationship between the formerly paying end-user and the supplier of the software, and so that social compact is gone. In other words, there's no longer as much of a clear incentive to act in the best interests of the formerly paying end user.
Part of the calculus now is this: when I use your free services, I trust you only so far. It may well turn out that the length to which that trust will go is quite far (see: our current situation and how much people are willing to trade not only their information, whether quasi-public or private, but also now their *behavior*). One of the boundary conditions might be this: look, it's all well and good that you get *information* about me. But I draw the line at *altering my behavior*.
Might this be where some of the current "outrage" (I feel really weird about putting things in inverted commas) about the state of social software stems from? We've had a (relatively) long history of ceding information about ourselves in exchange for some sort of benefit [citation needed], but have we done the same for our *behavior*, in such an explicit way, in a way where we're able to see the potential effects every day?
Perhaps that's the most offensive of all: I may choose to give you my secrets, but if what people are telling us is true, we don't really have a choice, we don't really have the autonomy, when network effects are used to take advantage of how our brains work.
This is pretty much a long way of saying what we already knew when I was at Wieden and working on the Facebook account. People didn't *like* Facebook; they resented that they had to use it. They accepted that they no longer had any control over situations governed by network effects, much like people who resisted getting non-smart mobile phones until, well, they were pretty much everywhere and it was hard *not* to get one.
Perhaps what's more insulting is this: part of the argument being put forward here is that ever more of these so-called dopamine loops of control - like the example being talked about with Instagram allegedly withholding likes - are being put in place because, well, business models demand it. Consider this: the allegation is that Instagram doesn't show you all the likes on your post because then you'll come back again. But to what end? To earn yet more advertising impressions? From the user's point of view, the need being satisfied is checking for anticipated external social validation, but from the application's point of view, the goal is... what? Increasing engagement? I used to get in trouble for asking this sort of question when I worked in advertising because frequently there wasn't a good reason as to *why* increasing engagement would be a good idea. I mean, isn't the goal here to, I don't know, sell shoes? How does increasing engagement sell shoes? From a naive, outsider's point of view, it's easier to optimize for increasing engagement (make people come back to Instagram more) than it is to optimize for effectiveness (only showing ads, or *something*, that actually moves a relevant, productive needle somewhere).
At the same time, all of this is stupendously intersectional and related in a way that I think our brains aren't intuitively well-equipped to parse and understand. There isn't a simple a-leads-to-b causation here. Everything is happening at once and it's hard to say that there are prime causes or prime movers. It can be simultaneously true that it's a genuine engineering problem to show Likes in realtime *and* that someone at Instagram wants to engineer ways to increase repeat visits. Or, it can just be a genuine engineering problem and it's a happy coincidence that this results in repeat visits, and the fix, or an improvement, for displaying realtime reactions to posts just gets de-prioritized because of the effect it *could* have on engagement.
The big question is how we get to there from here, where "there" is some significantly less fucked-up-ness. One tactic, as trite as it may seem, would just be significantly greater transparency and an effort to explain - over time - exactly what's going on. To throw our hands up and say "it's too complicated" is an abdication of responsibility, especially when we don't even try. But how refreshing, at least, would it be to hear from a program/product manager an acknowledgement that competing interests are being balanced? That right now, the revenue model is advertising based, and that requires certain thresholds to be met, but at the same time, there's a desire not to engineer reflexive behavior that cannot easily be countered without mindfulness on the part of the user?
As a final aside, this is something that annoyed me recently about some commentary from John Gruber on Apple's approach to parental restrictions on iOS devices[15]. Apple investors had published an open letter asking Apple to study the health effects of its products and to make it easier for parents to limit their children's use of iPhones and iPads. Gruber's commentary went like so:
"This open letter is getting a lot of attention, but to me, the way to limit your kids’ access to devices is simply, well, to limit their access to devices. I’m sure iOS’s parental controls could be improved (and in a statement, Apple claims they have plans to do so), but more granular parental controls in iOS are no substitute for being a good, involved parent." [15]
I feel like the constructive criticism here is a bit like the advice people give about doing good improvisation: Gruber didn't quite do a "yes-and". I mean, Gruber says that the best way to limit access to devices is to "well, limit access to devices", the implication of which is limiting physical access, because all Apple can do is affect the software that runs on its devices. He doubles down, saying "more granular parental controls in iOS are no substitute for being a good, involved parent", but my interpretation of the situation (being a parent too!) is that such granular controls might be *helpful* to me being a good, involved parent. I might be taking this too personally, but it smells like the sort of accusation where a "good, involved parent" *doesn't need any help*, never gets tired, never gets frustrated and is always able to mindfully and compassionately reason with their (tired, over-emotional, frustrated) child and limit access to a device. Well, sometimes it's hard? I need help. *I* find it difficult to put my device down.
In other words, what irks me is a lack of compassion from all around. So. More compassion, please.
[0] Airbus A380, Once the Future of Aviation, May Cease Production - The New York Times
[1] dan hon 🤦🏻♂️ 💻 on Twitter: "Don’t shut down the A380 program, we’re going to need them to airlift survivors of extreme weather events over the next 200 years… https://t.co/XPJjcFzWVr"
[2] Boeing built a giant drone that can carry 500 pounds of cargo - The Verge
[3] Hawaii missile alert: How one employee ‘pushed the wrong button’ and caused a wave of panic - The Washington Post
[4] On Twitter
[5] Hawaii Chaos: The Internet Broke Emergency Alerts - The Atlantic
[6] Good Morning America on Twitter: "Employee who hit button sending the false Hawaii missile alert is now being reassigned. From now on it will take two buttons and two people… https://t.co/7Pn3qX4waX"
[7] Cyd Harrell on Twitter: "the greatest trick the devil ever pulled was convincing the world that enterprise software complexity is properly addressed via training"
[7a] Cyd Harrell on Twitter: "on the other hand, people are saying talking about UI elements isn't enough, because there are clearly more systemic problems to be addresse… https://t.co/nJk35kkWYQ"
[8] Karl on Twitter: "In case you're curious what Hawaii's EAS/WEA interface looks like, I believe it's similar to this. Hypothesis: they test their EAS authoriza… https://t.co/t8ZH1J6JuE"
[9] Your smartphone is making you stupid, antisocial and unhealthy. So why can’t you put it down? - The Globe and Mail
[9a] Where "blew up" means "currently has 6,700 likes, 5,500 retweets and 182 comments" so, in the grand scheme of things, potentially "not really"
[10] Andy Coravos on Twitter: "Wait. @instagram strategically *withholds* "likes" from users that they believe might disengage hoping they'll be disappointed and recheck t… https://t.co/Dbow7Tajan"
[11] Casey Newton on Twitter: "Instagram co-founder @mikeyk says this is not true; says likes don’t always appear immediately for technical reasons but it’s not an engagem… https://t.co/tZrnbfHiYA"
[12] dan hon 🤦🏻♂️ 💻 on Twitter: "Sufficiently advanced, non-transparent algorithms are indistinguishable from malice"
[13] dan hon 🤦🏻♂️ 💻 on Twitter: "April 2018. Bloomberg Businessweek devotes an entire issue to “What is like?”, a full and clear exploration of what happens when someone l… https://t.co/gv4wRV6T0p"
[14] Paul Ford: What Is Code? | Bloomberg
[15] Daring Fireball: Regarding This Open Letter From Two Investor Groups to Apple Regarding Kids' Use of Devices
--
OK! I forgot to ask for feedback last time, in that I normally end these things with a "hey, I love getting feedback! Send a note! Even just a hello!" and, well, it felt like I got less feedback than usual. For those of you who did send feedback, thank you! And, uh, this is awkward because I don't want to shame anyone into feeling like they *should* have sent feedback but didn't.
I'll just stop.
Have a good week,
Dan