s5e03: Welcome to the Churn
0.0 Station Ident
I'm on DL 178, an Airbus A330 parts of which, my brain reckons, have been *glued* together, on my way to what a non-native Norwegian regards as "the middle of nowhere, Norway", answering an invitation from a mysterious individual[0] to come and "discuss the future of A.I." As has been pointed out by other people, this is *totally* the setup for a locked-room murder mystery, and that's before you find out that Budd has decided in his infinite wisdom that the best location to discuss the future of artificial intelligence is "the location where Ex Machina was filmed[1]."
Now I have to admit that I have been freaking out (a little) about this particular trip. There's a bunch of other people who've been invited, and the event currently exists in that liminal state where it's not-quite public and also not-quite private and if you really wanted to, you could probably work out who all the attendees are by using, as the kids say, OSINT, or "Googling" and "looking at Twitter".
I'm writing about this because I've been having a whole bout of imposter syndrome, which goes a little bit like this: "Oh my god, I don't understand why I've been invited, look at all the other attendees" followed by "Well, I *have* been invited, so clearly a decision has been made" and a refrain of "But what about all the other people *who aren't me* who might be better than me at this thing that I haven't experienced yet?" and, if I want to impress my therapist about how well I'm doing at realizing that everyone has intrinsic value just by being themselves, I'd say "ah yes, but none of those people are me, and what I bring is, well, my own unique set of experiences and points of view - no-one else can replicate that".
Part of the reason for writing this is that I have the slight awareness that *other* people may regard me as being both demonstrably qualified and interesting enough to take part in a frankly portentous discussion about the future of A.I., in which case the fact that they can see that I have imposter syndrome about taking part might, in a small way, be helpful in showing how fucked up we all are because hey, we're all just humans after all.
All of which is to say that I should've taken the lorazepam earlier (I am not necessarily opposed to "I Should've Taken The Lorazepam Earlier" as an epitaph when one is required).
Anyway.
More, later this week, on "the future of A.I." In the meantime, there are at least *three* giant things that have been buffering in my head over the past week or so, all of which have been waiting for unblocking and the time to just write them down, get over myself and send them. Or, in retrospect, for me to have just taken the lorazepam earlier.
[0] Andy Budd on Twitter: "This week I'm taking an eclectic group of designers, thinkers and artists on a retreat in Norway to discuss the future of A.I."
[1] Startpage - Juvet
1.0 Working As Designed
Phtoooooooey *long exhale*. So, Facebook's been in the news, right? *One* of the reasons why Facebook's been in the news is that there's evidence that "the Russians" used the platform to "push Trump rallies in 17 cities"[0]. This is interesting, because if we take the report at face value (i.e. the Russian state did in fact use Facebook as a platform to interfere in the domestic politics of a sovereign state), a couple of things fall out.
First, this might be particularly devious on the part of Russia, and I say this with full knowledge that I do *not* want to be "that game theory guy". My knowledge of geopolitics is naive to say the least and this is just me thinking out loud. On the one hand, the organized events themselves destabilize politics in the targeted country - rallies happen, they get covered in domestic media and apparent divisions are shown or highlighted in a way that creates or heightens actual divisions. This doesn't help unite a country. So, a win for Russia there.
But, secondly, the activity has a secondary effect on Facebook. This might not be an intended effect, or it might just be a bonus nice-to-have, but *if* Russia's activity were to be found out, then it would destabilize Facebook the platform. (Narrator: it ended up drawing attention to Facebook).
This ends up being two goals for the price of one (and again, I'm not sure if the type of people who play these games intend these things to happen or if this is just armchair strategizing). Russia gets to destabilize politics *and* destabilize an instrument of neo-liberal globalist American hegemonic power, i.e. "Facebook", by exposing to the American government and political class yet another instance of the hubris of the Silicon Ideology, only this time in a way that the American political class might actually give a shit about.
What's (somewhat) astonishing to me (not really) is that after the NSA leaks, Facebook, Google et al. started freaking out, in a way, about state-sponsored security threats - and yet this *wasn't even a security issue*. This was basic product design and Facebook leadership having a blind spot (perhaps even an intentional one) about how their product *could* be used by a malicious third party. Like, "oops, we didn't realize a state or state-level hostile actor could use our own platform to discredit us inside our own regulatory regime".
I quipped on Twitter that I'd call this type of exploit an Existential Exploit[1]. An exploit that goes to the heart of *whether the affected product would be allowed to exist in its regulatory regime, purely through creative use of the product itself*. What we've got here is a product and platform that has explicitly been designed to do certain things with an almost religious belief (or equivalent negligence, at this point) that *it can't be used to do wrong*, or that the platform is *so strong* that it itself cannot be harmed.
Alternatively, I feel like I could just type out the word "hubris" lots.
[0] Exclusive: Russians Appear to Use Facebook to Push Trump Rallies in 17 U.S. Cities
[1] Dan Hon on Twitter: "So this is interesting and genius from .ru for two reasons, right? https://t.co/wBjeFIlgCR"
2.0 Works For Me
Still Facebook. Sorry. (But actually maybe just Benedict Evans? Or, to depersonalize it, just the worldview that Benedict Evans represents and espouses?)
This time it was the news from ProPublica that Facebook's ad targeting product "accidentally" let advertisers target "Jew-haters"[0], which, on the face of it, I'd say would not be a surprise to certain people. So there's the initial story, about which I probably have opinions and thoughts, but then there's the reaction to the story, over which I apparently blew up in a rage on Twitter (side thought: what are the main apps for each emotion? If Twitter is for Anger, what are the products and services for all the other emotions? For love (no, I guess you don't need to answer that, or you can answer for 'lust', at least), jealousy, disgust, sadness, and so on...).
This particular rage reaction was provoked by Benedict Evans who is an English person in America who's employed[0a] for his professional opinions about the internet, unlike me, who is an English person in America who's not employed for his amateur opinions about the internet.
Evans said, "Maybe platforms should pay bug bonuses for moral or ethical exploits, not just technical exploits. Broaden 'how can bad people break this?'"[1], to which my reasonable and emotionally well-adjusted response was to explode and say "How about Fucking Hire And Pay Experienced Online Community Moderators [sic, I actually meant *managers*, not moderators] For Fuck's Sake"[2].
Now, before I get into Evans' response, I want to explain why I got so angry about this.
What it looks like Evans is doing is: a) implicitly recognizing the risk of harm in such ethical product exploits, b) implicitly recognizing that such harm applies to a large surface area (bounties are commonly used when there's a large search space, where you might not have enough employees to ensure what you deem as sufficient coverage for risk), c) dealing with the large search space by incentivizing a crowdsourced approach - "paying people money" if they find things that are broken.
Now, Evans talks about "bug bonuses" - i.e. bug bounties - which are commonly used where companies acknowledge that there's a high enough risk that their internal resources (e.g. employed security researchers) will miss a security issue and that it's worth compensating others to help find those exploits and risks.
The issue here is that you normally get bug bounties *combined* with employing security researchers. For "ethical and moral issues", Evans is in essence (I think) proposing that companies skip the part about employing the equivalent of ethical/moral exploit researchers and instead go straight to the "it's too hard for us to do it ourselves, but we suppose we'll pay someone if they find an issue".
I mean, this whole issue would've (I feel) been addressed if there had actually been people at Facebook empowered to consider, deal with and manage the social effects of their products. This job is sometimes called a community manager or community director and otherwise (at least, in other, early Web 2.0-era companies) used to be part of product management considerations before, well, everything went to shit and we decided that if it disrupts, it... something that rhymes with "disrupts" but signifies "is good".
Now, I pointed out Flickr (the company and later product being known for its community moderators and managers) as one of the canonical examples of both how hard it is to do "online community right" and "what online community done well can look like", to which Evans' rejoinder was "Flickr didn't have 2bn users"[3], which I find difficult not to treat as a sort of excuse. I mean, just because the problem is harder (and yes, it doesn't necessarily scale linearly with the number of users), *doesn't mean it isn't worth trying*.
Moishe Lettvin put it in a similar way, I think: "[The question] 'How do we prevent ethical abuse at scale in this product' should be an early and explicit design goal, not a post-facto patch."[4], which, for those playing at home, reinforces any existing priors that you might have about Facebook just... not caring about this kind of thing.
Pierre Omidyar brought up the same issue further down the thread. This wasn't the first time moral and ethical lapses were spotted in Facebook's product, cf. the "no blacks" rental ads[5, 6]. And the apparent result is that... the same thing happened again. Facebook *appeared* to do nothing.
At this point, Evans' question ("If it's so obvious, how come it took until now for someone to notice?") betrays (what I think is) the echo chamber that Evans' thinking inhabits. This *was* obvious, to a *bunch* of people, *a long time ago*, but the risk was brushed off or dismissed by those who are now belatedly calling attention to it.
To which, I think the big question that others will be asking at some point is: if Facebook doesn't care, then what will it take to make them care?
Anyway, an aside. On the Flickr community moderation point, Kevin Fox pointed out that "Flickr's guidelines were also vague and subjective. One was literally, 'Don't be creepy. You know the guy. Don't be that guy.'"[7]
I have some quick points I want to make about that and again, I don't want this to be personal about Fox and instead think it's a larger point about the software-and-systems vs humanities mindset. Look, I used to be a lawyer, and I qualified in a common law country. The law in England and America is, at a certain level, *literally* the accretion of "Don't be creepy. You know the guy. Don't be that guy" and the accumulation of a *whole bunch* of different examples of that guy, because, you know what? The messy universe that we humans inhabit, the one that is real and is the ground truth from which we derive our binary and software abstractions *is* vague and subjective. Every single time we have problems it's in part because a bunch of people think that the real world *shouldn't* be vague and subjective and that it should be certain and objective because, well, that's easier to deal with. It *would* be easier to deal with, but it's not the reality we inhabit.
And far be it from me to throw stereotypes at this whole thing, but in one corner we've got product- and system-oriented Facebook, which wants to move fast, break things and remake things in a way that can be, I don't know, efficiently modeled and stored in a variety of SQL and noSQL databases, and its dear leader, Mark Zuckerberg with his honorary degree in... computer science, I think? And in the other corner, who else in our zeitgeist would be the CEO of the company that had a vague and subjective but *continually enforced and attempting-to-be-equitable* "don't be creepy" rule but Stewart "degree in Philosophy" Butterfield.
Score one for the humanist technologists. Or not, as the case may be, because the non-humanist technologists appear to be winning.
[0] Facebook Enabled Advertisers to Reach ‘Jew Haters’ — ProPublica
[0a] at Andreessen Horowitz, a venture capital firm with which I feel I have a somewhat strange relationship, if only because Marc Andreessen subscribes to (and occasionally also apparently reads) this newsletter, follows me on Twitter and, moreover, favorites the "odd" tweet
[1] Benedict Evans on Twitter: "Maybe platforms should pay bug bonuses for moral or ethical exploits, not just technical exploits. Broaden 'how can bad people break this?'"
[2] Dan Hon on Twitter: "How about Fucking Hire And Pay Experienced Online Community Moderators For Fuck's Sake"
[3] Benedict Evans on Twitter: "@hondanhon @MikeIsaac Flickr didn't have 2bn users."
[4] Moishe Lettvin on Twitter: "@hondanhon @BenedictEvans @MikeIsaac agreed; "how do we prevent ethical abuse at scale in this product" should be an early & explicit design goal, not a post-facto patch."
[5] Pierre Omidyar on Twitter: "@BenedictEvans @hondanhon @MikeIsaac Lots of people noticed, actually. Journalists broke the story on "no-blacks" rental ads. Only people who didn't notice were inside the co."
[6] Facebook Lets Advertisers Exclude Users by Race — ProPublica
[7] Kevin Fox 🦊 on Twitter: "@BenedictEvans @hondanhon @MikeIsaac Flickr's guidelines were also vague and subjective. One was literally, "Don't be creepy. You know the guy. Don't be that guy.""
3.0 Welcome to the Churn
A brief interlude from Facebook, I think.
I've recently been through an... interesting procurement experience, which *at the very least* has been useful because in the absence of evidence to the contrary, I like to assume the principle of mediocrity and that most-things-are-like-this-thing. This is also a dumb fallacy, because hey, most things might not be like this thing. So, take this with the requisite salt. For example:
* I am not an economist
* I am literally just thinking out loud
* This is pretty much the equivalent of making things up, and I have no idea if the fundamentals I’m basing my assumptions on are, well, completely inaccurate (luckily, I appear to have also been invited to an event where I'll get to talk to actual economists about this issue).
I've been thinking about things that look like increases in productivity where those things instead mask massive decreases in, well, productivity, "efficiency" and, more distressingly vague, "desired outcomes". I don't know what to call what I'm talking about other than something akin to bureaucratic bloat, a general lack of courage and the habit humans have of defaulting to laziness and resisting change whenever possible.
There have been two events that have caused this thought: more exposure to "how government works" and a specific exposure to "how a certain healthcare technology project works". I'll talk about the latter, and then probably maybe a little bit about the former.
I got rewarded for my work by being nominated as a patient representative for a healthcare project in California. This one's for creating, more or less, a provider directory where Californian residents can just go to one "place" and find out whether a given doctor or service is in-network for their particular insurance provider. California-the-state has an interest in this because its public healthcare Medicaid expansion is one of the biggest "insurers" in the state, so the directory would have lots of people who use it.
Now, what happened was this. We're talking about the overall, uh, design of the system - things like architecture and "data flows" that would cover things like "how providers get their data into the system", of which some methods are "email" and "fax", to which my instant response is: why fax? I mean, we're talking about a system that won't be delivered for, say, two years (unfortunately), so my naive head thinks: we can just cut this all off at the head and say hey! No faxing! Everyone should interact with this as a digital service, and we'll make *extra special sure* that it'll be easy to use and we realize that every other time we say this, we've been lying and what you got was a piece of shit. But this time, we mean it.
The response, of course, was that it would be impossible to remove fax as a method of communication. Too many rural (take a drink) providers with older (take a drink) non-computer savvy staff (take a drink) rely on using freeform faxes to communicate. It would be an unreasonable imposition to require them to use an alternative.
To which I say: tough shit.
I mean, I'm sorry. The *cost* of supporting faxes is *too high* given the quality and level of service that we should expect. I asked: do faxes mean that there are lots of people employed, well, looking at faxes and correcting mistakes or calling up to clarify? Yes, there are, I was told. Would it be better if, well, instead of people doing administration and *error correcting handwritten faxes*, we actually spent more time and money on delivering healthcare? Uncomfortable silence.
Anyway, I lost the argument. Freeform fax has been retained as a data input method, and as an improvement, the group will look into providing, er, providers, with a "form" that they can fill in and, uh, fax. You know. Like printing out a PDF from a website and then filling it in by hand and then faxing it over.
(Look, I know. There are real places where internet connectivity really doesn't exist and where doctors really do practice. So yes, there *must* be methods that support those users. So, support those users specifically! And yes, part of the blowback I got on Twitter was that "faxes work" and I recognize that "faxes work" but the issue is that faxes work *at a cost* and what if we did the hard work to replace faxes-that-work-with-a-cost with a new thing that *also* works but at a *lower* cost?)
Ah, here it comes.
There's so many things that work if we just throw humans at them. "Procurement," a thing that happens in special organizations that are large enough that they're allowed to call themselves "enterprises", is another example of a thing where there's a process that's carried out by humans and it kind of works and if it doesn't then maybe you can throw more humans at it?
Maybe the process is the thing that needs fixing or, to be less antagonistic about it, maybe the process is the thing that could be improved. Well, what *if* we did more with less? What *if* we took a long hard look at the "way we do things" and figured out better, faster, cheaper, more reliable, higher quality, better-outcome-for-whatever-x-it-is-you-want ways to do things that just *happened* to mean that fewer people might be involved?
I feel like one of the contemporary canonical examples is "healthcare administration" in the United States, where you can quickly say that there are "too many" people involved in administering healthcare as opposed to actually delivering healthcare. All those billing people, we say. A whole bunch on the insurer side and then a whole bunch on the provider side! Just moving bits of paper around!
All just legacy cruft and, I don't know, the inability to replace what already exists with something better (it should be! There are ways to check!) because of perceived personal or organizational risk. And now, the whole thing feels systemic. A friend needs to get a copy of her medical bill for her insurer, so she calls up the provider; the provider's administrative assistant says they can access the data 'in the system', but that it can't be emailed and can only be put in the post. "Not even faxed?" "Not even faxed." Not even faxed!
At every single level, the *ways of doing business* and the *psychology of doing work* have built up easy, established ways of doing things that are too big to be threatened. And then what? What if inefficient processes were improved and it turned out that we'd accidentally employed millions of people *to make things slower, more expensive and of lower quality*? What would we tell them, and how quickly could they retrain into, I don't know, doing things that would help *improve* outcomes?
I suppose this is scarily close to (or actually is) the sort of thing where a young impassioned individual believes that "common sense" should apply and that everything would just be *easier* and *better* if, well, everyone did things my way. But this is part of the whole "innovation in organizations" issue: there are people in those organizations who *do* know how to make things better, who *do* see the inefficiencies and how things could be better. And it's *everywhere*.
How do we get out of that? At this point I worry that it's not a management issue or even a cultural issue, but indicative of a sort of weakness in how our brains work.
Alternative take: everything is actually getting better, and I'm just really impatient.
Alternative alternative take: no it's not, just google *cost disease*.
4.0 No, Really
OK, some more Facebook in what I should probably call My Collection Of Things That Confirm My Bias Against Facebook.
This is still about The Ad Thing, from 2.0 above. Antonio Garcia Martinez wrote a Wired article called "I helped create Facebook's ad machine. Here's how I'd fix it"[0], which, with my set of priors, instantly makes me think he's a kind of jerk because, hey, it's not fixed. Spoilers: I end up thinking he's a jerk[1].
Martinez ended up in a Twitter thread (of course he did, it's 2017) with academic Zeynep Tüfekçi, where she had referenced his article and pointed out, reasonably enough, that others had been pointing out this exact moral/ethical product issue *for quite a while* and that Facebook had pointedly ignored their criticism. Martinez's rejoinder to Tüfekçi (omg, now I think I'm literally writing some sort of inside-baseball tech twitter conversation social diary page) was *literally* a "well if you academics think you're so good, you should've been smart enough to come work at Facebook to fix the problem you found".
Now, Martinez was an early employee at Facebook, so I'm just going to come out and say that he may be more indicative than most of the corporate culture there. So to essentially say *it's your fault for not coming over to fix it* is some weird kind of having your cake (acknowledging criticism) and eating it (ignoring it, because they didn't fix it for you).
It's behaviour like this that results in, well, product behaviour that I feel is inevitably going to result in regulation of software development. It's not like I'm especially looking forward to institutional review boards being involved in every single pull request in the future but *this is how you get institutional review boards involved in every single pull request in the future* - by willfully ignoring valid criticism until the point at which, I don't know, enough people will die that the regulatory class will come down on you like a tonne of bricks.
(Random idea: write a fictional Harvard Business Review article from 2065 covering the events that led to the contemporary software development regulatory regime. For bonus marks, include a reference to HBR self-consciously examining it and its parent institution's potential role in the outcome.)
[0] I Helped Create Facebook's Ad Machine. Here's How I'd Fix It | WIRED
[1] Dan Hon @ 30k ft rn on Twitter: "That ex Facebook dude who wrote a Wired article about their ad targeting product, essentially saying academics should fix things 1/"
5.0 Selected Scenes
* Against the backdrop of an anthropogenic-climate-change-catalysed hurricane, Tesla pushed a remote update to its fleet in Florida unlocking battery capacity (60kWh cars actually have 75kWh batteries in them), in perhaps another example of showing people what’s lurking behind the scenes in a complex, human technology world. There are ghosts in there, but they’re ghosts that another human controls, not, uh, a different kind of ghost[0].
* At the same time, in Portland, a (couple? Few?) thousand miles away, I get a pop-up when opening ReachNow, telling me that “Our Member Support line is down due to the impact of Hurricane Irma.” Just so we’re clear, the voice-based customer support call center for my on-demand, pay-per-minute, Internet-connected car sharing service in Portland, Oregon, is having service difficulties because part of its customer service operation is situated in Florida, a region of the country that at the best of times is susceptible to hurricanes and at the worst of times looks like it may be perpetually subject to, well, Large-Scale Weather Events.
* At the same, same time: on my way home from an outing on Saturday night, the Lyft app on my phone stops responding. It could be a number of things: the cell reception on my phone has been acting up lately, so it could be a hardware issue. It could be Verizon, my wireless carrier, who have also been acting up. And lastly, it could be that Lyft’s server infrastructure has suffered a hiccup and the on-demand, “decentralized” ride-hailing service had fallen over. It turned out to be the latter: for about 15-20 minutes, Lyft wasn’t working in all of North America, and maybe even worldwide. And then it started working again. Just one of the things that happens when software eats the world.
[0] Tesla flips a switch to increase the range of some cars in Florida to help people evacuate | TechCrunch
6.0 Magic Window
I have had a total of two "interesting" thoughts about augmented reality now that iOS 11 has potentially brought the technology back out of the doldrums of Gartner's hype cycle.
The first was a reaction to the initial crop of AR apps that are the equivalent of the first iOS apps: Look! Motion control! A gyroscope! Shake to get a restaurant recommendation for no reason whatsoever! If we're going to get a whole bunch of novelty AR apps, then let's at least get some *absurd* AR apps. I want an AR measuring app that will measure things in culturally relevant but practically useless units, so that I know how long the north side of my kitchen is in football fields or what the length of our bed is in units of Wales-the-country. In-app purchases of course unlock more pop-culture units like "Statues of Liberty" or "Empire State Buildings" (Empires State Building?).
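(For what it's worth, the absurd-units part is the easy bit - it's one division. Here's a rough Python sketch: the unit lengths are back-of-the-envelope figures, not authoritative measurements, and the measured length is just pretended to have come from whatever AR measuring session you ran. A toy, not a spec.)

    # Toy converter for "culturally relevant but practically useless" units.
    # Assumes the AR app hands you a measured length in meters; the unit sizes
    # below are rough, back-of-the-envelope figures.
    ABSURD_UNITS_M = {
        "football fields": 105.0,             # a football pitch is roughly 105 m long
        "Wales (north to south)": 274_000.0,  # Wales is roughly 274 km top to bottom
        "Statues of Liberty": 93.0,           # ground to torch, roughly
        "Empire State Buildings": 443.0,      # to the tip, roughly
    }

    def to_absurd_units(length_m, unit):
        """Convert a length in meters into the chosen absurd unit."""
        return length_m / ABSURD_UNITS_M[unit]

    if __name__ == "__main__":
        kitchen_m = 4.2  # pretend this came from the AR measuring session
        for unit in ABSURD_UNITS_M:
            print(f"{kitchen_m} m is {to_absurd_units(kitchen_m, unit):.7f} {unit}")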
The second one is the more optimistic one: if people are carrying a magic window with them, use it to help them see and understand what they've been trained to ignore. In other words, the alternative non-cynical take is that augmented reality is a fantastic opportunity to help people understand and explore the hidden complexity of today's world[0].
What if you could see the supply chain? What if you could see the inequity and inequality, the externalized environmental costs, the context behind government and community funding decisions? What if you could see the result of time that’s normally opaque to us?
There’s a pessimistic take to this where people recoil from seeing the true complexity and inequity of the world. But - I think if it makes people angry, then maybe that anger is a valid emotion. Sometimes the world *is* broken, and maybe the better thing to do is not to ignore it but to accept it and confront it, be made angry by it and then be provoked into doing something about it. Maybe making these things easier to see will help provoke that change?
At the same time, think about what it might mean to easily see otherwise hidden complex structure. When I was growing up, my parents happily bought me Stephen Biesty’s Incredible Cross Sections[1] and I know I wasn’t the only child who liked to see inside things to see how they worked. My four year old loves the copy that I got him. Our children want to understand the world, and the magic windows we make for them might help them understand how a complex world fits together in a way we may never.
I used to get excited about the Young Lady’s Illustrated Primer from Neal Stephenson’s The Diamond Age as a sort of perfect interactive textbook combined with a committed human teacher. But now, now I’m excited about the idea of a magic window that shows you how the world works.
(Of course, what would happen if you pointed a magic window at a magic window! It’s not a magic window: it’s a computer. Here’s the processor. Here’s the camera. Here’s the radio where it communicates with the network. Here's the radio waves heading off to the 6G picocell network. There's the picocell. Here’s the software. Here's how TCP/IP works. This is a packet…)
There's a generation right now growing up with Wikipedia. I wonder what the generation that grows up with this might be like.
I want one.
[0] Dan Hon @ 30k ft rn on Twitter: "Alt non-cynical, optimistic take: AR is a fantastic opportunity to help people understand and explore the hidden complexity of today’s world"
[1] Stephen Biesty - Illustrator - Cross Sections - Rescue Helicopter
--
It's 6:07pm at my departure origin, which means I'm about halfway to AMS. Time for me to knock myself out and see you on the other side.
As ever - notes and just random hellos (but not a Vulcan Hello, perhaps) - are welcome.
Best,
Dan