s06e07: Firebreak Protocol
0.0 Sitrep
[Sorry. This is a long one.]
I have started this episode multiple times. The first time I started it was on Saturday night, after the end of the first day of mainstage at the XOXO Festival, when I wanted to write some things down while they were still jangling around in my brain, as opposed to in a few days’ time, when I might not be able to remember them as clearly. Well, as the narrator would say, it is now at the very least a few days later. And with that, the creeping anxiety every day that I hadn’t written something. Which, you know, I have tried to alleviate by not putting myself under any pressure! The truth of the matter is that I had a whole bunch of super interesting conversations and, as always, things that had caught my attention during XOXO, and I wanted to make sure that I got them right. Done, as they say, is better than perfect.
It is now many, many days later (Wednesday 26 September, writing this while having dinner at Sacramento airport on the way back to Portland), as opposed to when I started (Saturday 8 September), so let’s just go with done, and forget about perfect.
First, I guess I can cover some of the things, etc:
1.0 Some Things That Caught, Etc.
* I am paying attention to Jay Owens’ thread about good articles on the current and future state of speculative fiction, in particular because it looks like it’s got a bunch of good references to under-represented and minority voices.
* Speaking of Jay Owens (look, just follow Jay), they have some great commentary on a New Balance “out-of-home campaign” that will use TensorFlow-powered AI to, in Owens’ words: “‘identify people who don’t look like everybody else’ - and give them free trainers, not ship them off to an internment camp. So that’s cool.” Owens has a really good point about how you’d do this campaign in a significantly less terrifying way, rather than going along with the current enthusiasm for continual, pervasive surveillance and also this weird idea that taking pictures of people isn’t “personal data”, which, come on, feels laughably naive *even now*.
* In a sort of self-help management lesson style, there’s a great quote from Phil Schiller, in Steven Levy’s Oral History of Apple's Infinite Loop, about what happened when Apple killed the Newton and the reality of having to make hard decisions that will upset people:
Schiller: We’re like, “Steve! Newton customers are picketing! What do you want to do? They’re angry.” And Steve said, “They have every right to be angry. They love Newton. It’s a great product, and we have to kill it, and that’s not fun, so we have to get them coffee and doughnuts and send it down to them and tell them we love them and we’re sorry and we support them.”
This is a really hard thing to do, and it keeps coming up in Silicon Valley hagiographies - this idea that you have to be a dick, that you have to be abusive, in order to get the job done. Most recently this came up with Linus Torvalds when he apologized for his past behavior on the Linux Kernel Mailing List: Torvalds, in a way similar to Jobs, has a reputation for being abusively blunt and uncompromising, and here he is now saying that perhaps there’s a better way. What I liked about the Schiller quote was that it showed a side of Jobs that we don’t get to see much - probably because it doesn’t fit the trope of What Steve Was Supposed To Be Like (and I know there are people reading this who did work with Steve) - and because it demonstrates that even someone who was apparently an asshole understood the importance of a) making a difficult decision, b) understanding that people may not like that difficult decision and c) helping them work through that decision. Reality, it turns out, doesn’t owe you shit, but that doesn’t mean we have to be mean about it.
* I recently accidentally read Maria Semple’s Where’d You Go, Bernadette [Wikipedia, Amazon] (soon to be a major motion picture from Annapurna starring Cate Blanchett!), which read like a sort of wonderful Microserfs-style Douglas Coupland novel because it hits many of my Interests right now (parenting, being stuck in a creative rut, moving to the Pacific Northwest, knowing people who do TED Talks, software/tech corporate culture, parent-teacher associations, and even epistolary storytelling via material collected by a fifteen-year-old girl, so I’m like SIGN ME UP).
* In fact, what actually ended up happening was that by the third time I’d seen a reference to Vulture’s premature attempt at a 21st-century literary canon, I had somehow persuaded myself that, in an act of personal growth, maybe it was time to re-emerge from burying myself in the books I regularly read (science fiction, tech, science) and, I don’t know, go and read some Proper Literary Fiction, so I started working down Vulture’s list picking out books that interested me. I mean, I didn’t want to go the whole way and read books that would feel like a complete slog, but something about where I was at that time made me more *open* to reading things that I might not otherwise choose to read. So, I ended up reading Maria Semple (above), and devouring Helen DeWitt’s collection of short stories, Some Trick, while I waited for The Last Samurai and Lightning Rods to become available through Libby, and oh-my-gosh, Some Trick completely blew me away. What I especially loved was On the Town, which you can read on Medium, and let me just tell you that I’d already fallen in love with DeWitt’s writing when one of the short stories included statistics code and plots from R and *many* references to Edward Tufte, and then, *even if you have issues with Randall Munroe*, another one of the stories referenced an xkcd comic. Anyway. DeWitt turned into the kind of writer who, in the best sense, I’m *envious* of, in that I wish I could write like that, and I need to find a way to understand what it is about her writing I admire and how I might incorporate that into my own voice.
* This is not news now, but via Adam Banks, the news that John Hancock, one of the oldest and largest life insurers in North America, is going to stop underwriting traditional life insurance policies and will only sell policies that track fitness and health data through wearables. So! Banks’ commentary was: “Can it be more than a couple of generations now before most forms of insurance are no longer commercially viable because there’s too much information about risk”, to which I am finding it hard to answer “No, it can’t be, unless a policy decision is made and regulations are put in place”, because *of course* insurers are going to try and reduce risk, and then it’s going to get really interesting because, say, in *some countries* there’s a strong presumption of personal responsibility and free will (and of course I’m not just looking at *you*, White, Anglo-Saxon, Protestant-work-ethic North America), but there’s risk management and there’s also “shit happens”. Another sign that this is just going to happen is that you can buy AppleCare+ with Theft and Loss protection for your iPhone, *but* “AppleCare+ with Theft and Loss coverage requires you to have Find My iPhone enabled on your device at the time it is lost or stolen”, which, again, makes sense! I can go straight from here to stuff like Smart Contracts, because the availability of data is going to force us (us-as-in-society) to have to resolve the delta between how we would like to behave (e.g. the speed limit) and how we actually behave (most people driving faster than the speed limit, and in my experience, it being kind of more dangerous if you don’t travel with traffic). I mean, if we dinged the car insurance premiums of everyone who broke the speed limit, then… everyone would have higher car insurance premiums? This is awkward! Should our speed limits be higher? This kind of thinking, and this availability of data, just opens up yawning chasms of confronting our actual behavior, somewhat like the current Republican talking points in the US right now, where political surrogates point out that “if we investigated every man who had ever done x to a woman, then we’d have to lock everyone up”, to which the only reasonable question is: what the fuck are all these men doing, and yes, doing that is wrong! (The implication, of course, is that doing x is absolutely fine, and, well, it looks like America is perhaps starting to come to terms with what might happen if a bunch of people want to express the view that behavior x is absolutely not acceptable at all. One would think that revolutions have been started over less. And, one would think that if, for example, a contiguous group of society suddenly exhibited the ability to, say, electrocute people, now would be a pretty interesting time.)
* Via Nat Torkington’s Four Short Links, news on Emily Short’s blog of TextWorld, a reinforcement-learning environment that generates text adventures (using Inform 7) for training agents, which ticks many of my “this is interesting” boxes right now.
* Walmart is apparently using the blockchain to do something important. The important thing is food-safety traceability, so we can do things like find out where that lettuce with the E. coli came from, which is definitely a super important thing right now. The weirder thing is that Walmart is using the blockchain to do it - and, to be fair, what they’re actually doing is having IBM do it, and IBM have figured out a way to use the blockchain to do it. Now, I read IBM’s Food Trust Solution Brief, and look: here’s what I think. I don’t think you need a distributed ledger to do this. You certainly don’t need a distributed ledger if the entire ledger is being run in an IBM datacenter somewhere. Yes, I see why it’s potentially useful for everyone in the food trust chain to be able to see all the actions that happen on that chain, but… that doesn’t appear to be happening. What I suspect happened is something a bit like this: having food traceability is a good thing and it’s something everyone needs to do, and you need it to be interesting enough for people to care about it, or a big enough stick to force people to do it. Now, Walmart are a big stick! They can totally force all their providers to do this. Blockchain is something that everyone can agree will make them sound super interesting, and unsophisticated investors will totally reward you for doing something Innovative, when really, if you want to be super cynical or just pragmatic about it, Blockchain might just be the thing you use to get people interested - the price of doing business so you can get the regular, boring thing done. So if Blockchain - run as a ledger, in just one data center - is the price of getting food traceability, then fine. I submit. This is also why we can’t have nice things. This is also not permission for anyone to go off and incorporate Blockchain into whatever proposal because it will help get some other thing done! I have the sneaking suspicion that will not end well!
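(To make the “you don’t need a distributed ledger for this” point above concrete, here’s a minimal sketch - actors and lot IDs invented, and emphatically not IBM’s actual Food Trust design - of a tamper-evident, append-only traceability log that a single operator could run in one data center. The hash chain gives you the detect-retroactive-edits property people usually want from a blockchain, with no consensus machinery at all.)

```python
import hashlib
import json
import time

def _entry_hash(body: dict) -> str:
    # Deterministic hash of an entry's contents (everything except "hash").
    return hashlib.sha256(
        json.dumps(body, sort_keys=True).encode("utf-8")
    ).hexdigest()

class TraceLog:
    """Append-only, hash-chained traceability log. One operator, no consensus."""

    def __init__(self):
        self.entries = []

    def append(self, actor: str, lot_id: str, event: str) -> str:
        entry = {
            "actor": actor,      # e.g. a grower, shipper, or distribution center
            "lot_id": lot_id,    # e.g. a case of romaine lettuce
            "event": event,
            "timestamp": time.time(),
            # Each entry commits to the previous one, so rewriting history
            # breaks every later hash - the property people actually want here.
            "prev_hash": self.entries[-1]["hash"] if self.entries else "genesis",
        }
        entry["hash"] = _entry_hash(entry)  # computed before the "hash" key exists
        self.entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or _entry_hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = TraceLog()
log.append("Yuma Farm Co-op", "LOT-2018-0912", "harvested")   # hypothetical names
log.append("ColdChain Trucking", "LOT-2018-0912", "shipped")
assert log.verify()
```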
* Via Cal Henderson: Microsoft PowerPoint is Turing-complete, which feels like just another way of saying that those horrific Department of Defense PowerPoint decks *could be worse*.
* Fast Company’s sort-of oral history of the OXO Good Grips vegetable peeler is a good story to have in your back pocket whenever anyone asks why they should bother paying attention to accessibility in {web, service, etc.} design, because I hope anyone you’re talking to can viscerally understand how the arthritis-friendly vegetable peeler ended up being easier to use for *everyone*, and hey, if *that’s* true, then what if… And via Pamela Drouin, this point: it was Sam Farber’s wife Betsey who had arthritis and a background in architecture and design, and who recognized “this is something that could be made better”.
* The fact that Nintendo’s Super Nintendo Entertainment System Mini is pretty much exactly the same hardware as their Nintendo Entertainment System Mini is, I think, either a triumph of branding and marketing or a great example of how branding and marketing is horrible and how we can be taken advantage of for segmentation purposes.
* Tor.com published The Nearest, a novella by Greg Egan (you know, Permutation City, Diaspora, etc.), which popped up in a Reddit thread that was more or less “please does anyone have any more books like Blindsight”, and I agree that the novella *is* like Blindsight, in that it’s just as bleak and is also about the nature of human consciousness and how our brains work.
* I did not know about data diodes, which are… hardware/software devices that make sure data only passes in one direction. Look, BAE Systems makes one (of course they do).
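(The one-way property of a real data diode is enforced in hardware - think an optical link where the receive fiber is physically absent - but you can get a feel for the idea in software, because UDP needs no return path. A toy sketch, with a made-up receiver address:)

```python
import socket

# Fire-and-forget, one-way transfer: UDP is connectionless, so the sender
# needs no handshake, no ACK, and no reverse channel that could be exploited.
# A hardware diode makes the reverse path physically impossible; this just
# illustrates the shape of the protocol. Address is hypothetical (TEST-NET).
DIODE_ADDR = ("192.0.2.10", 9000)

def send_one_way(payload: bytes) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, DIODE_ADDR)

send_one_way(b"sensor-reading: 42")
```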
* Via MacRumors, it looks like Apple are using the T2 chip to prevent machines that have one from booting, when certain repairs are made, unless Apple diagnostic tools have been run and passed. There’s an easy good reason for this - the T2 chip is there to ensure a root of trust, and everyone knows that if someone’s got physical access to your hardware then pretty much all bets are off. So this is one way to restrict fundamental system repairs to Apple Authorized Repair locations (which brings up its own set of problems if, say, you’re not near any). But! All this is business as usual, and groups are reasonably and expectedly bringing up the right-to-repair issue. Your John Deere tractor and your Apple Pro laptop are increasingly closed systems. The thing is, Apple have trotted out, on occasion, the “trucks versus cars” argument, where on one level the iOS devices are the cars (they just help you get around) whereas the macOS devices are the “trucks” - more specialized, niche devices that expose a bunch more functionality and power and aren’t just for, say, consumer transport. But what if one of the whole points of a truck is to be able to pop the hood and, say, install a new GPU? That is something you cannot really do right now with a new Truck Mac! And one way of looking at this is asking: who are Apple’s pro machines for, anyway? Are they for enterprise-style customers who are okay with high performance and “relative” amounts of extensibility (ultimately, I feel like the biggest issue here is whether you’re able to stick arbitrary PCIe cards in, and no, TB3 doesn’t count) *and* a T2-style chip that still locks everything else down? Or are Mac Pro Trucks supposed to be *much* more configurable, where, well, you get to choose if you want a T2-style chip or not, or you get to choose how much of what it does? I don’t think, for example, that Apple will let you decide whether or not the T2 chip enforces authorized repairs. So hey, this will keep happening so long as software does stuff “in the real world”.
I have lots more things that caught my attention, but that’s probably a good start for now, because I want to get to:
2.0 A Long-Winded Set Of Meandering Thoughts Regarding Social Media
This is the attempted documentation of a bunch of thoughts I had leading up to, during, and slightly after the XOXO Festival earlier this month (oh my god, it is actually October now). A lot of it has to do with the resurgence of interest in Mastodon and the weak signal of a sort-of exodus from Twitter - that a bunch of Twitter early adopters had jumped ship. What might this mean? Clearly, lots of opinions can be had.
I still think that one of the big advantages of Mastodon is that it *doesn’t* provide a mechanism for dealing with abuse in a global, top-down manner. This is because Mastodon is federated - lots of small groups peering with each other - as opposed to something like Twitter or Facebook, where you can always find out where Jack or Mark live and try to make them - and just them, as the CEO - impose some sort of change, globally, on the social graph. There are good reasons why you might want this: there might be bad actors that you want to deal with once and for all, instead of repeatedly. But! I submit - without much evidence, I have to admit - that we as a species are not ready for such a thing! Zuckerberg has already jumped ahead and is musing - a bit like a first-year law student, I might add - about the possibility of Facebook having to have some sort of “supreme court”, without realizing that, as a species, we can’t even decide what the UN should do, or how the UN should go about doing it. All of this is to say that maybe we’re still at the point where small groups of humans should get to decide their own community standards, as opposed to a literally global standard.
I’ve had a number of conversations with friends about this and the idea that Facebook is a “community”, to which my strong position is that Facebook is only as much a “community” as people who send letters are a “community”, which is to say that they aren’t. I mean, yes, in early times there might have been the Bay Area Postal Service User Group, but there’s some sort of threshold that’s reached where something is, well, a utility. We do not speak of the Drinking Water Community or the Television Watching Community, and when we do talk of things like the Videogaming Community we discover that there are a bunch of people who think *they* are the “videogaming community” and that other people shouldn’t be in it. (The former are wrong, and this has a lot to do with confusing a medium for a genre.)
But, I think, the deal is this: Facebook and Twitter, with their global namespaces (i.e. the fact that you can find anyone in the world and add them to your “timeline”) - they might be the aberration, right? Maybe discoverability shouldn’t be that easy? Maybe it shouldn’t be so easy to find someone and “add” them to your network? The analogy that I have is from when I happened to be reminded about the concept of small-world networks in Sean Carroll’s book, The Big Picture. From what we know, the neurons in our brain are connected in a small-world fashion: every neuron isn’t connected to every other neuron (if that happened, our brains would fall off the edge of criticality), but instead they’re very well connected locally, less well connected at the medium range and sparsely connected at the long-distance end. But being connected in this way means that every neuron is, say, only about 7 hops away from any other neuron in the brain. Facebook, at the time, were very enamored of this concept. One of their most enduring images - and I remember this from when I worked with them - is the connection map that they were able to produce. This connection map of Facebook relationships ended up outlining the world (well, not quite, there were significant dark areas), but it was something they were (and are, I think) very proud of. Look at us! Look at us as some sort of colossus of connection bestriding the Earth, the nodes from our graph revealing the geographic distribution and density of humanity. In that image, Facebook could assert that it was powered by something that was innately human: social relationships.
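(If you want to poke at the small-world property itself, it’s a ten-line experiment with networkx - a sketch, with node counts picked arbitrarily rather than anything brain-shaped: a pure ring lattice has long average hop counts, and rewiring just a few edges at random collapses them.)

```python
import networkx as nx

# A ring lattice: 3000 nodes, each connected to its 10 nearest neighbors.
# No shortcuts, so the average hop count between nodes is long (~ n/2k = 150).
ring = nx.watts_strogatz_graph(3000, 10, p=0.0)
print(nx.average_shortest_path_length(ring))    # ~150 hops

# Rewire 5% of edges at random and it becomes a small-world network:
# still densely clustered locally, but the average path length collapses
# to single digits - the "every neuron is ~7 hops from any other" effect.
small = nx.connected_watts_strogatz_graph(3000, 10, p=0.05)
print(nx.average_shortest_path_length(small))   # single-digit hops
```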
And yet! And yet, the default is to have the global, public timeline on both Facebook and Twitter. The idea being that openness wins, which, okay, that’s a nice saying, but maybe, just maybe, the thing about the Dunbar number that a group of increasingly irritated people keep going on about (and these people are increasingly irritated because they appear to be saying “we told you so” increasingly frequently) is that a group size of around 150 people helps prevent a) context collapse and b) mob behavior. Look: someone can say a dumb thing and lose their job before their plane lands. Does it make as much difference whether 150 people find out about the dumb thing or, I don’t know, several hundred thousand? What does it mean when someone could be strung up and paraded around in the stocks in front of their town or village instead of, well, in front of anyone who happens to be on Twitter that day? This, I think, is one of the biggest criticisms of Twitter Moments, which *at times* take what could be small-scale events that can be dealt with locally and instead turn them into News that invites everyone’s attention.
Part of the deal with having small(er) federated communities is that they give us - users - *choice*. Right now, I don’t have a choice if Jack’s going to be a dick and not realize that having a Nazi-friendly posture is not a great thing. Only, I do have a choice now, because I can go find a Mastodon instance that takes a different posture. I get that certain problems are solved more easily with centralization - but centralized systems also take away choice. And if you want an idealized system with some sort of cryptographic chain-of-trust that’ll enable you to run Mr. J. Nazi out of every single town, then it *feels* like you’re saying we can have secure DRM, and… we know we can’t have that?
So then, there’s this idea of a firebreak, or to make it sound cooler and more like a Michael Crichton novel, a Firebreak Protocol. A friend and I were talking about moving from Twitter to Mastodon and how, when they’d deleted their Twitter account, they’d lost access to the Twitter->Mastodon bridge, which would’ve used their Twitter account to discover all the people they used to follow and their accounts on Mastodon. Essentially, a way to re-create a social graph that had existed on one platform on another one.
And then I said: OK, but why? Why do you need to be discoverable? Why do you need to recreate that graph, and why do you need to make it easy for people who used to follow you to follow you again? Because, look, I know that one of the reasons I’ve stopped using Twitter is a need for external validation, and maybe having one less avenue for that might be healthy for me. And what if it’s good to have some friction in bringing that graph back over? What if it’s good to have that space - a firebreak in the woods, some intentional burn - to stop a graph from jumping from one place to another and starting another fire? What if it’s okay to be intentional again and to build that graph back up from scratch? It is, I think, a bit like declaring email bankruptcy and trashing all those unread emails, on the trust that if something were *really* important, then they’d email me again about it.
But anyway. I’d been invited to take part in a conversation with a team in Intel’s client computing group - working on VR - as part of their research into what Intel might be able to do in the space of anti-toxicity in technology. There were a few things that came up in conversation that have since stuck in my mind and are rattling around.
First, restating what feels like it should be obvious: technological solutions won’t work for this. These are policy decisions, human decisions, to which technology can be applied, but if no one’s willing to stand up and say “this is the kind of community we want, and these are the standards”, then… you’re going to end up with the worst of both worlds? Vague technology aimed at something like sentiment analysis, trying to detect “abuse”, when the reason we have a court system for this kind of thing is because context is so important. And I remember saying again that there’s a uniquely American point of view here, which is to prioritize freedom of speech as an end, not a means, and this is clearly *just one point of view and approach*. China clearly has a very different point of view on what speech is for and the role of the individual versus the group! And then, you have what played out over the summer with Apple apparently being the first mover to take action, first banning Alex Jones’ podcast and then the Infowars app from their store. For whatever reason - and fine, you can tie this back to Steve Jobs being opinionated, asshole or not - one of the things about Apple is that they *have a point of view*, and the App Store is one of the most recent instantiations of that point of view. Apple can decide whatever they want for their storefront, in the same way that Valve appear to be abrogating their responsibility for the store that they happen to operate, the Steam platform.
But these decisions are hard and people are going to get upset and disappointed or, apparently, start making death threats because that’s the societal level of discourse these days. Banning Alex Jones’ podcast and app is, in this way, a little bit like deciding to kill the Newton and having the courage to do so even if people are going to disagree with the decision. But it turns out, hey! It’s your company! You can decide!
There is no technology that will make the decision about banning Alex Jones for you. I submit that you can’t even use utilitarianism for such a decision, because you’re going to have to supply whatever utility function you pick with seed values in the first place. (It is probably not a surprise to you that I come down on the side of “values are not an absolute, inherent constant of the universe; they are a human construct”.)
Part of my analogy is this: there’s been a trend recently for anti-homeless architecture in urban places. Benches that you can sit on but not sleep on, and so on. For some reason, it is okay to design public spaces that are hostile to homeless people (for the record: I do not think this is okay), but it is not as okay to design pseudo-public spaces online that are hostile to Nazis? That feels weird, right? I mean, all it would take would be for one of the entities that controls a pseudo-public space to kind of declare “hey, this space isn’t for Nazis”.
So then the question is: what happened to community moderation tools in the past 20-odd years? And I think the answer is: not very much? Back when I did actual community moderation, it was a Yahoo! Groups mailing list and I had about seven other moderators, and let me tell you: we read every single goddamn email to that mailing list, much in the same way that the MetaFilter moderation team reads *every single comment made on MetaFilter*. At least, I’m reasonably sure that they do. And then the response time is, like, sub-5 minutes. This is a lot of work! There’s a lot that’s being done brute-force. There are a lot of heuristics that go into figuring out whether a certain message is going to be okay or is going to derail a conversation, and I think a lot of that has to do with many small signals, like the account’s username, the tone of voice of the post, the time of the post, the interval since the account’s last post, and a whole bunch of others. At the same time, I’ve learned about other home-grown moderation tools - automod bots on Reddit have moderator-run watchlists for words that on the one hand contain what you’d expect (e.g. the BBC Bad Word List, like not being able to say you’re from Scunthorpe), but also independently discovered words that turned out to be pretty good indicators of conversations going downhill - like calling each other stupid.
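(Here’s a hedged sketch of what I mean by combining those small signals - weights, word lists and thresholds all invented for illustration - into a triage score that routes a post to a human moderator’s queue rather than auto-removing anything. Note the word-boundary matching, which is how you avoid telling people from Scunthorpe that they can’t post.)

```python
import re
import time

# Word-boundary matching avoids the Scunthorpe problem that naive substring
# filters have: "stupid" matches; a town name containing a banned substring
# does not. Word list invented for illustration.
ESCALATION_WORDS = re.compile(r"\b(stupid|idiot|moron)\b", re.IGNORECASE)

def triage_score(post: dict) -> float:
    """Combine small signals into a score for a human moderator's queue."""
    score = 0.0
    if ESCALATION_WORDS.search(post["text"]):
        score += 2.0                     # conversations often go downhill from here
    account_age_days = (time.time() - post["account_created"]) / 86400
    if account_age_days < 7:
        score += 1.0                     # brand-new accounts warrant a closer look
    if post["seconds_since_last_post"] < 30:
        score += 1.0                     # rapid-fire replies correlate with heat
    if re.search(r"\d{4,}$", post["username"]):
        score += 0.5                     # throwaway-looking usernames, weakly
    return score

post = {
    "text": "you'd know this if you weren't so stupid",
    "account_created": time.time() - 2 * 86400,
    "username": "user84721",
    "seconds_since_last_post": 12,
}
if triage_score(post) >= 2.5:
    print("route to human moderator queue")  # a human decides; the bot just sorts
```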
This is why - from what I can make out - the work that’s going on in areas like semantic analysis to ensure civil behavior in pseudo-public fora needs to be more connected with the people actually doing day-to-day moderation. It does not help that the number of people doing this professionally (and I’m not necessarily talking about *content* moderation, but rather *community* moderation) has been dwindling over the last few years as, post the Slashdot and Reddit epiphany, the trend was to rely on crowdsourced up/down-voting to deal with “the moderation problem”. The point being this: the job is not just to produce a float value for the *sentiment* of a comment (“is this racist?”) but *also* to figure out whether it’s possible to have a productive conversation about something like race, and how that can be encouraged. Sentiment analysis as naively practiced is an unhelpfully blunt tool, in that sense.
Humans appear to love thinking in binaries - one reason might be that, hey, our brains are expensive, and any ability to collapse issues or decisions down to Thing A or Thing B is potentially an optimization that has served us well (so far, etc.). So there’s this weird thing that always happens where people say “let’s use technology to fix this [community/content moderation] problem” and a bunch of other people say “haha no, are you kidding, this is a human issue and only humans can really deal with it”, and instead (fine, spit at me and call me a centrist, but I don’t think that’s what’s going on here), it seems pretty clear to me that:
a) humans have to decide what the outcome is
b) they intentionally design and implement tools (“technology”!) to achieve those outcomes
c) they obtain feedback, adjust if necessary, and maybe think about whether coming down from the trees was such a good idea, etc.
The example I gave to the Intel team was that maybe a model for how a company like Intel could contribute positively to managing and remediating toxic online communities is the background, real-time spellchecking introduced in Microsoft Word 95, when the most exciting thing that happened that year was seeing a squiggly red line.
Specifically around managing and moderating conversation online: where are the tools that help a human moderator do their job? What’s the equivalent of that spellcheck? What’s the equivalent of being surrounded by a bunch of helper bots? One theory I have is that the energy that *might* have gone into developing that infrastructure was instead diverted into social media analytics that would help brands identify Influencers and reach and so on. But hey, that’s where the money was, so who am I to begrudge it.
The thing is, there’s no money there right now because all the money got sucked out of managing community online and got dumped into some sort of negative externality. So, says I, gesturing at the multinational behemoth that nonetheless continues to have horrific problems with its process shrink, you’ve got a crap-tonne of money and you’re not half-bad at making software, maybe you can fund some commons software work and kickstart that research and delivery of community moderation aids and hey, why don’t you start somewhere like Mastodon.
—
Look, I feel like I could’ve written a lot more, but it’s clearly taken a long time to just write this much and I fear that I didn’t really stick the landing on this one. Well, fine. It was just supposed to be a bunch of thoughts and sent is better than perfect.
One little bit of news: I will be co-chairing the Code for America Summit again and if you’ve got opinions about government and technology then my inbox is open. (My inbox is open anyway, I very much welcome replies to this, even/especially when they’re just to say hi).
Best,
Dan