s07e18: Do you have WiFi? / Liars are bad
0.0 Sitrep
Wednesday, October 30, 2019.
Written mostly in the air. My head is swiss-cheesed because I just quantum-leaped from DCA to PDX and something in DC bit my arm and it’s all hot and swollen despite all the antihistamines I’ve taken. Don’t worry, it’s probably not transmittable over the internet, otherwise we’re all screwed.
At some point today I also need to pay a lot of attention to some Word documents and have opinions and recommendations about things. Fortunately it looks like it’s really easy for me to have the former.
Anyway, on with the things.
1.0 Things that caught my attention
Two things today: the first hopefully funny and dumb, and the second, honestly, I really don’t know.
1.1 Do you have WiFi?
The first one, I think, is an example of “why was this rule made” and I am going to explain how it popped into existence in my head. I’m walking through SeaTac airport, connecting through to the Seattle->Portland shuttle on my way home. Two asides: first, we should not be allowed to call things shuttles in the 21st century unless they are short-hop, carbon-negative sub-orbital flights. Second, I connected to the airport wifi, whose SSID was something like “SEATTLE-AIRPORT-FREE-WIFI” and, well, I continued my habit of putting silly things together.
Because if we live in a world with SEATTLE-AIRPORT-FREE-WIFI and also things like MCDONALDS-FREE-WIFI or DMV-FREE-WIFI and so on, then it is funny (in my head, at least) if, in the world of Star Trek: The Next Generation (sorry, again), there is a 1701D-TEN-FORWARD-FREE-WIFI network, which would be the wifi network for the equivalent of Starbucks or the Cheers bar or whatever on the Enterprise that has Picard on it.
Now, if there’s a free wifi network at Ten Forward, which makes sense, then it would make even less sense (and therefore be potentially funnier) if there were a wifi network named 1701D-ENGINEERING-FREE-WIFI in... the Engineering section. It would be funny and make no sense because, first, what business do guests have down in the engine room needing wifi, and second, this is a stupendously stupid security risk. It is the kind of risk that results in your enemies stealing your shield rotation frequency codes and being able to steal your ship (see: the Duras sisters).
But what would be an even dumber place to have WiFi than Engineering? The bridge, of course! And then we can imagine: under what circumstances would Starfleet, in its great and knowledgeable bureaucracy, decide to disallow the deployment of free WiFi on the bridge of its ships? What possible event could have happened? I mean, at a high level, it was probably something like this:
*Alien materializes on the bridge of the Enterprise, punching through shields. In theory, everyone should be surprised about this but this is not real and things happen that are driven by plot, so people are only mildly surprised that this happens. In any event, PICARD greets the alien thus:*
PICARD: Greetings. My name is Jean-Luc Picard and we come in peace from the United Federation of Planets. We are scientists, explorers and diploma-
*the alien taps at some sort of device it is carrying. It is inexplicably bipedal*
ALIEN: Do you have WiFi here?
*PICARD smiles*
PICARD: Yes! Yes we do have WiFi. The network is 1701D-BRIDGE-FREE-WIFI and
*the alien taps at their device*
ALIEN: Do I uh have to buy anything?
*in the background, TROI is tilting her head as if she can tell something out of the ordinary is going on*
PICARD: Oh no, the Federation is a post-scarcity society and we have technology called replicato-
ALIEN: OK what’s the password?
PICARD: Ah yes. The password is “infinite diversity in infinite combinations”
ALIEN: is that all one word, all lowercase?
PICARD: Yes! You must have had contact with other species in this sector to be familiar with standard communi-
LA FORGE: LA FORGE TO CAPTAIN PICARD! THE ENTERPRISE MAIN COMPUTER CORE HAS BEEN BREACHED AND WE’VE LOST ANTIMATTER CONTAINMENT! WE’RE EVACUATING ENGINEERING!
ALIEN: ...
*PICARD narrows his eyes at the alien, who sort of shrugs before dematerializing again*
*Three “star dates” later, in the main conference room, we join in the middle of a report Geordi’s giving* [Aside: do you think they name conference rooms in Star Trek?]
LA FORGE: ... we’ve successfully purged and restored the main computer, using a backup retrieved from the most recent comms buoy. I’ve recommended Starfleet immediately disable all guest WiFi on all Federation ship bridges pending a full security audit.
PICARD: Very well. Good work, Geordi. The lack of guest wifi on the bridge is certainly a minor inconvenience to our mission of peaceful exploration and cultural ties with alien cultures, but the implications for ship security are... *beat* very much apparent to me after the events of the last few days.
*chuckles around the conference room table, apart from Geordi, who’s had to clean up this whole mess; fade out to credits*
... and that’s why some places aren’t allowed to have free guest WiFi.
1.2 Liars are Bad, or, Scaling social norms
Look, I still haven’t read Seeing Like A State and it is still in my unread queue. If I’m being honest, it has now approached some sort of mythical beast status, lurking in my queue and hanging over me like some sort of respectability sword of Damocles. I need to read it and now it just feels like such a chore. I have recently read, though, a review by Scott Alexander of Against the Grain which at some level explains that Seeing Like A State explains why States need to see things, which is because they need to tax things. I expect the book is a bit more complicated and nuanced than that and also has a bunch of evidence to back up its hypothesis.
I mention this in the context of Facebook appearing to blunder its way, in what appears to be realtime, through figuring out a) what a society could be, and b) how it might operate, all via press releases, speeches to Georgetown students and ill-advised personal commentary from executives about the value of Free Speech.
A couple points I’m drawing together here:
First, Adam Mosseri tweeted a few days back in response to criticism that Facebook News had selected (qualified, I suppose) Breitbart News as a “trusted news source” that had passed muster for inclusion in the Facebook News “section” of the Facebook Platform.
I should disclose — and this is not a humblebrag in any way — that I briefly worked alongside Mosseri when I worked in ad agency land. I have to admit I had an opinion about him before I started, that opinion wasn’t significantly disproven while I worked with him, and it has subsequently not been disproven either. Which is to say Mosseri totally fits into my bucket of “tech person who thinks they know better and believes they are entitled to forge ahead”. If I’m being self-aware, I’d say I probably don’t like him because I see in him traits that I don’t like and see in myself, namely that in my interactions with him, he appears to me to think that he’s smarter than everyone else, and I’m especially sensitive to that. I would like to think, though, that a part of me is very conscious of what I am not smart at, and likes to work with people who know things that I do not *and that I do my best to listen to them* before telling them why I think they might be wrong.
All of this is to say that Mosseri asked two questions:
a) Surely it is better for Facebook to have no political position than for it to have a political position, given its size and influence?; and
b) Why is it that there is outrage over Breitbart being included in Facebook News and not, say, Apple News?
There are two easy answers to this.
To the first: Facebook already has a political position. It may not have one that it thinks is clearly aligned to one of the two major political parties in the United States (which, to be fair, is what Mosseri might be trying to say on Twitter). It’s true that Facebook haven’t come out and said that they explicitly support the GOP. Their current actions suggest they clearly do, but saying so would be embarrassing, cause issues with their employees, engender a firestorm of reporting and so on. It would be more accurate to say that Facebook’s political alignment is more or less a realpolitik toward whomever is in power at the time, AS WELL AS a sort of undisclosed, not particularly well articulated internal sense of politics derived from Whatever Mark Thinks Is Right At The Time. I note for the reader that this is not an altogether dissimilar approach to that of the current head of state of the world’s most powerful and influential [citation needed] country.
To the second: are you freaking dumb? The reason there is outrage over Breitbart’s inclusion in Facebook News as opposed to Apple News is a) that Apple News is only 4 years old, having started in 2015, and b) the bigger point, that Apple News is merely an aggregation platform and not intimately tied to the world’s largest homogenous communications platform, comprising over a billion daily active users.
One of these things is clearly not the other. Again, there are two high-level responses to this. One: Mosseri (who I should point out was Head of Newsfeed before his current role as Head of Instagram) appears to genuinely not understand the difference between a) a social platform that also acts as a news distributor and aggregator, making editorial choices; and b) a juvenile news aggregator that might make editorial choices but what do I know, I never open Apple News anyway. If this is true, then he is certainly not the smartest person in the room. Two: Mosseri is purposefully dissembling, implying that he does not know the difference when of course he knows the difference; his job depends on the money being made by virtue of Facebook being the world’s largest communications platform which, to be clear, is funded by advertising.
The former option is horrifying because it feels like Facebook is genuinely stumbling through what appear to be elementary issues that, you know, hiring smart people might fix, but only if it happened to listen to them. The latter option is less horrifying, but only because it is not necessarily a new revelation: that Facebook leadership is lying, again, about its intentions, motivations and quality of reasoning. Both of these are bad, they’re just bad in different ways. One of them is more surprisingly bad because it’s just... a new level of displayed potential ignorance.
The second point is the case of Adriel Hampton, who has decided to run for governor of the State of California to protest Facebook’s decision to purposefully allow unregulated, unmoderated paid political speech. This is a person who has satisfied the governmental requirements to stand as a politician, and who stated his intent as he started his candidacy. I want to be clear that the government is absolutely fine with him doing this, and one of the reasons why is that the government has no business (or interest, frankly) in regulating a candidate’s first amendment right to expression in this arena.
But as of writing (ooh, I feel like a proper writing person), Facebook said Hampton won’t be able to run those ads, because he would be contravening policy. Facebook appear to be saying that it’s not OK to say that you want to lie in an ad and then find a way to be allowed to lie in an ad. In case it’s not clear, Facebook are saying that this person’s candidacy is not genuine and is disguising a hidden motive, which is to contravene their terms of service. I mean, yes? He explicitly said he wanted to run ads that are lies. So... it’s not a hidden motive? And Facebook explicitly said that politicians are allowed to lie. This would be like some country saying that there are rules restricting certain thresholds of media ownership based on citizenship, and then a media owner going out and marrying someone, thus acquiring said citizenship and increasing their media ownership. Sure, there are rules, and who is to deny the love between a media owner and their partner?
So Facebook is now in the position of deciding who is a political candidate and who is not a political candidate (ie: who is allowed unrestricted paid speech), all while Mark was speechifying at Georgetown that Facebook’s virtue, nay, destiny, is to inch-by-inch inexorably move the world toward Total Complete Freedom of Speech, because you can never have too much of that. Oh, and by the way: not for these people, or for those reasons.
If I’d advanced a position like this way back when I was a baby law student in one of my supervisions, I would’ve been torn to shreds by a very mean Cambridge law professor in front of my peers and probably ended up crying. This byzantine wriggling and attempt to justify actions that Facebook are demonstrably making up on the fly is super embarrassing and, for anyone who’s capable of or inclined to exercise a bit of reasoning and thinking, falls apart at the slightest touch.
No, what Facebook are saying (which is, you know, within their right) is that they’re pretty much a sovereign private platform capable of making decisions about how their products and services are used.
Funnily enough, a younger me would think that sure, what we need is smart people making decisions because, on average, people aren’t great at decisions, so let’s totally have a representative technocracy. Turns out the people who think they’re smart are dumb! Turns out this is exactly the kind of thing a bunch of people would leave a country over, and go steal a country from a bunch of other people, just to get away from someone making dumb decisions who can’t be easily replaced! Lest we forget, Mark has managed to form around himself some sort of corporate law shareholding magics that indeed make him God Emperor For Life of the Facebook franchulate.
All of that was just preamble to this thought:
If, say, a society decided that, in general, it’s not great if people lie, how might that society enforce the principle? We know how we deal with it already in western society: through laws about libel, defamation, slander, product advertising and so on. It turns out that a society, through government deriving from the consent of the governed, can make rules about this kind of thing if it wants to!
So the question then is: how do these principles get enforced? At a fundamental level, this goes back to negotiating the tension between surveillance (ie: seeing like a state in order to achieve stated goals) and delivering on promises. Note that I say this is a tension to be negotiated. There is inherently a compromise. I got to ask a few questions and talk about this last week when I was invited to the Public Theologies of Technology and Presence conference. What I asked, in response to presentations from scholars, was: if on the one hand surveillance is inherently anti-democratic *and* on the other hand some sort of state knowledge is required in order to have a functioning society that meets the needs of its members without trashing the planet, then how do we decide how much surveillance is enough, or how much is too little?
America, generally, has a principle that there should be as little surveillance as possible, and this principle has been hilariously shown to be a wish, and not a statement of fact, especially after 9/11 and knowledge about the NSA’s wiretapping activities, never mind the government’s activities in surveilling activists for social change in the past and, in all likelihood, the present.
But the related issue here, which I also got to discuss with my committee members for the Code for America summit yesterday, is where regulation of technology clashes directly with implementation of the social contract. I will use the food stamps example again: we live in a world where we have decided that people should not go without food, and the country is rich enough to make sure people do not live in hunger. Delivering on this promise requires some information in order to distribute food resources, whether that’s money, or food directly, or some weird Republican idea of food-in-a-box direct to the door via Amazon Prime. Sure, why not! But the act of administration itself requires some sort of infrastructure of knowledge. How much is enough? When there are directives for social service programs to be “more efficient” and offer “better care”, and when there are egregious, tragic examples of care that could have been provided but for “bad information sharing” resulting in hypothetically preventable deaths, how should that information be shared? Who should it be available to? Under what circumstances?
All of that is background to this one thought experiment, and I apologize for using the word “scale” here, but I think it’s important. By “scale” I genuinely mean: what would be required, what would it cost, and what institutions, processes and flexibility would need to exist, for equitable access for all to a certain outcome?
Consider “lying”. You could take someone to court. Going to court takes a long time. Sometimes, something taking a long time might be a good thing. Justice should not be rushed lest it make inequitable or bad decisions. Decisions that affect people’s livelihoods should be made with due care and respect. (Bonus points: try defining what due care and respect are, first in the aspirational sense and then in the realistic, achievable sense.)
Small claims courts exist for people to recover debts, but are still hard to use and still take time. But small claims courts and the small claims process are again examples of societal intent — and therefore elements of the social contract — that aren’t then met because of bad or inadequate implementation. What does it matter to me if I can sue for a debt if it might mean I get my money in 9 months’ time, and it costs me more time and money to recover the amount owed? That doesn’t feel equitable to me. That doesn’t feel like government holding up its end of the bargain.
The point I was trying to make, or the provocation, I guess, was that some sort of technological infrastructure is *required* for government to meet its promises. Bureaucracy and the civil service, itself a *handwaves* 19th century innovation, is a technology. Recording how much corn people harvested is technology. Keeping those records to see if they lied is technology. Keeping records about how much corn you “stole” from them in order to fulfill a social contract is also technology.
So back to the central question: if we decide that there is to be a cost to lying, if we decide that there are to be consequences, how do we deal with that equitably? How do we actually follow through with administering justice blindly? My optimistic take is that technology can and must be used as a tool to achieve those aims. Facebook are acting as if they already know the answer and “engineers” are iterating on it. They simultaneously say that “engineers”, as ostensibly the only employees at the platform company, should not be the ones making decisions about what counts and what does not count as allowable speech. Sure, I’ll go along with that.
So who does, and how do we make sure it’s equitable? How do we make sure it’s fair? I say all this in the knowledge that Facebook is not everything. It doesn’t reach everyone. It does not make sense for Facebook, for example, to be the main method through which people are chosen for, say, jury service. I do know that people have been talking forever about the use of videoconferencing in courts, and I guess it’s being used more than ever now, compared to when you could only do it over point-to-point ISDN? But for some people, SMS is a better way of getting in touch with them about a court appointment than sending them a letter to an address they don’t live at any more.
And yes, I get it. It would be very easy to get these things wrong and to be a solutionist technologist, who would swoop in and say I GAZE UPON YOUR ANTIQUATED INEQUITABLE ADMINISTRATIVE STATE AND GRANT THROUGH MY BENEFICENCE IMMUTABLE PROPERTY REGISTERS SO WOMAN MAY TRANSFER VEHICLE TITLE UNTO WOMAN OR MAN OR NONBINARY WITH EASE, and sure, that might be OK, but if there’s one thing we’ve learned lately it’s that scale... doesn’t.
I mean, one thought I have is that if you want full employment, you could probably turn every single person into a customer service agent. Automate all the cases that you think are the regular cases (and which turn out not to be the majority, after you’ve actually examined them) and then build mechanisms that allow a mass of people to bring flexibility, sympathy and compassion to formerly inflexible and inhuman administrative decisions.
I know I’m rambling.
Mark wants to treat the world like one amorphous blob; at least, he speaks that way. He talks about free speech in the aggregate. He refers, I am sure, to the tired cliche of the global village. Well, why wouldn’t you expect us to form mobs in this village?
I started this by asking what the institutions for enforcing (better) a social norm like “thou shalt not lie” might look like and, as a reply on Twitter pointed out, one of the historical ways of doing this has been religion and the Literal Fear Of God. And then (I totally acknowledge I am speaking without learnedness or authority about basically entire swathes of human history and, I want to point out, I AM JUST GOBBING OFF) we tried replacing those institutions with the State and things like Taking Money From You or Putting You In Jail, rather than throwing you into the woods, burning you at the stake or insisting that the sun really does go around the earth.
Facebook will throw you out into the woods. Twitter will too, but not if you’re a head of state. And I see that as of today, Twitter has decided to spoil Facebook’s earnings call by announcing their new policy on paid political speech, which is to say: they don’t want any part of it. Paid reach is cheating, says Jack, so you should earn your reach. There are, and will be, of course, nuances to this strategy. Turns out 280 characters may not be the best forum in which to discuss policy.
But is Nextdoor now an institution through which social norms can be enforced? How can we be sure that Nextdoor isn’t inadvertently enforcing some social norms that we know we might perform, but that we’d prefer we didn’t? How might our better angels be unshackled, deliberately? I mean, Nextdoor makes it super easy for people to be racist, but it kind of also makes it super easy for people to be nice?
An issue, of course, is concentration of power. Much of Nextdoor’s architecture is set from the top down, and now I start to sound like I’m advocating for America’s stance of federal, local devolution of power. And, I guess I am? Let communities make their decisions but (or, and) let’s not forget that we’re having some critical arguments about what basic human rights should be enshrined. If Nextdoor is essentially the new Town Hall and its platform design encodes and encourages behavior, then how do people directly engage in that? I have mentioned several times in person over the last few days my go-to example, which would be California doing Yet Another Product Information Label:
WARNING. NEXTDOOR CONTAINS DESIGN DECISIONS KNOWN TO INCREASE INEQUITY IN NEIGHBORHOODS.
I mean... so what? How would this help? I have to use this thing anyway, in some way, if I want more options to participate, but my only customization is whether I get an orange background or an orange background? This makes me start to think of the works councils that are required in Germany, to guarantee some degree of worker representation. I do not want to imagine what some sort of Facebook Council of Users might even look like, thank you very much.
To me this starts to point inexorably in the direction of more informed and purposeful regulation of technology, and also of inadvertently reinforcing existing structures. I think, I truly believe, that it is easier than ever before for *anyone* to create a new networked technology that would be fundamentally transformative to any person’s or group of people’s lives. But clearly not for *everyone*. And we live in an environment that’s set up such that a guy who wants a better chance of having sex sets up a website and accidentally ends up being able to swing elections, but also thinks he must have been super smart to end up in that position. I’m not saying he’s *not* smart, but, you know, maybe that’s not the only thing going on here?
What do we want technology to do for us? Where do we think there might be too much? What is actually OK? These are societal questions, and they require all of society, or at the very least the parts that have been chosen to represent society and its interests, to articulate much more specifically what we want to achieve and then deliberately go out and achieve it. The trend for KPIs was helpful, but then turned into a cult and we discovered how organizations might rig the game for results. But at least we had a target where none might otherwise exist. And targets require making hard decisions. It feels like we’ve been ignoring the fact that we have to make hard decisions for quite a while.
I did not know where this would land. I have written over 4,200 words now, and I don’t know how this ends. I do know that it was interesting enough to talk out loud about, and now I have a bunch of thinking to do, if I want to. Hopefully this was not a complete waste of your time and it prompted some thoughts, too. There were no answers here, attempt no landings, this was not a college essay and if it were, I would be ashamed.
—
Best,
Dan