It’s Tuesday, November 22, 2022 and a rainy but not freezing cold morning in Portland, Oregon.
Two more days until Turkey Day or Friendsgiving [sic].
This episode is mainly dedicated to a reminder to catch up with friends, after I had a fantastic chat with Blaine Cook yesterday. And it’s pretty packed. Nearly 3,000 words! How’s that for a U.S.-holiday-sized read.
Listening to: I Want You Back, Jackson 5, Dolby Atmos remix.
This episode isn’t entirely about Mastodon, it’s more that the birth of a new social network is an exciting time because it’s when the abstract/theoretical collision of people and computers becomes practical. In this way, Musk’s acquisition of Twitter is a bit like accelerating two dense particles at each other and watching the resulting particle tracks, seeing whether they result in new, long-lived particles, and also the little, short-lived tracks that nevertheless allow us to examine some novel characteristics.
Too-long, didn’t read version of the analogy:
While Elon Musk is speedrunning the content moderation learning curve[2] (Masnick’s use of the term speedrun[3] feels incredibly zeitgeisty in describing the contemporary habit of people uninformed in one area making broad proclamations and rapidly finding out), so too is everyone else, or at least, everyone else involved in running a social network. Right now that means Mastodon instance administrators and their moderators, who are finding out that:
But it’s not just the moderators and administrators who’re speedrunning “what is it like to be a social network with content moderation” but also the participants and members of these new loosely-joined communities.
What does it mean to get decisions wrong? And what is the cost of getting moderation decisions wrong – like, what happens when a person of color has their account suspended over a report of racism when that clearly wasn’t the case?
There is an argument that Mastodon, or any other effectively volunteer-run social network, will collapse because the energy cost of content and community moderation (those are separate things) will rapidly outstrip the energy available to perform those functions. There are a few things to tease apart:
There are two that I want to focus on – abuse, and illegal/criminal content – and what I think is an important point about federation.
Let’s make this simpler.
There are small instances, which I’ll say are in the < 300 range, or more commonly around the Dunbar number. Most people know each other, or personal vouch-for relationships are only two hops away. (I trust you, you vouch for this person).
I don’t think that these instances have much to worry about in terms of scale. I wouldn’t expect them to be too busy.
And then you’ve got a middle ground, say, the 3,000-5,000 mark, and let’s say only 10% of your community members are active posters. (A reminder that the 1:9:90 rule generally applies to community participation, and it would be interesting to see whether that still holds now).
Then you’ve got what I think might be More Likely To Be Too Big, which off the top of my head is definitely in the ~10,000+ member range, which would mean at least a thousand active posters. That… is a lot! Think about the moderation team size that Metafilter has, the active posters it has, and the ideal round-the-clock moderation coverage. Hell, in the commercial world, you’d be talking about an SLA for response times for reported content.
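To make the 1:9:90 arithmetic above concrete, here’s a quick sketch. The percentages and the helper function are my own illustration of the rule of thumb (roughly 1% create, 9% engage, 90% lurk), not anything measured from Mastodon:

```python
# A rough sketch of the 1:9:90 participation rule of thumb:
# ~1% of members create content, ~9% engage/reply, ~90% lurk.
# "Active posters" here means creators + contributors, i.e. ~10%.
def participation_breakdown(members: int) -> dict:
    """Split a community's membership into creators, contributors, and lurkers."""
    return {
        "creators": round(members * 0.01),
        "contributors": round(members * 0.09),
        "lurkers": round(members * 0.90),
    }

for size in (300, 5_000, 10_000, 246_000):
    b = participation_breakdown(size)
    active = b["creators"] + b["contributors"]
    print(f"{size:>7} members -> ~{active} active posters "
          f"({b['creators']} creators, {b['contributors']} contributors)")
```

At 10,000 members this yields roughly a thousand active posters, which is where the “that is a lot” moderation-load worry kicks in; at mastodon.social’s reported scale, the same back-of-envelope math implies tens of thousands.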
Mastodon.social has two hundred and forty-six thousand active users, as reported on its front page. I have not looked up the size of its (volunteer, I assume) moderation team.
An aside: if we’re now in the phase of letting thousands and more online communities bloom across protocols and applications, I would expect there’s room for some sort of community-minded certification or training in how to be a moderator for your community. Hell, the next step after online safety might even be, I don’t know, Scouts/Girl Guides/your local don’t-know-the-organization-yet providing training in the same way as, say, first-aid training or de-escalation/conflict management training. Guess what: THESE SKILLS ARE USEFUL during our entire lives, and across domains, anyway!
I guess what I’m saying is mastodon.social may well collapse under its own weight and splinter into smaller communities and maybe that’s okay or even maybe we have to accept that because as a species/global community we simply aren’t ready for communities that size. We may not yet have the shared values, the shared understanding, for this to work at scale. We may not even be ready to behave in as nice a way to each other, or to give each other the benefit of the doubt or, to be clear, have earned the benefit of the doubt, given our history.
Masnick’s point about a few of these moderation disasters has been “this is how it is supposed to work,” and I want to say that that particular sentiment isn’t incompatible (i.e. is compatible) with these statements:
So, was this bad decision remedied as quickly as possible? Maybe? Should decisions like this happen less? Yes. Should we expect this to trend in the right direction over time? Yes.
So, then. How long is reasonable to wait?
If the people running these instances are sincere and understand the need for moderation and management, then you’d expect they should improve because now there is a clearer cost: they will be defederated. There is a clear, expensive cost to getting decisions “wrong” and not remedying them.
And then, you’ll have spotted that getting decisions “wrong” is all about shared values and tolerance (and the completely valid decision to not tolerate certain behavior). For the avoidance of doubt, I am all on the side of not tolerating bullshit and giving no quarter. It does not have to be your job to “fix” someone’s behavior.
So. Mastodon.social may be too big. We may reasonably expect it to collapse and splinter into smaller instances. Maybe that is okay. Maybe it doesn’t need to be that big, and that means maybe we are not yet ready for the global public city, because if we don’t have the tools to do this well enough at scale in a way that isn’t funded by extractive advertising, then… maybe we shouldn’t try to achieve it, and we’ll fail until we’ve figured out different, better ways.
I have a few worries about Mastodon and because it’s technology and computers, it all comes down to people (who or what else is it going to come down to, in the end? We are the ones with intention and the capacity to act). The worries:
Here are my signals and what I think they imply. They are in my head, which means they are not necessarily true. I have not gone and tried to invalidate them yet:
This could go badly. Mastodon is associated right now with a single person and a single instance (Mastodon.social). I think the distinction between the person, the application and the instance is hazy, and that haziness is being used (whether intentionally or not).
Branding, and the way the branding is used, is a choice. For example, instances could be described as:
I’m not optimistic about Eugen. I would like to be wrong!
First, it’s happening! Here are some institutional instances:
I’ve advocated for news organizations (and others) to set up institutional instances as a way of providing trust and discoverability, in s13e17: A Proposal for News Organization Mastodon Servers and More and s13e18: Mastodon, or What Happens When Your Software Has Opinions And Now You Have Choices.
I wrote about this from the point of view of a member of a social network community: how I want to be able to find and trust journalists and news sources in this federated ActivityPub network.
But that doesn’t say anything about the relative power dynamics, needs and incentives of news institutions (and their owners) in relation to journalists and their staff.
Put this way: absent legislation, unions or other effective mechanisms to negotiate power differentials for the benefit of parties and effective enforcement, staff and journalists are subject to their employers.
So it’s in the interests of staff and journalists to have portability of their following/audience/graph and to be independent of their employer. Independence for a news staffer is not provided by that staffer having an account/identity solely at an institutional instance. There are competing needs here, and this is a terrific reason to stand up professional-organization or union instances that represent the collective interests of their members.

So the question here is: how do we negotiate these different needs, which can be both complementary and in opposition? Multiple accounts? Aliases? I think the answer will always be a combination of technology/software and negotiated agreements. No software will fully fix the what-if-Jeff-Bezos-goes-evil or what-if-Peter-Thiel-buys-your-organization problem, but it can help. I do not think the trust/verification issue goes away. I think the inherent trust in domains helps with that. I fully agree and accept that, on its own, it does not sufficiently negotiate power dynamics in favor of staff/employees/freelancers.
An interlude for a short neologism I coined in the conversation with Blaine I mentioned at the top today: there are places where you just can’t get cell service. Some people live there. Which means you can’t receive SMS 2-factor auth. Which means these days, there are services that you might rely on that just don’t work.
I would call these places verification deserts, after food deserts. They are areas where you do not have access to the infrastructure required to use secure services, in this case, and commonly, SMS 2-factor auth.
I’ll leave out of it today what we as societies might do about this problem, but I’ll mention/recognize at least that “well, you chose to live there” is certainly a point of view.
There’s a way I’m thinking about the birth of a social network that’s similar to our theories about the development of our universe. It’s the existence of an inflationary epoch.
In our universe, the inflationary epoch was the time when space got big super quick. It is the reason why our cosmic microwave background radiation is so uniform: things that were close together all got spread apart in the same way. One of the analogies here is that the balloon of our spacetime got blown up super big during a period, and then it continued to expand, but at a slower rate.
Imagine the birth of a network. It’s small, at the beginning. Small and busy. There’s a lot of people packed into a small space and the distance between those people is small. I won’t go so far as to say that there are fewer hops, just that the perceived distance and number of hops between people in the network is small.
In this inflationary epoch, it is easier to make connections because the perceived distance is smaller.
(It may not be that the distance is smaller, of course, it may be that at this inflationary period, there’s a high density of people who are significantly more open to connection creation than later on in the development of the network).
Later on, as the, uh, growth of the network settles down, space – the size of the network, the graph-size, I suppose – keeps expanding as new nodes are added, but the analogy is that the rate of expansion is different.
For a social network, I think this means that during the inflationary period, if reach is important to you, it’s going to be easier to develop greater reach. Or more crassly: it’s easier to get more followers during this early period of a network’s development.
There are some other ways it might be easier to get followers (and therefore, some people may think, influence, as a result of large reach), like being a default suggested-follow account, and the placement of an account on that suggested list. Mastodon the application does not have suggested followers or topics as part of its onboarding. I like this. It levels the playing field.
This has some implications, one of which I’ll restate:
If reach is important, you need to get in early, because that will offer you the best result in terms of effort/reward (god, this reads crassly and like a judgment on people and I am working very hard for it not to be a judgment. There are lots of entirely valid reasons for desiring/needing reach that are not egotistical)
If this easy-connection inflation epoch is true, then there’s the element of luck in that you were lucky to be there, again, if reach was important.
“Luck” can easily bring up the concept of unfairness. Unfairness, and also equity and inequity, play into this. For you to take advantage of an easy-connection inflationary epoch, you need to:
Ugh, this still sounds horrible in my head.
We can counter some of these advantages. First is again, if this epoch is true, then tell people about it. You can’t participate if you don’t know. You can’t make a choice if you don’t know.
For time, energy and skills, there are also ways to mitigate and better ensure equity. Make the skills clear. Provide training. Create tools that reduce the time and energy required, and that make it easier to achieve the results those skills produce.
Then there are, I don’t know, epoch-invariant methods of dealing with this period of potential easy connection-making: being able to port your network from one place to another. Discovery, making it easy to find and offer to connect with relevant/interested people. And also, intentionally designing methods that don’t reinforce runaway effects where big reach keeps getting bigger. There are ways to do this! And critically, networks that don’t have commercial drivers and don’t have to always grow can instead focus on tools that might increase the richness or value of connection. And I acknowledge that this conversation here was all about reach in the concrete, and nothing about quality. Reach is one of those numbers in the back of people’s heads that is, sadly, in many cases, a substitute for value and personal worth.
Okay, that was a lot. That’s also it for this week – it’s a slow/holiday week here in the U.S. and I’m still getting over COVID. Things should keep returning to normal next week on Monday, 28 November.
How are you doing?
Oh hey, a quick reminder that you can support this newsletter. I appreciate it!
1. Bubble Chamber Pictures for the Classroom, CERN S’Cool LAB ↩
2. Hey Elon: Let Me Help You Speed Run The Content Moderation Learning Curve, Mike Masnick, Techdirt, November 2, 2022 ↩
3. Speedrunning, Wikipedia ↩