It’s Wednesday, March 23, 2022 in Portland, Oregon, and when I started this it wasn’t raining; it was sunny with wispy clouds in the sky.
I had a few things in my caught-my-attention stack today, but perhaps just one I’m going to cover in this episode. I also had a super annoying brain-foggy start, but apparently it went away at some point.
Things That Caught My Attention, Volume 1, collecting the best essays from the first 50 episodes of this newsletter, is out now! Subscribers can get a copy for 20% off; paid subscribers/supporters get a free copy.
If you’re a subscriber and you’re enjoying or “getting value” out of this newsletter, please consider becoming a paid subscriber/supporter.
Have you read it? Was it good? Tell your friends!
Okay, now on with the show:
In yesterday’s Fear and Loathing in CI/CD. On psychological safety and progressive delivery, James Governor of RedMonk had some nice things to say about s11e11: But What Is Certain Technology Infrastructure Anyway?, which prompted me to think a little bit more.
Governor goes into more detail about how fear and psychological safety are important in modern software delivery:
We talk about psychological safety in modern software delivery. Stuff is going to break. In production. Shit’s complex, yo. So we need to learn from failure in order to become more resilient, we need to have blameless post-mortems, so we’re able to experiment, and roll out new code, and make system changes, without being afraid of getting fired if something goes wrong. The truth is that folks in IT have broken a production system, quite often in an embarrassing way. Psychological safety and blameless culture aren’t just for “the IT org” though. “The business” will benefit from understanding that culture too [blameless culture originated in sectors notably healthcare and aviation]. Many small changes, less big risk, but even if things go south, we fix them and move on, having learned from the failure. It’s generally easier to roll back, or make a fix, for a small change.1
I would say that I’m dealing with this with clients right now, but it would be more accurate to say that I’ve been dealing with it, well, forever. James’ comments clearly stuck with me, because sometime today I wrote a little bit on Twitter that I’m going to revise and share here. It’s a bit of a build/remix on James’ points from a different point of view and approach.
When I talk about the idea of psychological safety at work, you (the imagined “you”) don’t have to think of it as “touchy-feely shit”. Instead, think of it as something like this:
When I talk about psychological safety, what I mean is this: we all make mistakes, and what matters is what happens when a mistake happens.
Does it ever feel like if you did make a mistake and you were found out, you’d get pulled into a room and yelled at? That you’d have a completely new one reamed? That all of that might happen publicly? That you might be humiliated?
Because if you’ve ever felt that way, that means you’ve been scared at work. And I don’t know about you, but when I’m scared, the chance of me doing my best work goes down.
I’m not saying it’s impossible for me to do my best work when I’m scared. I might be exhilarated. I might be on edge. All those things might help. Being scared isn’t going to reliably help me perform at my best, though. I doubt it will for you, either.
It might feel like a lack of psychological safety in white collar work isn’t a thing: it’s not that you don’t feel safe, or that you’re scared or frightened, it’s just that you’re stressed.
But what if we tried looking at this the other way? Psychological safety – knowing that you’re going to be okay – in a different context might look like “Hey, I’m hammering this thing in, if I miss and fuck up and put a nail through my hand… in the end, can I go see someone and get it fixed?”
At the very least, when I’m doing some work around the house, I know that I should be able to go get my hand fixed.
(Yes, this is America, so actually this is a bad example because many people are terrified not to do things precisely because they wouldn’t be able to afford the healthcare).
What I mean is this: psychological safety at work doesn’t mean whether or not you or anyone else is crying at work. Not exclusively, at any rate. Psychological safety at work also means being confident and knowing that people will have your back so you can do your best work.
In that way, you bring your emotions to work already. You just don’t want to see them. Or you’re already trapped by them. Or you’re the fish that doesn’t know about water because you’re constantly, just, in it. And that shouldn’t be a surprise, because we’re emotional beings. Emotions are real things, real states that affect us.
So, if you’re a stereotypical white male dude, or hell, I don’t know, anyone, I bet there’s a fair chance you’ve felt afraid at every single job, ever. I know I have.
Wouldn’t it be great if that happened less?
Wouldn’t it be great if you had the power to help other people feel it less, too?
Like I said: we bring our feelings to work already. There’s practically no way you couldn’t bring your feelings to work. So know why, and know what they do. Then you can make a choice and decide what to do about them, both for yourself, and for others.
Faine Greenwood, who has now escaped shitty Twitter jail, was contemplating how remarkable it is that r/CombatFootage actually isn’t a complete horrible cesspool and that perhaps it’s because of some solid moderation2, to which I have these perennial observations:
First: I’ve always said, for, like, ever, that Reddit-the-place is totally capable of having, and actually does have, well-moderated subreddits and communities, and it should be a complete NOT A SURPRISE AT ALL that the quality and experience of any particular community on Reddit comes down to the way that community’s moderation staff run it. I mean, who would’ve guessed! A place where people tend and garden their place of gathering, where people show up, where there are clear rules, consequences, examples, teaching and so on? Who would have figured?!
But Reddit is even more interesting, I think, because you look at some of those good subreddits and they’ve employed tools like AutoModerator and bots that encode policy into software that hooks into Reddit’s APIs. A sort of human augmentation, or a sort of autocorrect that is used to support healthy community behavior. And this kind of makes my point, forever, about software reflecting intention, because duh, it’s a human artifact. You could easily write shitty and mean automod software that calls specific people slurs, and that would be horrible. And yet these particular communities decide to use software and make a deliberate decision for that software to enforce, well, their community standards. That enforcement can still be done in a shitty way! But again, my point: it’s a deliberate choice, it’s an iterative process, and it requires knowing things like “this is the kind of community we want” and “these are tools we can use to achieve it” as well as “and if they don’t work, here’s how we’ll know” as well as “and if they do work, here’s how we’ll know”.
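To make “encoding policy into software” concrete, here’s a toy sketch of an automod-style rule engine. The rule names, patterns, and actions are made up for illustration; Reddit’s real AutoModerator is configured with YAML rules evaluated by Reddit’s own service, not Python like this.

```python
import re

# Each rule encodes a piece of community policy as data, so moderators
# can read, review, and change the policy without touching the engine.
# (Rules and actions here are invented placeholders, not real config.)
RULES = [
    {"name": "no-slurs", "pattern": r"\b(badword1|badword2)\b", "action": "remove"},
    {"name": "no-gore-links", "pattern": r"https?://\S*gore\S*", "action": "report"},
]

def moderate(comment_text):
    """Return the list of (rule, action) pairs that apply to a comment."""
    actions = []
    for rule in RULES:
        if re.search(rule["pattern"], comment_text, re.IGNORECASE):
            actions.append((rule["name"], rule["action"]))
    return actions

moderate("check out https://example.com/gore-clip")
# → [('no-gore-links', 'report')]
```

The point isn’t the code, it’s that the community’s standards live in a legible list that the mod team iterates on deliberately.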
Now, the kicker.
These communities exist. Some are big, some are small, but they aren’t uniform, because Reddit-the-platform has a relatively laissez-faire attitude to how you run your subreddit, which is to say they are an American company with a certain set of American SV-style values, which is a bit like States’ Rights but for Subreddits. Yes, there are some base-level ground rules of being a member of the Reddit body, you know, nothing egregious, some sort of legal class inheritance like “thou shalt not post CSAM3”.
Reddit-the-platform could take a look at all those subreddits with healthy communities, take a look at what software they use and how those bots work with their APIs, and maybe even sponsor those bots or god knows figure out ways to support them better or, no, really, I shit you not, make official, sanctioned Reddit versions of those tools and make them available to every single subreddit as a plan of “hey, here’s how we’re going to use software to help you human moderators keep communities healthy”, and they could still give subreddits the choice as to whether to use those tools or not.
But, you know, there’s no sign they’ve done that.
Now there might be lots of reasons why they haven’t done it. Lots of internal reasons, some technical, some legal, I’m sure, because legal always has opinions about things, and that’s how you know good legal from bad legal: good legal actually lets you do things by helping you understand the risk/benefit involved, and sometimes the risk/benefit is super obvious, like “don’t do that you idiot, it’s illegal and you’ll go to jail”.
Anyway. Reddit hasn’t done that. They haven’t taken a look at how their successful (sorry, healthy) communities use tools to stay healthy, and they haven’t gone all in on that software approach.
Why not? I have some other guesses apart from “internal shit”. And the main guess is this: it would cost money.
The whole movement of online community, post-, say, Slashdot, was this realization that instead of paying community managers and moderators, you could rely on the wisdom of the crowds: simply let them upvote things and downvote things, have a few volunteers, and ta-da, any sufficiently advanced up/downvote mechanism ensures you have a well-functioning online community with minimal cost, because now you don’t have to manage the community, you only have to “moderate” it, in terms of, say, actionable content, because we all know that managing it was never going to scale.
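For a sense of how cheap that mechanism is, here’s roughly what the “hot” ranking looked like in Reddit’s old open-sourced codebase (my paraphrase from memory of that code, so treat the constants as approximate): a post’s score only matters on a log scale, and age steadily washes it out.

```python
from math import log10

EPOCH = 1134028003  # reference timestamp used in the original code

def hot(score, timestamp):
    """Rank a post by net votes (log-scaled) plus its age in ~12.5h units."""
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = timestamp - EPOCH
    return round(sign * order + seconds / 45000, 7)

# A 10x difference in votes is worth the same as being 45000
# seconds (12.5 hours) newer:
hot(10, EPOCH + 45000)  # 2.0
hot(100, EPOCH)         # 2.0
```

A dozen lines of arithmetic, no salaries: that’s the whole economic appeal.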
So yay, you saved money.
But what do we get instead? We get shit like using ML to infer (sorry, GUESS) emotional intent that’s, I don’t know, not particularly accurate and has LOTS OF PROBLEMS, like being COMPLETELY WRONG AND RACIST a lot of the time thanks to a blind spot the size of Iapetus in its training data, instead of “hey, there are some relatively simple tools that work relatively well in defined domains, we don’t need to sprinkle magic ML dust here” and “how about we ask people what already works”.
But oh no. We don’t do that. For some reason.
Gosh, that was a lot today. Look, if you don’t count 20 minutes futzing with iCloud sync because apparently your quad-core laptop and iPad Pro can’t sync 750 bytes worth of text between them, then that only took around… 25 minutes. And I didn’t even get around to it until after kid bedtime.
I am doing really, really bad at keeping this short and to 15 minutes. Sorry.
How are you doing?
Fear and Loathing in CI/CD. On psychological safety and progressive delivery. James Governor, RedMonk, 22 March 2022 ↩
contemplating how remarkable it is that r/CombatFootage actually isn’t a complete horrible cesspool[.] that’s some solid moderation?, Faine Greenwood, 23 March 2022 ↩
That’s child sexual abuse material; you can go look up the Wikipedia entry yourself. I can’t believe I’m self-censoring because of the possibility that this would get moderated. How ironic. ↩