s16e07: It Sounds Like You're Jealous; Scale Considered Harmful
0.0 Context Setting
It's Tuesday, the third of October, 2023 in Portland, Oregon.
I'm excited about the people who've booked a virtual coffee with me1, so if you've been on the fence, go grab one.
Just two relatively long things today.
1.0 Some Things That Caught My Attention
1.1 It Sounds Like You're Jealous
Matt Jukes commented3 on the episode about hands work versus brains work2 and added the concept of heart work.
Though I do wonder whether actually I’m often more valuable for hearts work. My presence is seen as reassuring, I improve things like learning and development, wellbeing and confidence in teams. I help land messages and visions (if I believe in them). Hearts work? Hands, brains and hearts.3
On the one hand, I like the idea of three things! Three things in a list are always good (your other choices are five and ten; seven is not good, we're clearly not doing primes here).
There is something in effective communication that isn't brains or hands but some third thing: that basket of understanding people and producing clarity. This type of work is about connection, and the work Jukes is describing sounds a little like coaching, too. I don't know whether part of my reaction is just a sort of sappy response to the term hearts, either.
In my experience, I think this category is relevant to my work (and its success) as a skill and capability: to be good and effective at the brains work I do, I need to understand people and gain their trust, for example so that I can better understand the shape, constraints and opportunities of a problem. The more I think about it, I do see it as a skill -- for me at least -- rather than a practice.
Here's an example.
Every so often when I'm working with a client who's stuck -- which normally presents as not knowing what to focus on, or not achieving a goal because of a lack of focus -- we end up talking about the rest of the organization because, well, how could you not?
This is because there are always going to be limited resources to go around. Nobody ever has enough, and nobody gets to act like an island. There's always a relational part to work; there has to be, so long as you're working with other humans. And to be clear: you will always be working with other humans, leaving aside any prospect of working with non-humans in the next few decades.
Anyway, some other group will invariably have more people, more time, or more executive cover than you. Or they'll appear to have more people, more time, or more executive cover than you. They might have clearer goals. These types of observations normally come out after a few sessions.
It might feel weird, because our overall scope is problem solving for your domain, and why are we suddenly talking about these relations with other teams or departments?
Part of the reason is because, after we've made decisions and choices, we're going to have to do a bunch of communication and, being human, we can't help but be coloured by emotional state.
(It's not going to make sense to say that you won't be, because you will be.)
You might be feeling jealous. Just look at this list of things that might prompt feelings of jealousy:
- an important relationship is being threatened or might be lost
- there's a competitor paying attention to something important to you
- someone is threatening to take away something important from you
- you're being treated as unimportant
These are all normal reactions. It's just not possible to say that they happen in our personal lives but not in the context of work. The reason why understanding feelings like jealousy in a work context matters is that if jealousy is happening, it's going to colour judgment, the available options, and attempts at communication.
For example: you and your team might be stuck because another group, whether internal or external, is threatening you. You and your group may not be the new hotness anymore; your leadership may be volatile and capricious in their attention, as if they're a lighthouse or, I suppose, THE GAZE OF SAURON. You might be jealous because some other group has more freedom than you and yours -- they're "allowed" to do things that you're not, things that would be helpful to you.
And if there's jealousy there, then you might react like this:
- attempting to control the other group's behaviour
- demanding more accountability
- increasing your signaling to leadership
This is not great, because at this point your perceived options start narrowing down. Trust erodes, and you start paying more attention to your relationships with other teams than to the work itself.
Another way of looking at this is whether the way you're behaving is defensive in any way.
Some of the things we work on are checking whether the things we believe are actually true. It's easy to get lost in your head and assume things. So much gets lost in communication, especially when that communication is waste-of-time meetings that could have been emails, and emails that should have been meetings. Many times, the discounting of you and your group is imaginary because, to be honest, what you're doing is not that important to them; they've got their own shit going on. And unless you've made explicit and clear what your interface with that group is, those misunderstandings have the chance of multiplying over time.
Doing hearts work, I suppose, to do good brains work.
1.2 Scale Considered Harmful
Here's a question I've been thinking about: what's the difference between something being hard and something easy?
Or the difference between something taking a relatively long amount of time and something taking a very short amount of time?
Or between it taking so long to do one thing that doing lots of things becomes practically impossible or unrealistic?
The issue here is automation, prompted a little by the quote from Emily Bender in the previous episode about replacing the term "AI" with "automation"4. Bender categorizes types of automation in a very helpful way:
- Automating consequential decisions, like setting bail, checking eligibility, and so on
- Automating classification, like being able to focus on faces, or classifying whether something is a car
- Automation of choice, like the recommendation systems in Netflix, and used in social feeds
- Automating access to human labour, like Amazon's Mechanical Turk, or other APIs for humans, like Uber and other gig economy companies
- Automation of translation of information, like reading of license plates, machine translation, and most recently, style translation and text translation from one form or genre to another; and
- Lastly, what Bender calls "synthetic media machines", that automate the generation of media based on specific content, style, and genre without commitment to meaning, so surface-level, like ChatGPT
The automation that I'm talking about is the one-level-up layer, and that's the automation of scale, which I suppose is a general interest of mine because it's also -- or pretty much -- technology as a tool and power amplifier.
I think -- citation very heavily needed! -- that the western societies I'm familiar with, England and the United States, are particularly good at understanding scale. One example I can think of is from law, which has a concept for doing something super annoying too many times: the vexatious litigant5, essentially someone using the law to harass someone or cause a nuisance.
For example: there's a difference between bothering someone once, a few times, more than fifty times, and so on. And it depends on the timescale! So context is important, and one thing that law is very good at is investigating particular contexts and figuring out how legislation applies in those contexts. I think it's generally seen to produce bad law when legislation is made by degree. This is why there are commonly lots of Tests to see whether some behavior or thing has the right attributes.
So, scale.
Wait, I just thought of a good example! Robocalling. Robocalling is super annoying! It's when you get lots of spam calls. It's not necessarily that you get lots of spam calls from one source about the same thing or different things; it's more that there are entities considered annoying by society because they have automated doing something annoying. In the U.S., the F.C.C. nominally has responsibility for being the most annoyed about robocalling, and gets to fine the organizations that, well, I was going to say "do it a lot", but really, it's "get caught"6. And robocalling is, clearly, based on the name, enabled not only by automation (thank you, voice over IP), but by that automation being applied at scale.
So we've had issues come up like this before. It used to be hard to find a person. Now it's much easier to find a person, thanks to -- like Bender points out above -- technologies like automated classification, to the extent that a single Taylor Swift fan is super popular on TikTok for doxing people7. Looking up names by hand in a phonebook to harass them over the phone? Ugh, you'd have to be pretty invested to take the time to do that. Search for a phone number? Seconds.
One thing I've noticed while thinking aloud about this is that you might be able to classify technologies in terms of latencies, rate limits, and access.
As an aside, I'm entirely certain that all of this thinking aloud, too, is elementary, like 101-level material in subjects like the history and philosophy of science, or science, technology and society. And I know a bunch of you have spent most of your adult lives studying those subjects, so again, I am That Guy Having Barely Researched Thoughts. You've either been warned or apologized to.
Printed phonebooks have latencies on the order of, I don't know, say thirty seconds per lookup? And the rate limit is, what, "until you get tired or bored of looking up names in a phonebook". Calling people was rate-limited until it could be parallelized and automated, and then made incredibly accessible with open source VOIP/PBX software like (oh my god) Asterisk (oh my god I'm so old).
The technology of "finding someone" was, I don't know, "being Interpol" or "a state" and accessible to only a few, and now, well, there's the Taylor Swift example above.
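The latency/rate-limit/access framing above can be sketched as a toy model. To be clear, every number and name in this sketch is invented for illustration -- the point is the orders of magnitude separating a person with a phonebook from an autodialer, not the specific values.

```python
from dataclasses import dataclass


@dataclass
class Technology:
    """A toy model of a technology as a latency, a rate limit, and access.

    All values here are invented for illustration; only the rough
    orders of magnitude matter.
    """
    name: str
    seconds_per_action: float  # latency: how long one lookup/call takes
    parallelism: int           # rate limit: how many can run at once
    accessible_to: str         # access: who can realistically use this

    def actions_per_day(self) -> int:
        """Upper bound on actions in one 24-hour day."""
        return int(86_400 / self.seconds_per_action * self.parallelism)


phonebook = Technology("printed phonebook lookup", 30.0, 1,
                       "anyone with a phonebook")
search = Technology("web search for a number", 5.0, 1,
                    "anyone with a browser")
robodialer = Technology("VoIP autodialer", 1.0, 1000,
                        "anyone who can run Asterisk")

for t in (phonebook, search, robodialer):
    print(f"{t.name}: up to {t.actions_per_day():,} per day ({t.accessible_to})")
```

Running it, the phonebook tops out in the low thousands of lookups per day, while the autodialer reaches tens of millions of calls: the annoying thing hasn't changed, but the scale has, by four orders of magnitude.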
And, you know, the introduction of cotton spinning technology, now that we're starting to get renewed public interest in actual luddism8.
Where was I?
Oh right, scale.
Like I wrote the other day, I think the step change in the capability of synthetic media machines and the other machine learning automation tasks Bender categorizes above, the one attributed to reinforcement learning with human feedback, wasn't down to the effectiveness of the algorithm itself. It was due to increased automated access to human labour and to cheap, widely available money, because that access and that money enabled scale.
The whole point here was thinking about whether scale is considered harmful, which is perhaps a bit of a trite observation, like "don't do too much of a thing" or "all things in moderation", which sure, might be true, but it's not a saying that's helpful.
I wrote above about being able to draw lines, or to be able to express what scale means in certain contexts. How many of a thing. How often. How quickly. How big. Here's another attempt: the E.U.'s Digital Services Act draws a line by defining Very Large Online Platforms as those with 45 million monthly active users9, roughly ten percent of the E.U.'s population. Should high frequency trading be regulated?10 Why? How? Should you be allowed to airbnb lots of things? How many housing units allocated to automated temporary residency like airbnbs is too many, in a particular context like a city?
All these things are easier now, but it's not just that they're easier, they're easier to do a lot of.
This is, I suppose, a reminder for me to read Seeing Like a State again. How would a society manage changes in scale? You'd need to measure it, make it visible, in the same way you need to make anything else visible and measurable to manage it. So as activity or outcome increases, thanks to scale, a state would need more legibility and visibility along different domains: how quickly, how often, how many?
But this visibility can also be surveillance and a method of control. I mean, combined with state power, it's totally a method of control.
As an aside, now I'm thinking of measuring the annual number of transactions per capita, for which, for starters, you'd need to define the sort of transactions you're interested in and whether that's even a useful indicator of... something. Other than a rate of change.
I mean, one dumb quote that just popped into my head is:
We've recognized one face, yes. But what about thirty billion faces?11
Look, it's not that scale is always harmful. That's just being provocative and headline bait-y. But it's more like adding or qualifying the questions about automation Bender suggests we ask: not just who's being harmed, but how many people are being harmed? What proportion? What's the capacity? What's the frequency? Once is different from every single time.
So scale, then: not always harmful, but definitely an indicator of potential harm, and an indicator to pay closer attention.
Eeesh. Seven weekday episodes in a row now, and this one around 2,500 words, too.
Thank you to everyone who's sent a note or reply -- I really appreciate them, even when they're just "hi!"
How are you doing?
Best,
Dan
How you can support Things That Caught My Attention
Things That Caught My Attention is a free newsletter, and if you like it and find it useful, please consider becoming a paid supporter, at pay-what-you-want.
Do you have an expense account or a training/research materials budget? Let your boss pay: $25/month or $270/year, $35/month or $380/year, or $50/month or $500/year.
Paid supporters get a free copy of Things That Caught My Attention, Volume 1, collecting the best essays from the first 50 episodes, and free subscribers get a 20% discount.
1. Look, I just really don't like the term. I think it's icky, transitively inheriting the ickiness of people I see who do them a lot.
2. s16e04: Hands, Brains, Eyes, me, 28 September, 2023
3. Week Two – Don't call it a comeback (but if I'm honest…) – Digital by Default, Matt Jukes, 29 September, 2023, Digital by Default
4. Opening remarks on "AI in the Workplace: New Crisis or Longstanding Challenge", Emily M. Bender, 2 October, 2023, Medium
5. Vexatious litigation, Wikipedia
6. Stop Unwanted Robocalls and Texts, Federal Communications Commission
7. The End of Privacy is a Taylor Swift Fan TikTok Account Armed with Facial Recognition Tech, Joseph Cox, 25 September, 2023, 404 Media
8. Brian Merchant, 18 September, 2023, The Washington Post
9. DSA: Very large online platforms and search engines, European Commission
10. Should High Frequency Trading be Regulated?, Harvard Undergraduate Economics Review (I guess; this was super hard (i.e. took more than 3 minutes) to look up a useful source)
11. Second Breakfast, Know Your Meme