s12e54: Manifest What I Mean, Not What I Say
0.0 Context Setting
It is Tuesday, 20 September 2022, and I am at an altitude of 37,996 feet, or a more sensible 11,581 meters, and my ground speed is 693 km/h, or a more sensible 430 mph.
I know this because I am looking at the seatback monitor running a Flightpath 3D Linux application, which is apparently "enjoyed by 375 million passengers". Citation needed for the enjoyed part, of course.
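(If you want to double-check the seatback's arithmetic yourself, here's a quick back-of-the-napkin sketch in Python. The only ingredients are the standard conversion factors, 0.3048 metres to the foot and 1.609344 km to the statute mile; nothing here is specific to Flightpath 3D or Delta, the numbers are just the ones off the monitor:)

```python
# Sanity-checking the seatback monitor's unit conversions.
METERS_PER_FOOT = 0.3048    # exact definition of the international foot
KM_PER_MILE = 1.609344      # exact definition of the statute mile

altitude_ft = 37_996
ground_speed_kmh = 693

altitude_m = altitude_ft * METERS_PER_FOOT          # ~11,581 m
ground_speed_mph = ground_speed_kmh / KM_PER_MILE   # ~430.6 mph

print(f"{altitude_ft} ft ≈ {altitude_m:,.0f} m")
print(f"{ground_speed_kmh} km/h ≈ {ground_speed_mph:.1f} mph")
```

Which comes out at 11,581 m and about 430 mph, so the monitor checks out.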
Anyway, now I am distracted because I am looking at Flightpath 3D's website and their product overview sheet is gated by an email address requirement. I would normally make fun of this ("I want a map for my aircraft"), but then there are options like "You have great content to integrate with our map" and I'm like, okay, that's reasonable. I suppose I would have to check the last box, "Some other reason", and write in "I love learning new things and this is one of the ways I'm obsessive about learning about how and where people use software".
Also because I have notes on Delta's implementation of Flightpath 3D's application and its latency and, you know, the whole seatback entertainment system software/hardware ecosystem. Because I'm me.
Here is a photograph of the first Halloween pumpkin I carved, which is very On Brand:
1.0 Some Things That Caught My Attention
Sorry, I had to do it
Look, the dynamic island [1] is interesting because I was about to make the (admittedly privileged) decision to upgrade from an iPhone 12 Mini to an iPhone 14... until I saw this new user interface that's one of those typically Apple melds of hardware-with-software-to-make-something-new.
So now I'm in the position of deciding to spend a bunch of money on a different phone than what I was planning to get purely because it has an exclusive UI feature. This is, on one level, ridiculous, and on another level, exciting because oooh look, new thing to play with.
I don't care about the cameras, I don't care that it's got the next processor, I care only a little that a bigger phone than my Mini probably means battery life will be better. What I care more about is understanding what's happening with this new notification/interaction pattern, and it's super weird that it's because of how Apple's decided to deal with the hardware of two punch-outs in an OLED display.
Getting to the point
I have been in meetings where someone will try to be polite, or try to be helpful, and the behavior comes across as if they're avoiding saying something painful to their audience. Here's an example:
- You should look into This Thing
- Switching to This Thing instead of Current Thing will probably require a bit of work, so you should definitely look into it
- On the other hand, while it will require some work, This Thing should make things easier in the long run
- In fact, it may even solve $specificproblem that you're having
- In the long run, the way that you're dealing with $specificproblem is just going to get harder and harder compared to using This Thing
- In fact, while I'm here, I'm going to tell you that it's going to get so hard to use Current Thing that I'm kind of telling you that you will need to switch to This Thing, but don't worry, not right now
- But I'm kind of asking you to tell me, at some point in the future, how much effort it's going to take to switch from Current Thing to This Thing, and how long it might take
- But I'm not outright telling you that you must switch to This Thing from the Current Thing, merely reminding you that you should probably Look Into It
I don't think this is helpful. I think it's avoiding saying something painful, but if you're to believe all the statements (and you have no reason not to believe all those statements) then it sounds like this:
- We (this outfit, enterprise, organization, yadda yadda) are strongly encouraging all New Things to use This Thing, and the path of using Things is going to be much easier for This Thing
- We're going to practically deprecate and stop supporting Your Things. We say practically because, while it won't be impossible to use your things (you can certainly continue using them), you'll just be stuck in a bureaucratic nightmare that might, for example, severely inhibit your ability to dev the sec and ops, i.e. "iteratively deliver software frequently and regularly"
- By the way, did we mention that our expectation is that you should be iteratively delivering software frequently and regularly
- I am actually telling you that you will be switching to This Thing
I mean, at this point, I would cut out a whole bunch of politeness and instead substitute candor, because with candor will come clarity. There is a lot implied in my imagined conversation above, which means there's a lot left outside of a shared understanding. You could get a lot of wiggle room in there, and again, it's because people don't want to bring difficult concepts into the room.
But what it all boils down to, even more, is this:
- Over the next n timeframe, you will need to switch Things
- I want a plan, at the appropriate level of detail, as to how you're going to switch Things
- We will also need to understand how much work is involved, and so on
We've flipped it, then, from "hey, there's this thing you should look into, it looks pretty cool" into "you are going to need to switch things". Nobody wants (I think?) to have to switch things (modulo those things being better? easier? faster? cheaper?), especially if I assume that you're in the middle of delivering a whole bunch of valuable software yadda yadda and you hadn't really accounted for "hey, we're swapping rugs, you're gonna need a rug migration strategy that's more complicated than 'hey everyone, can you lift up your legs for a bit while I hoover under here?'"
Anyway, my point is that if you assume that everyone is smart, you can probably improve clarity, reduce the risk of misunderstanding, and generally help move things along by saying the thing out loud so you can move on from it, rather than trying to do some sort of praise/shit sandwich.
Time and Space
Here is a thing I noticed: if, say, you are in the position where you need someone to trust you more, and the thing that you think you need that will actually help with that trust is them giving you the space to do what it is you need to do to produce the thing that will help them trust you, then what is it exactly that you're asking for?
E.g. if you're saying "I need a bit of time and space so that I can go make the changes we just talked about" then... what would that look like? Is it fewer reports? Fewer meetings? Is it "leave me the fuck alone and just let me pop back in after period x has passed and I'll just leave the proof on your desk as a present?"
The thing I noticed is that if there are specific things along the way of your period x that would be helpful (e.g. get these specific people off my back; I am waiting for something from someone who is critical, but they're behaving as if they're not critical and just not showing up, and I need them to show up) then:
- I need time and space to do this thing
- What I then need from you (if you are asking what you can do to actually provide that time and space) is... time and space with you (i.e. being able to get meetings and talk about things and ask for things), because you, as some sort of executive leadership entity, are able to get those things done, or at least tell other people to do those things.
So. If you need someone to give you time and space, perhaps what they can do for you is to give you time and space in their calendar or diary so that you can ensure the time and space you need is being created and protected.
Or, you know, you're in a horrible terrible no good working relationship and you're fucked. That's a whole different problem.
Manifest What I Mean, Not What I Say
Here is a quick chain of thoughts:
- Stephen Marche wrote a piece in The Atlantic about AI called "Of God and Machines"
- It was not a good piece, as Prof. Emily Bender pointed out ("This article in the Atlantic by Stephen Marche is so full of #AIhype it almost reads like a self-parody").
- Dr. Damien Williams also pointed out that it was Not a Good Piece, likewise covering magic, technology, and religion (see also Belief, Values, Bias, and Agency: Development of and Entanglement with "Artificial Intelligence", Dr. Williams's doctoral thesis)
I, of course, am easily distracted and go sideways, so one of my reactions was to wonder what the industry-standard benchmarks might be for assessing the performance of weakly godlike AI/ML systems, e.g.:
distGEnIE-3 achieved a new state-of-the-art 9.45 on the MWIMNWIS2022 (Manifest What I Mean, Not What I Say) zero-shot benchmark, compared to WiSH-R, the previous best-performing model.
(the inevitable tweet)
This is funny/silly to me (let me spoil the joke by explaining it to death) because:
- ha, a world where there are weakly godlike AI just out there doing weakly godlike things
- and yet we're still benchmarking them
- as if we can compare weakly godlike AIs
- because... they're only weakly godlike, and could totally work out and improve their godliness
- I mean, the whole benchmark is "Manifest What I Mean, Not What I Say" because what we're totally focussing on right now is compositionality, which is to say whether any currently hot deep learning system is capable of correctly interpreting and delivering on a phrase like "the red fly on top of a yellow ball rests on a green table, behind a large cone" or whatever (there's a little sketch of what such a test case might look like just after this list). This, in my head, is a bit like the time flies/fruit flies parsing problem.
- Like, they're weakly godlike because, duh, they can't read your mind, so a lot of the time (say, 85% of the time) they get the gist of your manifestation request right, but the rest of the time they do something a bit weird. This would, some people argue, be dangerous.
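To make the compositionality bit a little more concrete, here's a purely hypothetical sketch in Python of what one MWIMNWIS2022-style test case might look like: the prompt, the structured scene I actually meant, and a dumb checker that compares whatever got manifested against what I asked for. To be extra clear, none of these names or structures come from any real benchmark; MWIMNWIS2022 is a joke, and the scene representation is just made up for illustration:

```python
# Hypothetical compositionality test case: a prompt, the scene the prompt
# *means*, and a naive score for whatever a model actually produced.
from dataclasses import dataclass

@dataclass(frozen=True)
class SceneObject:
    kind: str    # e.g. "fly", "ball", "table"
    colour: str  # e.g. "red", "yellow", "green"

@dataclass(frozen=True)
class Relation:
    subject: SceneObject
    relation: str          # e.g. "on_top_of", "behind"
    obj: SceneObject

PROMPT = ("the red fly on top of a yellow ball rests on a green table, "
          "behind a large cone")

# What I mean: which colour binds to which object, and what sits on what.
EXPECTED = {
    Relation(SceneObject("fly", "red"), "on_top_of", SceneObject("ball", "yellow")),
    Relation(SceneObject("ball", "yellow"), "on_top_of", SceneObject("table", "green")),
}

def score(manifested: set[Relation]) -> float:
    """Fraction of the relations I meant that actually showed up."""
    return len(EXPECTED & manifested) / len(EXPECTED)

# A weakly godlike system that swaps the colours manifests something that
# uses all the right words but means the wrong thing:
swapped = {
    Relation(SceneObject("fly", "yellow"), "on_top_of", SceneObject("ball", "red")),
    Relation(SceneObject("ball", "red"), "on_top_of", SceneObject("table", "green")),
}
print(score(swapped))  # 0.0 -- it manifested what I said-ish, not what I meant
```

Which is the whole joke: a benchmark that scores how often the thing you got is the thing you meant.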
Anyway, of course The Wirecutter would then be forced to produce a guide along the lines of "The Best Weakly Godlike AI Systems To Subscribe To For Most People" lest the NYT get cut off from that sweet recurring subscription affiliate revenue.
Okay, that's it!
How are you?
Best,
Dan
-
[1] View activities in the Dynamic Island on iPhone, Apple, 2022