s5e05: Nobody expects the hiatus newsletter
0.0 Station Ident
Hello. It's been a while. Let's just jump straight into it and avoid any awkwardness - here's some things that have caught my attention recently:
1.0 Software will eat the world, so let’s talk about software
There are a few inputs into this set of thoughts that are relevant as context. Here are two pieces of background:
1.1 Blood
I recently got a continuous glucose monitor (often called a CGM), which lets me check my blood sugar continuously as opposed to the current way I do it, which is to get out a lancet, prick my finger, and test (ie: discretely). My new CGM is from a different manufacturer (a Freestyle Libre) than my current discrete blood sugar monitor/meter (OneTouch VerioSync) and suffice to say, the software stack is incompatible and, broadly speaking, Not Good where Good means Meets My Needs.
(An aside: I am sad, tired and depressed that just letting you know what blood sugar meter stacks I use potentially makes it easier/more likely that someone could invade my privacy thanks to security vulnerabilities. For the nth time, I digress.)
One example: my old meter uses Bluetooth to talk to my iPhone and sync with its own service as well as write to my Apple Health data store. (An aside: the “it just works” on-demand Bluetooth connection between the meter and my phone stopped working in the latest iOS release).
My new meter works by having a small sensor stuck to my arm continuously measuring my blood glucose and communicating it, on demand, to a reader over NFC (near-field communication).
Manufacturer-supported Android software exists (in part because Android provides access to NFC hardware), so one could use an Android phone instead of the reader device, which has a Bad Touchscreen because apparently no one other than Apple understands what “good enough” means when it comes to touchscreens.
Until recently, although some Apple devices had NFC hardware (used to implement Apple Pay), Apple didn’t provide access through an API. That led to a workaround where a sort of Bluetooth dongle would sit on top of the NFC sensor, periodically use NFC to read out data and communicate it back to any device over Bluetooth.
a) If you ever wanted evidence that “intelligent design” doesn’t exist, you could certainly use this; and
b) Need I remind you that it’s 2018 and software engineering apparently exists as a profession?
So: I can’t use my phone to get access to my blood sugar data. Maybe I can just plug the NFC reader into my laptop. The manufacturer advertises a “cloud service” that can be used to upload and view blood sugar data, so I try using that, and it’s only *after* I register for it that the site helpfully points out that there is no way to use the website to upload device data if you have a Mac. In other words: the cloud service is view-only if you’re not running Windows.
The question as to whether it’s a good idea (I mean, you can certainly *do* it) to launch a consumer medical device that doesn’t support a significant consumer platform is left to the reader.
So that’s point one: I can see all the pieces here. There’s nothing technical *stopping* all the pieces from talking to each other. But they don’t. Because of people and, well, it’s coordination problems all the way down.
1.2 CES
It’s CES, the annual Consumer Electronics Show, which in my younger days used to be an event where I’d look forward to seeing electronic devices from an exciting future. In 2018, CES is an event that feels more like an elaborate Black Mirror LARP.
This year was the year it felt like everything got connected. I didn’t set out to follow CES news this year and instead learned about what was being launched through helpful Promoted Tweets that appeared in my timeline throughout the day. They were mostly about a new range of shower heads, kitchen and bathroom taps/faucets, microwaves, washing machines, dryers, ovens, stoves, lightbulbs, switches, cameras, locks, mirrors and so on that are now a) “smart”, b) connected to the internet and c) thusly Alexa/Cortana/Siri/Google-enabled. Oh and I guess maybe Bixby too?
So just from the ads I received in feeds, this year was the year you could sit down in your home, look around, and assume that more than 40% of the things you could name now had versions that connected to the internet for... reasons.
The two pieces above are the context: a personal reminder that software has a long way to go through yet another blood sugar meter stack, and an environmental reminder that despite software being, you know, “not great” it’s still apparently great enough that a group of people are deciding that software is worth putting in everything. (More precisely, software that talks to other software that talks to other software and so on.) My idiosyncratic niche cultural reference here that might make some of you wryly smile is: “IPv4 stacks find a way”.
(Seriously, though, there’s a thought here that “things that communicate to other things” find a way to increase the number of things they can communicate to.)
The meta-issue here is that a) Marc Andreessen is right, software is eating the world, *and that* b) most software still isn’t very good (corollary: we’re still not good at making reliable, secure software).
Whether or not there’s a real increase in utility, we’re putting software in more things (and, maybe, *more* software in more things). Why? Printing is still shit! (When I was thinking this out loud on Twitter, Stephen Woods pointed out that printing has gotten better over the years[0], to which my response is: it’s still shit though, so that doesn’t count. Sorry.)
(Some readers may note that I haven’t even mentioned SPECTRE or MELTDOWN yet.)
One of the assumptions here is that the software that eats the world will be, well, gourmet-class, Michelin-starred software (I realize my metaphors are the wrong way around here). In fact, it’s not even an assumption; we just ignore it. We just accept that the software that’s doing the eating is increasing utility *and*, I think, that any increase in utility is causally linked to reliability. (Spoilers: I disagree, but I have no idea how you’d start measuring the negative externalities of something that does do a better job, but only does so unreliably.)
The realist/pessimist in me thinks it’s more likely that software will eat the world and, well, it will look like current software does: there’ll just be more of it. Broadly speaking, it will continue to be badly designed, will keep falling short of its potential effectiveness in meeting user needs, and will remain tragically insecure from both an operational and a privacy point of view.
I just know that there is *someone* out there saying: “But Dan, AI will help us increase the quality of software!”, which isn’t... necessarily untrue? I mean you can drive a truck through that hope before you get to asking people what *practical* tools AI will provide to help increase software quality (someone from Deepmind will inevitably send me the paper), never mind the fact that we already *know* how to increase software quality, it’s just that we’re not incentivized to. The more developed version of this argument is “We won’t have to write software because AI will write it for us and we’ll just specify goals and use probabilistic programming, and that solves the problem of software quality because we’ll take the humans out of the loop” and my response is: ...maybe, but that also presupposes that we’re good/get better at goal identification and specification, and then my virtualized Eliezer Yudkowsky who I’m sparring with points out that I’ve moved the goalposts and *then* Nick Bostrom waltzes in to say that *then* we’ll use AI to help us with the goal identification and specification issue and at that point, because he’s a virtualized Bostrom in my head, I’ll punch him in the nose because SERIOUSLY.
Somewhat facetiously, one practical outcome of this is that for all the worry about the employment implications of software eating the world, there’ll be a lot of work just making sure this damn software does what it’s supposed to do. We (technologists) make fun of what happens during family holidays when it’s time to go back to the parental home and fix things. Do we think this situation is going to improve? The trend doesn’t seem to indicate so (and I say this *including* the positive effect that Apple’s more-usable devices, the Windows-to-Mac/iOS shift, have had in reducing the support burden).
A house with a patchwork, heterogeneous network consisting of smart lights, switches, ovens, taps, sinks, bathtubs, mirrors, washing machines, dryers, voice assistants, speakers and so on, with a corresponding mixed software environment, is an interoperability nightmare.
In other words: I predict (and thus will be wrong) that there’ll be ample employment for home network sysadmins/debuggers, a sort of equivalent of a plumber or general contractor who, instead of making sure the water/gas/heat/cold/doors flow, tries to make sure that the right bits flow in the right direction, with the right security permissions, as best as possible.
Never mind learning to code: learn to debug.
One throwaway CES observation: Kelsey D. Atherton reported that there was a practically infinite array of quadcopters on display at CES this year, which I’d say is evidence of drone commodification (which itself is an effect of microprocessor commodification). Put that way: it took roughly 40 years (counting from the 70s) for microprocessors (read as “computing” and “network connectivity”) to get into everything (read as “the smart home devices I mentioned above”), not because there are particularly *good* reasons other than “we can put an IPv4 stack in it and then figure out what it’s good for”, to which there may always be *some* good answers. So how long will it take before we make anything we can attach rotors to fly? Because why not? (Remember: flying also means moving.)
Another throwaway CES observation: the confirmation that Facebook has some sort of device with some sort of screen (“Portal”), combined with the news that it was shutting down the centaur human/AI hybrid bot M, led me to wonder aloud about the name (“interface invocation identifier”) for Facebook’s inevitable voice assistant (“inevitable voice assistant” being a phrase that I suppose is quite 2018). That’s all. There’s going to be another thing that has a name. It’s probably going to be female-gendered? I mean, it might not be, but Historical Evidence Indicates Otherwise and we’re all good followers of Bayes here.
Anyway. There’s clearly some incentive (“the market, I suppose”) for putting software that doesn’t quite work into everything and some people may say I’m just being curmudgeonly for pointing out that software isn’t as good *as it should be* and that something is better than nothing. To which I think I’d have to say: not all somethings are better than nothing? I mean, at what threshold do we decide that, you know, we deserve better than this?
As always with my current thinking, I can’t help but end up holding the position that, at some point, the production of software is going to be regulated and the real question is how we’re going to do it.
2.0 Idle thoughts on attention
This is a dumb train of thought but I like dumb trains of thought because sometimes they lead to interesting places.
I see that people are talking (more?) about different ways to value gross domestic product. For example: should we include breast milk production?
I choose to be obtuse about the attention economy and say, well, if we’re going to talk about how we’re going to value gross domestic product, then what happens if we think about how we’re valuing attention.
(I have to admit, part of my thought process is if Paul Krugman can think about the implications of economics on interstellar trade then I can think about the implications of being really literal about the value of the attention-second).
One naive question might be to ask what the theoretical maximum value of total global attention is to set an upper bound. Off the top of my head, that’s: attention purchase cost[max] * population attention seconds[max]. This, of course, doesn’t make sense because one of the reasons why a Super Bowl ad costs so much is because they’re a scarce commodity, and, well, imagine the hellish environment we’d live in if every single second of your life was spent experiencing the most expensive Super Bowl ad forever (and this is how I come up with my silly dystopias).
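(For a sense of just how silly even that upper bound gets, here’s a hand-wavy back-of-the-envelope version where every input is a made-up round number rather than anything rigorous: take a Super Bowl spot at something like $5m for 30 seconds in front of something like 100 million viewers, which works out to roughly $0.0017 per person-second; give everyone 16 waking hours a day, which is 57,600 seconds; call the population 7.6 billion. That’s about 7.6 billion × 57,600 ≈ 4.4 × 10^14 attention-seconds a day, and at Super Bowl rates that’s on the order of $700 billion a day, every day, forever. Which is, per the above, exactly the kind of number that doesn’t make sense.)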
Now, when I thought about this in public, Faris Yakob and Matt Webb brought up[1] the fact that total “time spent doing stuff that marketers care about” wasn't projected to increase or decrease substantially. Webb himself pointed to a link claiming that we may have hit peak attention about 10 years ago. From that point, there were no new rich seams of attention to mine and we’d arrived in zero-sum land: the only attention you can get is at the expense of attention given to something else. A terrifying prospect might be to imagine the ensuing land-grab were someone to open up a new seam of hitherto unextracted attention. Attention fracking: you can bet someone’s thinking about it.
(A stream-of-consciousness aside: there’s probably someone, somewhere who’s done the cost/benefit analysis on achieving widespread legalization of cognitive “enhancement” drugs like Adderall or Ritalin that could be said to have potential to a) slightly increase the amount of available attention from the total addressable attention (ie: you can stay conscious longer and sustainably reduce the amount of sleep that people take) and in parallel, b) increase the quality of attention. In another world, media conglomerates are trying to figure out if they should be buying pharma conglomerates and vice versa. Sorry if this is nightmare fuel for anyone.)
(Oh god here’s another one: “Methods Of Increasing The Bandwidth Of The Conscious Human Sensory Experience In Achieving 2018 Q1 Brand Awareness Targets, A Progress Report”)
[0] Stephen Woods, LLC (a Delaware corporation) on Twitter: "@blech @hondanhon I have a quibble: printing has gotten *much* better. Like, so much better. Maybe we've all forgotten how incredible bad it used to be"
[1] Matt Webb on Twitter: "@dancharvey @hondanhon @faris @undermanager reminds me that I once claimed we hit Peak Attention in *2008*. in retrospect I was probably right, and attention has been a war zone ever s… https://t.co/9AeqeIQ6pf"
3.0 After the Attack
[content/trigger warning: nuclear holocaust/post-apocalyptic scenario/mass death]
I’m told that one of the ways of dealing with existential crises is to move into the absurd, and as the threat of a potential nuclear exchange thanks to “the American situation” increases or at the very least continues to present itself in popular consciousness, I’ve been thinking about What Happens After The Attack as a way to think about What’s Happening Now.
For example: after seeing one too many Design Trends of 2018 articles at the beginning of the year, I went full-on “I’ll Show You Design Trends In Today’s Terrifying Climate” and wrote, well, a Design Trends of 2018 piece. Here’s an excerpt:
--
10 Design Trends That Will Shape 2018[0]
(Seriously.)
1. Augmented reality
Augmented (and mixed) reality will be a big hit in 2018 as nuclear war survivors look for any useful techniques to find water, food and medicine while on the move.
One to watch: Pokémon GO’s new “real weather” feature will be invaluable when post-Attack winter comes.
2. Branding goes bold, simple
Clear, simple branding is expected to be a lifesaver this summer when survivor collectives will be on the look-out for Red Cross supply depots.
Consider a design refresh for easy recognizability for malnourished, radiation-poisoned users.
--
The idea has stuck around in my head. In the same way that there’s a Twitter 'bot that retweets people who tweet startup ideas of the form “Uber but for [x]”[1] I now can’t get the idea of “[Startup] but After The Attack” out of my head.
Look:
* Tinder, but for After The Attack (there is an exploration of this on Twitter but I'm not going to link to it)
* StitchFix, but for After The Attack
* Moo Cards but for After The Attack (sorry Richard)
* Pokémon Go but for After The Attack
* Blue Apron but for After The Attack (meal prep time continues to be listed as 20 minutes but for some reason continues to take around an hour, even for foraged meals; some people think this is due to the requirement to wash and pat dry the vegetables when you can’t even find any goddamn clean water to drink)
* Warby Parker but for After The Attack (uh, don’t ask where I got my new glasses)
But: that’s the reality of living after a nuclear exchange, right? There will be millions of dead people, infrastructure will break down and it will be a literal nightmare. Ha, ha, ha.
[0] 10 Design Trends That Will Shape 2018 – Dan Hon – Medium
[1] Uber but for... (@uber_but_for) | Twitter
4.0 Unicorn chasers
All of that was a bit of a downer.
At work this week I’ve been talking about our individual hope that we can achieve goals versus our capability to achieve goals, and that it’s useful to catch ourselves when we confuse the former with the latter.
* I found these two pieces of writing, I Started the Media Men List, My name is Moira Donegan[0] and I Made the Pizza Cinnamon Rolls from Mario Batali’s Sexual Misconduct Apology Letter[1] by Geraldine DeRuiter, to be so many things, amongst them powerful, beautiful, vulnerable, compassionate, brutal, candid, apologetic, unapologetic, sad and hopeful. If you’re a man, please read them and practice thinking about the consequences of our acts before we act.
* Here, read about Come From Away[2], a musical about what happened when passenger planes in transit were diverted to Gander, Newfoundland and Labrador on 9/11 because it'll give you hope and also make you cry.
* I’m on Team Porg. Sorry if that means you’ll unsubscribe but it’s probably for the best because I don’t think it can ever work out between us.
[0] Moira Donegan: I Started the Media Men List
[1] I Made the Pizza Cinnamon Rolls from Mario Batali’s Sexual Misconduct Apology Letter – The Everywhereist
[2] Come from Away - Wikipedia
—
Anyway, Happy New Year. I think.
Dan