s5e07: Acquisition and ingest mode, dial for a wide spread
0.0 Station Ident
1.0 Acquisition and ingest mode
Go wide and take everything in - bring it all in, then start swimming to see patterns. Here's a dump of everything that caught my attention in the last few weeks before some themes and ideas start emerging or being reinforced:
* Raph Koster's favorite game designs from 2017. Played most of these, and am starting to feel a bit of a professional lack in not having a Switch - but it's not like I have time to play games anyway. At least I can wait until Labo comes out, which I think is wonderful for a bunch of reasons.
* Consensus is that Apple's HomePod, while not a great smart speaker (or not even a passable smart speaker, really?), is instead *really good for listening to music*. It follows Apple's recent trend of sticking hitherto non-viable amounts of computing power into non-computing-related devices. Cf: "what if we put a computer in a phone", "what if we put a computer in a camera", "what if we put a computer in a MacBook Pro" and now "what if we put a computer in a speaker" - so, lots of future-gazing talk of "computational audio", but similar evidence that maybe Apple hasn't gotten some of the basics right, or clear. That said, that's what some people said about their watch, and now those are all over the place.
* The Arc gene (via the always-good-to-read Ed Yong at the Atlantic) looks like it could have played a critical part in the development of memory and, from that, language. As usual, Peter Watts (Blindsight, etc.) has already thought about how it could be weaponized. Such an Arc-based weapon feels a bit like amusica, a disease from Alastair Reynolds' Century Rain that strips people of their ability to hear music as, well, music.
* Joseph Heath has a point of view on Iain M. Banks' Culture that has been doing the rounds: the Culture can't be defeated, because it's designed to spread itself. There are good counterarguments on Metafilter about where Heath may be wrong.
* ODINI: Escaping Sensitive Data from Faraday-Caged, Air-Gapped Computers via Magnetic Fields
* I read the excerpt of Sue Burke's Semiosis on Tor.com - the story of colonists coming to deal with life on a planet where the flora is considerably smarter than the colonists and everything else. Humans aren't the apex predator, and end up being subservient to, well, something more than what we'd think of as a mere plant, at least. Tried to check it out of my library and it's already on backorder, which is always a good sign.
* Someone archived a difficult-to-find paper on the design of Windows 95's interface. I am a sucker for the thinking behind interfaces.
* Typeset in the Future is turning into a book!
* Instead of watching The Good Place (I got about 3 episodes in before, I don't know, being a parent of young children got in the way), I recently spent time watching The Cloverfield Paradox (not great) and Altered Carbon (... complicated?). On top of all of my friends and betters telling me The Good Place is amazing TV, I also have to contend with a goddamn philosopher saying that it's good, too.
* That Facebook patent application, 20180032883, SOCIOECONOMIC GROUP CLASSIFICATION BASED ON USER FEATURES, in case anyone wants to actually read the patent and what's claimed, instead of just going off the few images from the application that were tweeted. (I know! That would be more work!) One of the things going on here is that I believe Facebook *already* cross-checks the data it has on you against the established data brokers, so this is more like *refining* a socioeconomic group classification - or seeing whether they can replace needing that data by rolling their own. Or all of the above! Who knows, and who can divine the intent of a large multinational corporation!
* Multiple thoughts about voice user interfaces and my suspicion that *most* people (citation needed, etc.) are just using them (Alexa, especially) to play music and add things to lists. But then, the realization that Alexa does a just-about-bad-enough job of playing music, or adding things to lists! I mean, yes, it's certainly easier to use Alexa to play music than it is to look down and do it on my phone. But there's an 80/20 rule here: Alexa can play 80% of the music fine, but the other 20% of the time I need to reverse-engineer the right verbal incantation to get the specific piece of music that I want (and this is with the household Alexa hooked up to Spotify, so not, I imagine, the approved and more optimized route of Alexa working with Amazon Prime Music). An example: wanting to listen to Schubert's 8th symphony - classical music generally - and needing to figure out how to ask for exactly what I want. Another: having to write a note telling a babysitter to say "Alexa, play the album Laurie Berkner Lullabies by the Laurie Berkner Band from Amazon Music", because otherwise Alexa will play "I can't find the album Laurie Berkner Lullabies". Anyway: all of this is to say that the verbal gymnastics required to best use a voice assistant feel a little bit like learning Graffiti in the 90s to derive the (real, but not *massive*) benefit of personal digital assistants. And, well, we're a species that adapts.
* And another thought, from talking with longtime (10+ years!) game designer friends: some of these failures in voice user interfaces feel like *really obvious and surmountable design issues* to those of us who have experience designing narrative gameplay experiences that need to work in the right way, otherwise people get frustrated. Real-world, experiential + digital puzzle design needs to funnel people down a correct path to work properly, but ideally works best when people don't realize they're being funneled.
2.0 The Unreliable Feed
A written-up version of a bunch of tweets composed in the cab between work and Sacramento airport.
I recently realized (yet another) thing I dislike about "algorithmic" feeds like Facebook's newsfeed and Instagram's feed. I'm singling those two out because they're the worst examples but, as I started talking about this on Twitter, a few people reminded me that Twitter's main timeline does this too.
The big thing is that they're stressful in one particular way: when I open a feed like Instagram's and scroll past something without seeing who posted it, it's really hard to find that post again - and that's when I get frustrated. My mental model for this is that *some* of this is because apps like Facebook and Instagram do something like reinitializing/relaunching/reloading the feed object when you switch context back to that view from either another view in the same app, or if you're switching back from another app. The effect is a bit like the flash-render you get in desktop browsers where suddenly everything is laid out again (although I suspect/am fairly certain it's actually nothing to do with this under the hood).
From a user point of view, though, what happens is this: I open the app and see something that's interesting (good! 'Meaningful content' has been presented to me!) but, for whatever reason, the feed view is refreshed and, crucially, re-ordered.
There's an interpretation of this refresh/re-order behavior that insists it's helpful and *helping* the user. Re-initializing the feed object grabs the feed again from whatever network resource. In the intervening time between the last feed load and this one, new content has appeared, so that new content has to be ranked. All to make sure that 'meaningful' content is prioritized so I can be 'more sure' of seeing it, right?
But then you combine this with bad search (how, exactly, am I supposed to find *that* dog photo again if I don't know who posted it? What search terms am I supposed to use for a specific visual search if I'm not sure how to describe what I'm looking for? The answer here isn't simply "machine learning will fix this"). My theory is that this unpredictability of content order is super stressful.
What you've got is an interaction model where the primary object is constantly changing and updating, and you can't rely on it. I can load a particular feed right now, it accidentally refreshes due to some OS/application resource quirk, and now there's *new* content interspersed amongst old content, in a new order. How do I find the thing I saw half a second ago?
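To make that concrete, here's a minimal Python sketch of the "recalculate on every load" model I'm describing. Everything here is hypothetical - Post, rank, and load_feed are made-up names, and the noisy score is just a stand-in for whatever opaque ranking these apps actually do:

```python
import random
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    author: str
    engagement_score: float

def rank(posts):
    # Stand-in for the platform's opaque scoring: engagement plus a
    # little noise, so two consecutive loads can disagree on order.
    return sorted(posts, key=lambda p: p.engagement_score + random.random(),
                  reverse=True)

def load_feed(all_posts):
    # Every load re-fetches and re-ranks from scratch: there's no
    # memory of the order the user saw last time.
    return rank(all_posts)

posts = [Post("1", "alice", 0.50), Post("2", "bob", 0.45), Post("3", "carol", 0.40)]
first_view = load_feed(posts)
second_view = load_feed(posts)  # the accidental refresh, half a second later
# first_view and second_view can disagree, so the dog photo you
# half-saw at position two may now be anywhere in the list.
```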
I think a big part of people complaining about algorithmic timelines isn't *just* about something or someone else "choosing" what's important and displaying it to you - it's also because in the particular implementations that we deal with every day, we can't predict what will happen from feed refresh to refresh.
Pamela Drouin pointed out that this is essentially the breaking of wayfinding (she has a lightning talk on the subject here).
The standard refrain here is: well, what do you expect from Facebook and Instagram - breaking wayfinding like this encourages/rewards certain behavior that's profitable for them. At the same time, though, I would *like* to think that individuals aren't *purposefully* breaking wayfinding - there are quite a few instances where I do think feed refreshes are unintentional and genuinely are bugs. But in those cases, they should be fixed! From my point of view, it's also a pretty suspicious coincidence that unpredictability probably contributes in some way to more "engagement", i.e. refreshes.
The second point is that, intentional or not, unpredictability in the state of the feed is a really bad way of getting "meaningful content" in front of people because it means they can't find it again.
Another way of looking at this is that the feeds themselves are stateless when the UI metaphor they've inherited has been a stateful one. Timelines (the old, reverse-chron only ones) worked a bit like this: when new stuff came in, it was prepended to the head of the list. Once anything *had* been added to the list, or the list had been viewed, it had state: it didn't change.
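In hypothetical code (a sketch of the contract, not anyone's actual implementation), that old model looks something like this:

```python
class ReverseChronTimeline:
    """The old contract: new stuff goes on the front, the past never moves."""

    def __init__(self):
        self._items = []  # head of the list = newest

    def add_new(self, item):
        # New items are only ever prepended...
        self._items.insert(0, item)

    def view(self):
        # ...so anything already added keeps its position relative to
        # everything else the user has seen: the past has state.
        return list(self._items)
```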
Or, to try another metaphor: it's okay for the future to be a black box. But once you take things out of that black box and you put them in an order, *don't fuck with that order*. The future can be a black box, but the past can't be.
My assumption is that users would *like* feeds to be stateful, even in the current model - by that, I mean you could go back and see the feed as it was at a particular point in time. But they don't feel stateful: they're recalculated on each view.
Some feeds don't behave this way, though: I don't think my Tumblr feed does - it acts more like a traditional reverse-chronological timeline, where new stuff only appears at the top and I don't expect stuff I've seen before to change position.
One consideration here that might be used to defend the "recalculate on each view/refresh" position is that statefulness is a big engineering problem. Sure, it's a big engineering problem, then! You're suddenly tracking and maintaining n states for n users where right now you don't have to - you just throw the newly created, cached feed at users whenever they ask for one. No need to remember the past. I imagine it's easier and faster to do things this way.
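Roughly - and this is a naive, hypothetical sketch of the bookkeeping, not a claim about how anyone would actually build it at scale - per-user statefulness means something like:

```python
frozen_feeds = {}  # user_id -> list of items, in the order first shown

def refresh(user_id, fetch_ranked_items):
    # fetch_ranked_items is whatever ranked fetch the platform already
    # does; the difference is what happens to the result.
    seen = frozen_feeds.setdefault(user_id, [])
    new_items = [item for item in fetch_ranked_items() if item not in seen]
    # Only genuinely new items get ranked and prepended; the order of
    # everything the user has already been shown is never recomputed.
    frozen_feeds[user_id] = new_items + seen
    return frozen_feeds[user_id]

# e.g. refresh("dan", lambda: ["post-9", "post-8"]) prepends anything
# new without touching the order of posts already shown.
```

That dict of frozen feeds is exactly the n-states-for-n-users cost: storage and consistency work the throwaway version never has to do.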
In other words (and one more attempt at explaining/describing this behavior): what we experience when we reload a feed and items we've seen before have changed position - in what was formerly treated as a reverse-chronological timeline (even though it isn't one now! Doesn't matter! People, I think, are treating it as if it *is*) - is that the historical order of things has been re-computed. I think this is really stressful!
--
OK, this one had been sitting in my drafts for far too long, and let's just not question the particular frame of mind that's allowed me to just get on with it and send it. We'll see what, if anything, happens next.
(oh, and notes as always are appreciated, especially the "I have never sent you a note and you always ask for notes and I don't quite believe that you really mean any note is ok, so here's a note that just says that" kind)
Cheers,
Dan