Around midday Pacific time on Monday, January 18, 2021, in Portland, Oregon, U.S.A. There’s a lot going on.
As ever, if you’re able and feel like it, subscribing helps support my writing.
Here’s a thing that recently wandered into my head.
(“Where do your ideas come from,” asked the tired cliché of an audience member, and the answer was: “the brain is endlessly combining, mixing and matching. It is a machine for making connections, and some of them are strong, some of them are weak, some of them are novel and spark interest, and sometimes not; so one answer, dear questioner, is this: that they come from an organ that evolved to do so”).
Snowfall. No, not the Bond film, nor Bong Joon-ho’s 2013 film.
Snowfall (actually: Snow Fall: The Avalanche at Tunnel Creek [wikipedia], but for the rest of this newsletter, just Snowfall) was a Pulitzer Prize-winning piece of long-form journalism, published by the New York Times on December 20, 2012. Which was eight years ago.
If you’re a certain kind of person, you probably know about Snowfall because it was a piece of long-form multimedia journalism published by, of all organizations, the New York Times.
I was probably thinking about Snowfall this last Friday after the trove of data following the Capitol insurrection had started coming out. Like most breaking news these days, making sense of the attack on the U.S. Capitol and understanding what happened was, and is still, difficult. It’s worse now than it used to be: a good 70 terabytes of data were scraped from Parler, and around the same time, open source intelligence outfit Bellingcat were busy archiving data from Twitter, Instagram, Facebook and other sources.
So, I wondered, would we get a Snowfall-type, multimedia integration of What Happened On January 6? If you were to be uncharitable, you might say that the Times’ Snowfall was a sort of Encarta article on steroids - solid reporting, integration of data, sources, creation of new material. Lots of viewport-wide video, using LIDAR to create new 3D reconstructions, all that kind of thing.
Aside from the need to understand, I feel that more people, in general, are open to a more interactive explanation of events. The world has spent the last twelve years with the Marvel Cinematic Universe, kicked off by Tony Stark traipsing around augmented reality environments. Iron Man 3 has a whole scene of Stark reconstructing a crime scene and wandering through it. This sort of thing has been in shows like Bones for ages, and games like 2009’s Batman: Arkham Asylum had the world’s greatest detective using Detective Mode to render an augmented view of the game’s world highlighting all the relevant clues, leading to the inevitable question of why Detective Mode wasn’t just turned on all the time (side note: no Sherlock Holmes game with this mechanic, then?).
But anyway. Snowfall felt like A Moment in Web Design. When the New York Times does something, that gives you a certain latitude, as if to say: well look, if this frankly conservative organization is able to do this kind of design, then why aren’t we? So for the purposes of this, the story I’ll tell is that it became more okay to include heavy video and renders on webpages. Just a few months later, Apple would announce the cylindrical Mac Pro (yes, the trashcan one) at WWDC, and if memory serves, the teaser site was one of the big scrolljacking ones: large images that animated and transitioned as you scrolled down the page. Instead of scrubbing through a timeline, scrolling up and down - navigating the page itself - scrubbed forwards and backwards through a sequence of images. In Snowfall, it was the route taken down a mountain; on the Mac Pro page, it was the rendered components of Apple’s ill-fated desktop computer exploding and contracting. Between Apple and the New York Times, it felt like suddenly Big Heavy Animated Pages were a thing you could do.
Anyway, that’s just one of the branches wrought by Snowfall. The other one was the realization of multimedia journalism on the web, even though part of me feels a little disappointed that this kind of multimedia journalism was fairly static and, well, rendered. But it was 2012 of course, and while you might attempt to do realtime 3D renderings of data for narrative-driven storytelling (is there another kind of storytelling?) in a browser, Safari wouldn’t ship with official support for WebGL until three years later, in the summer of 2015.
All of this is to say: this is 2021, and never mind the New York Times’ other moral panic, that children are spending too much time in videogames (during a goddamn pandemic, I might add, when videogames afford a chance for some remote socialization, at the goddamn least), there are people who’ve grown up with games their entire lives and are entirely happy navigating information-rich 3D environments.
Because now, the raw video, images and text posts of the Capitol seditionists include location data. Pretty fine-grained location data. Home computers have enough processing power, never mind on-demand cloud computing power, to, I don’t know, run offline photogrammetry, stabilization and 3D scene reconstruction on all of those images and videos, placing them in a collective space navigable not just in three dimensions but in time, too.
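The idea is mostly a wish, but the very first step of it is concrete enough to sketch. Assuming, hypothetically (this is not any real Parler schema, and the field names, grid size and time window are all made up for illustration), that each recovered clip comes with a latitude, longitude and timestamp, a toy spatiotemporal index might bucket media into roughly-50-meter grid cells and one-minute windows, so a viewer could scrub through place and time:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class MediaItem:
    # Hypothetical record: these fields are assumptions,
    # not any real scraped-data schema.
    path: str
    lat: float
    lon: float
    timestamp: int  # Unix seconds

def spatiotemporal_index(items, cell_deg=0.0005, window_s=60):
    """Bucket media by grid cell (~50m of latitude per 0.0005°)
    and one-minute window, so clips shot near each other at
    roughly the same moment land in the same bucket."""
    index = defaultdict(list)
    for m in items:
        key = (round(m.lat / cell_deg),
               round(m.lon / cell_deg),
               m.timestamp // window_s)
        index[key].append(m)
    return index
```

Real photogrammetry and scene reconstruction is enormously harder than this, of course; the point is only that geotagged, timestamped media clusters naturally into a structure you can navigate in space and in time.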
Part of why I want this is not just that there was a lot going on, but that there’s a lot of data, too! And I don’t think, at this point, you even need to call this sort of thing data journalism, at least not on the outside, because it’s just making sense of what happened, and yes, I get it, that actually includes doing a lot of data engineering and analysis and so on.
I don’t want to get completely sidetracked into a whole essay on traditional media trying to figure out what to do with interactives, when there are examples like Vi Hart and Nicky Case’s Parable of the Polygons, which these days I would’ve expected on a property like Vox. Instead, we’ve got new publications like The Pudding, eight full-time journalist-engineers making interactive visual essays. It feels like a sign of how early we are that we’ve moved on from interactives, which is what I remember the BBC calling them, to visual essays, in a media world that’s still struggling with anything other than static content.
But, things change. The same day I wondered out loud about how much I’d appreciate a Snowfall for the Capitol insurrection, the Los Angeles Times’ data and graphics team shared that they’re hiring a journalist to lead the creation of data-driven visual stories (there’s that word again, visual: is it because there are aural stories now, and all the money pivoted from the video content boom to the podcast content boom? Is it because it just means… “with pictures”? In which case I’m still mildly disappointed that the difference is the interactivity with the pictures, not the pictures on their own).
The next day, on January 16, the Washington Post published a video timeline of the Capitol building siege. The Post’s… piece? “used a facial-recognition algorithm that differentiates individual faces — it does not identify people — to estimate that at least 300 rioters were present in footage”.
The following day, ProPublica would publish What Parler Saw During the Attack on the Capitol, a linear timeline of hundreds of videos posted by Parler users during the attack.
Of course it takes time to responsibly publish reconstructions or understandings of the events and to check that they’re accurate. We’ve already seen the first-pass attempts, with individuals analyzing and querying the Parler data, visualizing posts and individuals on maps.
But I’ve been infected by What Movies And Games Look Like. It feels like there’s a richer space of information here. Isn’t combining this data and making it queryable, presenting it in a way where humans might intuit patterns, the sort of thing Palantir is supposed to be good at?
I’m looking forward to seeing this, if someone puts it together. What’s somewhat reassuring is that we don’t have to wait for the New York Times to do it this time. Or the Washington Post, or ProPublica. There’s much more of a chance of a Bellingcat or others piecing together the data into different versions of a whole. There’s a lot to understand.
Via the Orange Site, here is a demo video [github repo] of a Reddit bot that renders Reddit conversation threads as Phoenix Wright: Ace Attorney scenes, which prompted a few thoughts. First: this reminds me of MS Comic Chat, an IRC client out of Microsoft Research that rendered IRC as panels in comic strips, complete with a pie-menu-style method of emoting, and which I maintain is one of the greatest things ever for many, many reasons. Second: somebody please fork this project and use it on Twitter threads.
Actually, there was a third thing: I love MS Comic Chat because it added more depth and nuance to textual chat in what I feel was an accessible way. Feels like we could do with more depth and nuance these days! Which collided with Iain M. Banks’ idea of an aura field, which allows the drones of his Culture universe to convey mood. What would it be like if Twitter let you quickly and easily choose color backgrounds for tweets? I mean, it’s not like we don’t already have this for Instagram text posts…
More media stuff: what would it take for an organization like the New York Times to be able to publish and hold itself to the Axios Bill of Rights, which includes assertions like never publishing op/eds?
A group of researchers in Japan have produced superconducting processors. Superconducting means that, with the state of materials science at the moment, they’ve got to be cooled quite a bit, which at the very least satisfies the trope of Very Cold Processors For The Artificial Intelligence.
A good (honest) essay on rethinking Design Thinking by Arielle Wiltz, Reimagining Design Thinking: How the Groundbreaking Georgia Election Wins Demonstrate Inclusive Innovation that also cites Maggie Gram’s excellent On Design Thinking, from n+1 magazine.
Look, videogames are just part of culture now. IKEA said so.
I’m sharing this with you on the understanding that you won’t just go and buy all of them, leaving none for me, but I think it’s relatively new that Susan Kare is now selling signed, numbered, limited-edition prints.
I hope you enjoy learning about the Nonhuman Autonomous Space Agency.
Content warning: despair in the face of current events - my thread of rating things out of ten on a “How Children of Men Is This?” score.
Okay. That’s it. I would ask how you’re doing (how are you doing, I guess?) and expect nothing in return. All I have to offer is weary sympathy and to note that all I’m trying to do is just get through to the next day.
See: Cyd Harrell, Twitter, 2021.