Episode Twenty One: Net-Native Storytelling, Selling The Future, Always On

by danhon

1.0 Net-Native Storytelling

Paul Rissen[1] (and I’m struggling to find a way to deal with this in writing without feeling like I’m a DJ with a call-in talk radio show) asked if I had any further reckons about the whole Marvel Universe API and the weak signal of Irrational Games’ implosion.

I have a shtick about why I’m interested in internet-native storytelling, and what interests me about it. It mainly stems from the holy-shit moment of The Beast, one of the first Alternate Reality Games, back in 2001, where great writing (thanks, Sean Stewart) met great game design (thanks, Elan Lee) and an almost creepy knowledge of how to use the internet as a storytelling medium, as opposed to a transport mechanism.

(For those unclear: YouTube is more transport mechanism than medium; its benefit is mainly in enabling the distribution of video to an audience, and the trappings of interactivity that it does have are baubles around the edges. The internet *itself*, on the other hand, enables storytelling of a qualitatively different type.)

The defining characteristic of the internet that we have today is the link. We’re maybe, just maybe, post-link with the world of practically balkanised, mostly hermetically sealed app experiences on mobile, but for the most part, hypertext has won. Things, as it were, connect to other things.

It’s those connections that are so interesting and that enable a lot of what makes the internet so great. There were two talks at last year’s XOXO that I think attacked the same subject from different angles: the first from Evan Williams[2], who has perhaps single-handedly, no matter how much Dave Winer may protest, been at the centre of enabling people to put things that link to other things on the internet. The second was Mike Rugnetta’s[3], about, well, the entire internet and culture. You should watch them both, because they really are very good, but if you only want to watch one, then I’d recommend Mike’s.

So. Links. Links are what make the internet the internet.

And if there were a thing that was *really* *really* “the internet”, then I would have a hard time coming up with something better than the wiki, which is essentially nothing more than a giant ball of links compressed together under a single namespace, and which even lets you put your own things in it that other things can link to! If you want!

Look, here’s a pop-psychology reference for me to throw in to prove a point without any real citation in some sort of Malcolm Gladwell-alike impression: you know that feeling of Flow? I get it when I fall into tvtropes[4]. I just click from article to article, endlessly exploring a maze of twisty passages, all alike in terms of textual content being injected into my brain through my eyeballs.

But, unlike linear fiction, and because of the link, the path that I take can be unique to me. I still have an entire fictional universe to explore, and that universe need not be limited, in the way that wikis currently are, to mainly textual content. There’s been, I reckon, a preoccupation with the editing side of the wiki, as opposed to the consumption side. What does it look like when a work of fiction is created by an author as a wiki and then experienced as one? Can it be episodic? When the emphasis is on the act of aimlessly travelling through a morass of universe? It feels like there’s something epistolary about it, because what you end up with is a navigated series of documents.

Let me also be clear that I don’t mean some sort of choose-your-own-adventure “interactive” wiki-novel. For a variety of reasons, that would be dumb. More that there can exist an authored text with a defined beginning, middle and end, where the reader explores it non-linearly without affecting the plot. It would be as if someone had written something like World War Z and just dumped it on your desk, out of order, and you had to piece together the story of that universe.

The counterargument to something like this is: gosh, that sounds like an awful lot of work. And compared to consuming regular linear media, yes, it is. But then so is spending time trawling through TVTropes, and that’s why I invoked pop-psychology favourite Flow, earlier. Because if you have a Flow card in your hand then you can play it any time and it trumps any other argument someone may have, so there.

All this is to say that I remain excited about the possibilities of the Marvel Universe API. Because once you have an API, you have Things, and when you have Things, they can link to other Things, and Operations can be performed upon those Things to Transform them, and before you know it you have more Things. The API (and this really is off-the-top-of-my-head reckoning) is, in a fiction sense, providing you with ready-made entities that can be played with. Sure, you could build Marvel Universe Reference Guides and all that stuff, but that’s not as fun as making your own splinter universe and *still* being able to have it all tie together. (Individual instances of parent entity objects retain their relationships, right? So you can use your entity as a point of reference and use it to pivot around different universes. Duh.)
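To make that reckon slightly more concrete, here’s a minimal sketch — toy data only, not real Marvel API responses, and the entity names are entirely made up — of what “Things linking to Things” buys you. Once entities carry references to each other, wandering a universe (canonical or splinter) is just following links outward:

```python
from dataclasses import dataclass, field

@dataclass
class Thing:
    """A fictional-universe entity: the kind of object an API might hand you."""
    id: str
    name: str
    links: list = field(default_factory=list)  # ids of related Things

def connected(universe, start_id, depth=1):
    """Walk outward from one entity, following links, like clicking through a wiki."""
    seen, frontier = {start_id}, [start_id]
    for _ in range(depth):
        frontier = [l for t in frontier for l in universe[t].links if l not in seen]
        seen.update(frontier)
    return seen

# A toy splinter universe: my own made-up Thing pivots off canonical ones.
universe = {
    "spider-man": Thing("spider-man", "Spider-Man", ["daily-bugle", "green-goblin"]),
    "daily-bugle": Thing("daily-bugle", "Daily Bugle", ["spider-man"]),
    "green-goblin": Thing("green-goblin", "Green Goblin", ["spider-man"]),
    "my-oc": Thing("my-oc", "My Splinter-Universe Hero", ["daily-bugle"]),
}

print(sorted(connected(universe, "my-oc", depth=2)))
# → ['daily-bugle', 'my-oc', 'spider-man']
```

The splinter-universe entity keeps its relationship to the canonical Daily Bugle, so two hops later you’re back in continuity. That’s the whole trick.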

[1] https://twitter.com/r4isstatic/status/435920387990622209
[2] http://www.youtube.com/watch?v=zR1xDBFdRZ0
[3] http://www.youtube.com/watch?v=-D9Xq3Xr8aE
[4] http://tvtropes.org/pmwiki/pmwiki.php/Main/HomePage

2.0 Selling The Future

I had a piece brewing in Medium for a while that just kind of petered out, so as an experiment, I’m trying to rewrite it here in a shorter and more concise manner. So, here are the things that caught my attention:

Item one: 23AndMe’s first TV commercial[1]. Check out the motion graphics and the structure that they use to portray information. They say we’re from the Future and we’re here Now!

Item two: Oh right, that reminds me exactly of the kind of stuff you see in Children of Men[2].

Item three: In fact, there’s an entire visual shorthand for selling the future and portraying the future that’s been developed in film and television over the past few decades. One good place to see that is at the Typeset In The Future blog[3].

The thing here is that the future that was always a few years away, and that we’ve been exposed to on film and in fiction, is tentatively touchable. Nest, with Google behind it, will surely embark on an advertising campaign for Smart Home Appliances. Ads featuring a pseudo-intelligent, voiced personal assistant are now an actual real thing, and not the kind of thing that you see comped in through some motion graphics in a Tom Cruise SF thriller.

One of the reasons why Spike Jonze’s vision of the future in Her is so alluring (if not technically accurate) is that the guy cut his teeth in the advertising world. He was, we should remember, the guy who anthropomorphised an IKEA lamp for us[4]. In short, Jonze knows how to sell.

So here’s the thing: advertising has already anthropomorphised things. Advertising already knows how to tug at your heartstrings and show you that P&G Winter Olympics ad that will have you bawling, hands down, within thirty seconds. Because they – we – have this thing down to an *art*.

Now, in a very short amount of time, all of that machinery is going to be brought to bear on selling you the stuff you thought was in the future. If software is eating the world, and software’s going to be powering all the formerly dumb things we’ve had in it, then you can be sure that those physical things with software are going to be advertised, in a way, to you. Where this gets even more interesting is that the advertising is going to have to *explain* what these things do. Why you should have a thing in your home that is connected to the network and can talk to you. And, well, explaining outbreaks of the future to people? That’s going to be pretty cool.

[1] http://www.youtube.com/watch?v=h83n6V7q7S4 – not on 23AndMe’s YouTube channel because the FDA is preventing sporadic outbreaks of future until they’ve been fully investigated.
[2] https://vimeo.com/37658689 – gorgeous work from Foreign Office.
[3] http://typesetinthefuture.com – prepare to nerd out.
[4] http://www.youtube.com/watch?v=dBqhIVyfsRg

3.0 Always On, Slightly Slow

Matt Locke wrote in with the caveat that Always On doesn’t mean Always Available: this makes sense with teens and children, who have less control over their time than adults do (in theory, right? I mean, I know who has control over my time and it isn’t necessarily me, and it more begins with an M and ends with an icrosoft Outlook Meeting Request). What I’d probably do now is point at someone who actually knows what they’re talking about, like danah boyd[1], on the real behaviour of teens and how they treat persistent connectivity.

My gut instinct, which also feels like a somewhat wrong reckon, is that people who’ve grown up with persistent connectivity have roughly the same amount of stress and anxiety (i.e. not a significant difference or reduction) in terms of being overwhelmed by the deluge of incoming information. I do wonder if there’s enough of an audience for an equivalent to the slow-food movement (which is, kind of, something that the long-read trend is addressing).

A lot of the reaction that I saw to the new Facebook Paper app was about how the infinite vertical scroll method of displaying News Feed items was “more efficient”, which speaks directly to the way those particular people saw the task of checking a stream: a list of items that must each be evaluated, quickly, before moving on to the next, because the stream is never-ending. But there are different types of feeds and streams: in particular, Twitter’s more ephemeral version places less pressure on you and grants permission (if not explicitly, then implicitly) to miss items in the stream. Facebook, with its algorithm, places the burden upon itself to determine what the “most important” and relevant information might be.

So what does a slow, or rate-limited feed look like? (Extra credit for identifying a business model behind it, too)
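For what it’s worth, here’s one hypothetical answer sketched in code — every number and name here is invented, not anyone’s actual product. A slow feed queues everything published into it but only ever releases a small batch, no more often than a fixed interval. The rate limit *is* the permission to miss things:

```python
import time

class SlowFeed:
    """A rate-limited feed: however much arrives, the reader only ever sees
    a trickle — say, three items per refresh, at most once an hour."""

    def __init__(self, items_per_refresh=3, refresh_interval=3600, clock=time.time):
        self.items_per_refresh = items_per_refresh
        self.refresh_interval = refresh_interval  # seconds between batches
        self.clock = clock  # injectable for testing
        self.queue = []
        self.last_refresh = None

    def publish(self, item):
        """Anything can be poured in; the reader's pace doesn't change."""
        self.queue.append(item)

    def refresh(self):
        """Return the next small batch, or nothing if it's too soon."""
        now = self.clock()
        if self.last_refresh is not None and now - self.last_refresh < self.refresh_interval:
            return []  # too soon: the feed grants you permission to miss things
        self.last_refresh = now
        batch = self.queue[:self.items_per_refresh]
        self.queue = self.queue[self.items_per_refresh:]
        return batch
```

Pulling to refresh a second time inside the interval gets you nothing, by design. (The business model remains extra credit.)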

[1] http://www.danah.org

4.0 Follow-ups

Matthaeus Krenn posted a concept for a new car user interface[1] (I last talked about cars in episodes twelve and thirteen[2]) which looks pretty interesting, mainly for its initial concept of you-don’t-have-to-look-just-put-your-fingers-anywhere on a touchscreen. I see some problems with it (a lack of glanceable information, for example) and wonder about the cognitive overhead of having to remember whether I need to use two, three or however many fingers to operate a control, and how far apart they have to be, in the absence of visual feedback. That’s the thing about physical controls: you can kind of see how you need to, or can, use them, and it feels like there are a bunch of Design of Everyday Things, Norman-type affordances that have been lost.

[1] http://matthaeuskrenn.com/new-car-ui/
[2] http://tinyletter.com/danhon/letters/episode-twelve-attention-star-trek-and-cars and http://tinyletter.com/danhon/letters/episode-thirteen-the-auto-show-learning-to-code-and-niggling-thoughts

Alright! Episode 21 is over. You should all go and watch The Lego Movie now.

See you tomorrow,