Episode Forty Six - Snow Crashing 3; Video is a Content-Type; Blame Your Tools
0.0 Station Ident
I'm writing this at 1:25pm on Thursday, March 27 in Perth, Australia. I have the next few days in Perth - I'm doing this xmedialab thing tomorrow, then spending Saturday in a studio recording what I presented on Friday. I've added Yet Another International SIM to my collection - two dollars a day for five hundred meg on Optus, and while I know I'm in a vanishingly small market, I wish international roaming would get its act together.
1.0 Snow Crashing 3
I remember thinking that millimeter-wave radar sounded so cool when I first read about it in Snow Crash. And then it started turning up, I think, in movies like Robocop and Total Recall as the technology that would show firearms on active security scans. Millimeter-wave radar hasn't yet made the jump to consumer technology, and you're most likely to encounter it when you go through a security scan at an airport. So while YT's RadiKS Mark II Smartwheels use sonar, laser rangefinding and millimeter-wave radar to identify mufflers and other debris, the only place you'll encounter the whole package right now is a self-driving Google car. Practically, though, that's a gnarly engineering problem. The smartwheels consist of hubs (presumably where your computing and battery/kinetic energy generation lives) and extensible spokes with contact pads - it all makes sense, until you think about how small that package has to be. Perhaps the board is all ruggedised battery and computing power, and the wheels pull data from sensing built into the board.
When forecasting the future, you have wildcards: the sudden availability of high-Tc, ambient-temperature superconductors would definitely qualify. (As an aside, I used to enjoy reading Peter Cochrane's[1] annual forecast reports out of British Telecom.) As it stands, I'm not aware of any radical advance in materials science that's going to give us cheap, mass-produced superconducting electromagnets anytime soon. Suffice it to say that by the time they filtered down to harpoons for skaters, they'd already have made an impact for someone like Elon Musk.
There's a wonderful description of the fire hydrants at The Mews At Windsor Heights: brass, robot-polished (another aside as to how the economy works in Snow Crash's toy universe - perhaps they're cleaned by relatives of the Rat Things designed in Mr. Lee's Greater Hong Kong?), designed on a computer screen with an eye toward the elegance of things past and forgotten about: that is, artisanal fire hydrants designed to maximise property value, brand-perfect, maintained without a human ever having to touch them.
Later, YT takes a shortcut through White Columns, a sort of capitalist-endorsed racial segregation through property values, a future not entirely unenvisageable had the religious discrimination legislation proposed in a variety of US states over the last month actually passed: WHITE PEOPLE ONLY, the signs say, NON-CAUCASIANS MUST BE PROCESSED. So, not strictly *denying* service to non-caucasians, just... processing them. This is another opportunity to remind ourselves about the other big thing that Stephenson - and a lot of other people - missed. Ubiquitous wireless data, or even the concept of wireless data at all, just doesn't figure in Snow Crash's toy universe. YT has a visa to White Columns, but it's encoded as a barcode on her chest, and a laser flicks out to scan it as she rides past. No RFIDs here, no Bluetooth iBeacons, no short-range wireless comms - it's as if she literally has a QR code pasted to her chest, ready to be read by anyone.
I'll leave this Snow Crashing instalment with this one quote, though: "the world is full of power and energy, and a person can go far by just skimming off a tiny bit of it."
[1] http://www.cochrane.org.uk
2.0 Video is a content-type
The topic I've been asked to speak about in Perth is trends in video, social and mobile. Which is a pretty excitingly large topic, and I've decided to cover them in that order in my keynote - mainly because they seem helpfully arranged by increasing interestingness and impact. Once I'd sat down and done some thinking, a sort of hierarchy emerged: video at the bottom - the least interesting, a mere content-type or format - followed by "social" as a relationship graph open to a far more interesting combinatorial explosion, followed by "mobile" - the latter two of which I'll cover in subsequent issues.
So here's what I've been thinking about video.
First, the easy stuff. Video's just a content-type. It's one of the best and most effective storytelling methods we have at our disposal, mainly because we've spent so much time with it as a species that we're literate in both its consumption and its production. At the day job, I can assure you that a stupendous amount of care and attention goes into making sure that, at least at the 15-to-90-second ad end of the format, you're emotionally engaged. The same goes for TV shows and longer-format stuff like movies.
But because video is a content-type, it's escaping the strictures of its dominant media. So goodbye, broadcast, with your programme lengths and scheduling issues; hello - and I paraphrase the BBC here - the "amazing creative opportunities" that just getting rid of those two constraints promises. With digitisation, video as a content-type can go anywhere now. For the people who make good video, that's great news: more demand, more opportunity for distribution. The work my team and I did on building Sony's brand site couldn't have been done without the expertise built from an understanding of animation, CGI and filmic storytelling. Sure, the end result wasn't film (and I'd argue that it wouldn't have been the same to just produce a film, either), but it couldn't have been made without that knowledge. Adam Lisagor does a good line in video storytelling to communicate the benefits of apps, regardless of what he thinks of our effort for Facebook's Paper. Now that we have (more or less) the right kind of distribution system, genuine multimedia packages like the New York Times' Snow Fall are making use of video in a way that news organisations wouldn't have considered before in order to deliver compelling stories. And, of course, all of this is alongside the promise of companies like Condition One, which have been developing live 360-degree video for delivery on the iPad and are now looking at platforms like the well-funded Oculus Rift.
So the need for more video is great for people who're good at producing it. The people who need good video, though: their budgets are being squeezed, because everyone's budgets are being squeezed. And the thing about video is that, just like with the amateurisation of everything else on the internet, you can get quite far if you've got lots of time and no money. Brands, instead, have a frequently misplaced sense of urgency and not enough money, exacerbated by being able to point at what people with no money and lots of time are able to do and wondering why they can't have nice things.
But that's not the biggest thing about video, I think. I think it's this:
I'm not quite sure what Google's doing with all the video that's being uploaded to YouTube, but I hope they're working on a way to understand it. It's a hard problem, because you're essentially talking about speech recognition plus a pretty good degree of semantic understanding. Or, you know, hard AI. It seems, to a geek, untidy to have to rely on human-supplied metadata - explicit descriptions and tagging - when you have so much unstructured data. And I'm not entirely sure how you go about implementing some sort of human-understanding checksum: it's one thing to have OCR candidate text run through reCAPTCHA to help train OCR algorithms to mine data out of books, but given that most of the video (I'm guessing, here) is user-generated rather than the kind that comes with broadcast closed captioning, and given that there's *already* too much video to watch, I don't know who's going to check all those algorithmically generated transcripts against what's being said. Maybe the algorithmically generated transcripts will be good enough, though.
So here's a thing: at some point, Google will turn on "good enough" machine transcription and suddenly every YouTube video will have a searchable transcript. At some point, Google will turn on "good enough" entity recognition and its neural nets will get smart enough so that they can recognise individuals and not just cats, across up to 60 frames per second at 1080p. And then what? No one was watching most of that video in the first place, but there are always extra corners of information you can monetise.
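To make that concrete, here's a toy sketch of what a "searchable transcript" buys you - nothing to do with how Google actually does any of this, just an illustration that assumes a speech-to-text pass has already produced timestamped segments. All the names and data below are made up:

    from collections import defaultdict

    # Hypothetical output of a speech-to-text pass over two videos:
    # (video_id, timestamp_in_seconds, recognised_text).
    segments = [
        ("vid_001", 12.0, "the world is full of power and energy"),
        ("vid_001", 47.5, "skimming off a tiny bit of it"),
        ("vid_002", 3.0, "a cat doing something adorable"),
    ]

    # Inverted index: word -> set of (video_id, timestamp) occurrences.
    index = defaultdict(set)
    for video_id, ts, text in segments:
        for word in text.lower().split():
            index[word].add((video_id, ts))

    def search(query):
        """Return (video_id, timestamp) hits containing every query word."""
        hits = None
        for word in query.lower().split():
            matches = index.get(word, set())
            hits = matches if hits is None else hits & matches
        return sorted(hits or [])

    print(search("power"))   # [('vid_001', 12.0)] - jump straight to 0:12

Once the audio is text, a query can drop you at the exact second of the exact video; the rest is ordinary information retrieval, just done at terrifying scale.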
That's the thing about video for me: it's a black hole. There's no Douglas Adams-esque VCR watching all the realtime YouTube feeds and discerning knowledge from them. At least the textual documents getting sucked into Google are being subsumed in some way, with knowledge and understanding mined out of them. Video is incredibly far behind, I feel.
3.0 You Can Blame The Tools
Bret Victor has done a fantastic talk[1] that you really should check out. It's called Inventing on Principle and though it's been out for a while, even if you have seen it, I think it bears re-watching. Although it's an hour long, the real meat of it is in the first thirty minutes.
Over the last three years I've been doing a bunch of thinking about what it means to be able to produce great creative work - and I'm using the word creative loosely here. Victor's got a good general principle: to produce good creative work, especially novel work, the gap between you and your work needs to be as small as possible. Think woodworking and sanding, or planing: there's a tactile dimension to the work being done, and feedback is instantaneous and visceral. In physical media it's easy to achieve this sort of low-latency, high-bandwidth sensory feedback because, well, we're embodied in a physical universe (and if you disagree with that, then we can have a separate conversation about Nick Bostrom).
So: pencil and paper is good. Draw. Don't like it? Rub it out, make a new one.
Typewriter. Word processor.
Non-linear editing in film? Also good. Think the piece might play better with this scene over there? Move it and watch it. Put it back if it didn't work. Cut it here, cut it there: over the last three years I've had a pretty much unparalleled education in what it takes to make great TV ads, and let's just say that it nearly always involves sitting in an edit suite and trying things out to see what they're like.
Photoshop... is interesting. Watching screencasts of people who know what they're doing with Photoshop is a bit like watching Yehudi Menuhin play the violin. Photoshop is an instrument and these people know how to wield it.
Victor's point is that when it comes to code, our tools are nigh-on useless. Adapted from another era, when we didn't even have interactive screens, even the best tools we have - Microsoft's IntelliSense code completion, say - are predicated upon code-as-text. Not code-as-output.
Interactive work, say, building a web application that shows you running routes on your smartphone or a branded iPhone app that lets you give a Coke to someone on the other side of the world - my examples are necessarily drawn from the brand-driven world I've been living in - is a complete fucking nightmare.
At its worst, that kind of interactive work goes like this:
Someone comes up with an idea. That person explains that idea to another person. If they’re being masochistic, the idea will be explained without drawing anything: as a paragraph in a Word document, for example (and it will most likely be Word, not Google Docs). Now the second person sits down in front of a development environment and, in her head, pretends to be a computer. When she’s pretending to be a computer, she programs herself to work out how she would tell the computer what to do. Then she writes down, in something that ultimately is just another version of Microsoft Word, a bunch of instructions that may or may not do what she has guessed she needs to tell the real computer to do.
Then she tells the computer to run what she wrote. And swears and then tries again. After a while, she gets the computer to do what she thinks she needs it to do. This might have taken days or weeks. Then she shows it to the first person. This is all wrong, the first person says. This isn’t my idea at all. Our developer now commits ritual suicide.
This is a familiar refrain from the agency world. Agencies like mine pride themselves on being able to do two things: coming up with a strategic framework that leads to the right kind of idea, and then coming up with the right, breakthrough idea. Ideas trump execution, in the world of advertising.
I come from the world of code, where execution trumps the idea. And thus our inevitable clash.
Now, it's obviously not a black-and-white divide. Sometimes a so-so idea can have an unbelievable execution and you can still get something magic. I'd put the work that our agency did for P&G and their Winter Olympics campaign in that category: a stunning insight - that all Olympians have mothers - is paired with a script that isn't groundbreaking (mothers supporting their Olympian children as they grow up, cut to reaction shots as they compete and win or lose) but absolutely wins in terms of pixel-perfect execution. Music, editing, composition, casting, strategy: all work together to create something good.
In the environment of an agency, in the majority of cases, those entrusted with answering a brief with interactive work don't actually know how to craft that work. And it's hard enough to communicate ideas as it is, without having, essentially, the blind leading the blind.
This is why, naturally, there's a well-meaning drive to prototype and iterate. As soon as a germ of an idea exists, prototype that idea, bring it to life. But then you look at the other side of the coin, which is this: where are the prototyping tools? Facebook ended up building their own on top of Quartz Composer[2] - and having seen it in action with Paper and Facebook Home, never mind what you might think about the actual products, I can attest that their development would have taken significantly longer without that shortened iteration loop.
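For a sense of how little it takes to start shortening that loop - and this is a deliberately crude sketch, nothing like what Quartz Composer or Origami actually do - here's the most naive possible live-reloader: watch a file, and re-run it the moment it's saved. The file name is hypothetical:

    import os
    import subprocess
    import sys
    import time

    SKETCH = "sketch.py"   # hypothetical file a designer is iterating on

    def watch(path, interval=0.2):
        """Re-run `path` every time it's saved. Crude, but the loop is short."""
        last_mtime = None
        while True:
            mtime = os.path.getmtime(path)   # assumes the file exists
            if mtime != last_mtime:
                last_mtime = mtime
                # The result of an edit appears seconds after hitting save.
                subprocess.run([sys.executable, path])
            time.sleep(interval)

    if __name__ == "__main__":
        watch(SKETCH)

Browser live-reload, Origami's live patching and Victor's own demos are all elaborations of that same move: collapse the time between changing the work and seeing the work.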
I keep coming back to Pixar, and I don't mean to, but it's just that they're so frustratingly good. They're a storytelling organisation that, in part because of its roots in computer science and the fundamental understanding that computers can help you do things, has a team dedicated to pre-production tools. Imagine that: a team devoted to building software that helps you, as an organisation, make better creative work. When Ed Catmull says that all of Pixar's movies start out sucking and that it's their job to make them not-suck, why does it feel abnormal that they're one of the rare companies to see an opportunity for software to help with that? Is it because they're a creative company, and computers and software are soulless entities, sucking out the last bastion of human distinctiveness? Because bollocks to that.
Victor's charge is that we need better tools. It's easy to see what might happen when software eats the creative industry in terms of metrics and analytics. But software should, and will, eat the world of creative concepting too. Anything that helps us iterate our ideas more quickly, and share them with more clarity, is a good thing. Who's building this kind of software? Where's the continuous integration for scriptwriting? Sure, ideas need time to settle, and nurturing. But they also need to be kicked and prodded and then, when they're right, polished and polished and polished. Not for the last time am I left wondering whether Word is really the best tool for that job.
[1] https://vimeo.com/36579366
[2] http://facebook.github.io/origami/
--
Best,
Dan