Episode One Hundred and Fifty Five: "Web" "Content" "Strategy"; "Probably Not"; Firewall Earth; To Clean The House
0.0 Station Ident
2:13pm after a nice lunch and stimulating conversation. Stupid jokes on Twitter about the new Dyson 360 Eye robot vacuum, the teaser video for which basically screams: hey, did you see Robocop? Wasn't it awesome when you had that point-of-view shot from the robot/Murphy and all the technicians futzing with him? And then they booted him up? Yeah, let's do that because THAT'S TOTALLY NOT A TIRED REFERENCE or something that paints you into a certain corner of culture. But hey, you're Dyson. Maybe someone else can make a vacuum cleaner for the rest of us who're not quite so into biting social satire and ultraviolence.
DC, still. Hotel room. Wrote a bunch of notes for tomorrow's talk, some restructuring to do, probably later tonight as well as frantic image searching. Maciej Ceglowski is on tonight, so it's not like there's a hard act or anything to follow. Jesus Christ. Anyway, on with the show.
1.0 "Web" "Content" "Strategy"
I've never seen Karen McGrane[1] talk before, but she opened today's conference with a session on content strategy, formally titled "Content in a Zombie Apocalypse", which was more-or-less this talk[2] on Slideshare.
McGrane made good points, but I felt like there was something deeper that didn't quite come out. The general gist, for those following along at home, is that people involved in "web content" and "web publishing" are having an increasingly difficult time because of all the different places into which the web (or: internet) has extended its tendrils. So "web content" can increasingly appear on unused internet fridges, in-car entertainment systems, mobile phones, digital signage and so on. McGrane makes a good case that we're living under the tyranny of paper - reminding us that it was Xerox that invented What You See Is What You Get, and that one of the reasons they invented it is that they'd just invented the laser printer, which needed an excuse to exist. And then, blink and you'll miss it, and you end up with Microsoft Word, with Adobe Photoshop still requiring canvas dimensions for new documents, with badly made school newsletters and PDFs.
McGrane's trying to get us to understand something important here when she talks about separating content from presentation. Mobile devices and fridges are all reminders that the "content" we put on the internet can increasingly be consumed or interacted with (or whatever) through a variety of mechanisms, some of which might not even involve screens (at which those who've been dealing with assistive devices breathe a big sigh of where-the-fuck-have-you-been).
I'd even go so far as to say that what we actually want to do is separate *meaning* from presentation. Clients and organisations want potentially-unicorn systems that can take "content" and deploy it, in the right way, in whatever place: as digital signage around a campus, as a push message delivered to mobile phones, as a set of notices in whatever learning management system they use.
But the tyranny of the page is pretty hard to get over, and the web hasn't really helped with that.
I'd like to go a bit further though and say that the web has unhelpfully confused those of us in the land of content strategy. Because the web is - at the very least - two things: a protocol for the transport of information (the Hypertext Transfer Protocol part of the web) and a set of standards about how you define and display that information (the Hypertext Markup Language part), before you even get into Web 2.0 things like runtimes and server/client-side processing and scripting.
My point here is that the *internet* is the real transport mechanism. The web as experienced by lay-people is just another display format mashed together *with* a transport mechanism, one that has been, over the last twenty to twenty-five years, predominantly associated with screens.
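To make the split concrete, here's a minimal sketch in Python (standard library only; the "notice" resource and its handler are made up for illustration, not anyone's real API): the same meaning travels over the same transport, and the representation - bare data, or markup pre-baked for a screen - is negotiated per client.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # A hypothetical "atom" of meaning. Nothing about screens lives in it.
    NOTICE = {"title": "Library hours change",
              "body": "The main library closes at 6pm on Friday."}

    class NoticeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # HTTP is the transport either way; the Accept header decides
            # which representation of the same meaning goes back.
            if "application/json" in self.headers.get("Accept", ""):
                body = json.dumps(NOTICE).encode()  # just the meaning
                ctype = "application/json"
            else:
                # the "web as lay-people know it" case: meaning baked
                # into a display format
                body = (f"<h1>{NOTICE['title']}</h1>"
                        f"<p>{NOTICE['body']}</p>").encode()
                ctype = "text/html"
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), NoticeHandler).serve_forever()

Ask for the page and you get the page; ask with Accept: application/json and you get the atom, over exactly the same wire.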
But the way McGrane (rightly) wants us to think is that we have atoms of *meaning* that can be transported over the internet into wherever they may be: printed onto toast, 3D printed onto tissue-scaffolds, projected by laser onto the moon, released by water droplets timed to the millisecond or, even, printed out onto paper.
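And here's the McGrane-ish version of that sketch - again hypothetical, again Python - where the atom of meaning is just structured data, and each destination, screen or not, gets its own presentation function:

    # One atom of meaning, many presentations.
    notice = {
        "title": "Library hours change",
        "body": "The main library closes at 6pm on Friday.",
    }

    def to_html(atom):
        """Presentation for the web-with-a-screen case."""
        return f"<article><h1>{atom['title']}</h1><p>{atom['body']}</p></article>"

    def to_push(atom):
        """Presentation for a lock-screen notification: short, no markup."""
        return f"{atom['title']}: {atom['body']}"[:80]

    def to_speech(atom):
        """Presentation for a screen reader or speech synthesiser."""
        return f"Notice. {atom['title']}. {atom['body']}"

    for render in (to_html, to_push, to_speech):
        print(render(notice))

(Laser-onto-the-moon and toast renderers left as an exercise.)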
In other words, content strategy for "places where there is electricity", or content strategy "for the internet", helps move the mindset away from content strategy "for the web", where "web" is inherently taken by lay-people to mean "something with a screen".
Perhaps helping people think about content that way, rather than as inherently screen-based, will help move us away from blobs that weld meaning to display: unfindable, unsearchable, unparseable.
[1] http://karenmcgrane.com and @karenmcgrane
[2] Content in a Zombie Apocalypse - Karen McGrane on Slideshare
2.0 "Probably Not"
I was doing that thing where I actually read a long-read on Medium: this particular one was about the abhorrent prosecution of scientists in Italy for a supposed failure to communicate the risks of an earthquake[1]. It's a compelling read, but one of the things that stuck out for me was somewhat orthogonal to the actual point of the article (putting science on trial): how humans deal with probabilities and risk.
I mean, we know that we're not good at judging probabilities and dealing with risk. You didn't know that? You should (ha) know that. You should probably start with a Wikipedia grounding[2] that covers some of the cognitive biases and black holes we have in terms of risk perception.
Anyway. I'm just going to wholesale quote the interesting bit, but you should go away and read the entire article, too.
In the winter of 1951, a group of CIA analysts filed report NIE 29–51. Its aim: to examine whether the Soviets would invade Yugoslavia. And the bottom line? “Although it is impossible to determine which course the Kremlin is likely to adopt, we believe… that an attack on Yugoslavia in 1951 should be considered a serious possibility.” Once finalized, the report made its way into the bureaucratic machine.
A few days later, a State Department official met up with the intelligence whiz whose team had composed the report. What did serious possibility mean? The CIA man, Sherman Kent, said he thought maybe there was a 65 percent chance of an invasion. But the question itself troubled him. He knew what serious possibility meant to him, but it clearly meant different things to different people. He decided to survey his colleagues.
The result was shocking. Some thought it meant there was an 80 percent chance of invasion; others interpreted the possibility as low as 20 percent.
Years later, Kent published an article in Studies in Intelligence that used the Yugoslavia report to illustrate the problem of ambiguity, particularly when talking about uncertainty. He even proposed a standardized approach to the language used for risk analysis — “probable” to indicate 75 percent confidence, give or take about 12 percent, “probably not” for 30 percent confidence, give or take about 10 percent, and so on.
- The Aftershocks, David Wolman
which just kind of blew my mind. Not only do we have massive holes in our cognitive architecture that are essentially probability-based backdoors into rooting our behaviour, but now we don't even know how to talk about them! It's some kind of deliciously evil double-jeopardy situation where:
a) we don't understand and can't grasp probabilities without engaging our slow brains, or what Kahneman calls System 2, the logical and non-intuitive aspect of our intellect; and
b) even when we do, we can't reliably communicate them!
It looks like the follow-up to Kent's findings was a more pragmatic, descriptive approach: instead of defining new expressions for quantified probabilities, look at what the majority of people already understand by certain expressions of probability, and standardise on *those* definitions when you have a quantified probability to communicate. You can read more in the CIA's unclassified document Definition of Some Estimative Expressions[3], which is a pretty good guide to what some people, at least, think when you say "probably".
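For fun, Kent's scheme is small enough to sketch in code. The "probable" and "probably not" numbers below are the ones from the quote above; the other rows are placeholders I've invented to round out the table, not Kent's actual figures.

    # Toy mapping from estimative language to numeric probability ranges.
    # "probable" and "probably not" come from Kent via the quote above;
    # the other rows are invented placeholders.
    ESTIMATIVE = {
        "almost certain":       (0.87, 1.00),  # placeholder
        "probable":             (0.63, 0.87),  # 75%, give or take 12
        "chances about even":   (0.40, 0.60),  # placeholder
        "probably not":         (0.20, 0.40),  # 30%, give or take 10
        "almost certainly not": (0.00, 0.13),  # placeholder
    }

    def expression_for(p):
        """Return the first phrase whose range covers probability p."""
        for phrase, (lo, hi) in ESTIMATIVE.items():
            if lo <= p <= hi:
                return phrase
        return "no standard phrase"

    print(expression_for(0.65))  # -> "probable"
    print(expression_for(0.30))  # -> "probably not"

The point being: with the words standardised, a 65 percent estimate stops meaning anywhere between 20 and 80 depending on who's reading.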
[1] The Aftershocks by David Wolman on Matter / Medium
[2] Risk Perception - Wikipedia
[3] Definition of Some Estimative Expressions - CIA
3.0 Firewall Earth
It's a standard SF trope - invasion/co-option of our planet and species by way of an infovirus or a meme or whatever. So the idea of creating a planetary firewall[1] - physical or informational - is interesting. Imagine that: national missile defence, or Star Wars - for the entire planet. A Dyson Sphere, not for capturing the total energy output of our sun, but because we're scared of what's out there. The Red Scare, but not from just one country, on our doorstep, but the *entire universe* as a possible threat, ready to infect us just by us being in the way of stray EM radiation. Celestial spheres not to explain the movement of the stars, but to protect us from them.
But then, how would you implement the software version? Do you end up with the whole problem of needing to emulate a human - or consciousness, or ten-odd billion of them - in order to decide whether to let the packets through? Do you nominate a demilitarized zone, a sort of safe human colony out on Europa where humans are free to accept-all packets from the rest of the universe, and watch them from a distance with a sharp stick? Would you have volunteers? (Probably!)
And anyway, what sort of material do you train a Bayesian filter to work on for a universal firewall? Hey, here's a list of previous Outside Context Problems, just make sure no more get through, ok?
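For what it's worth, the mechanics of a Bayesian filter are mostly just counting. Here's a toy naive Bayes sketch in Python, with entirely invented training corpora - which is rather the point: the filter can only learn from the hostile messages you've already survived, and an Outside Context Problem is by definition the one you haven't.

    import math
    from collections import Counter

    # Invented corpora standing in for "messages we already classified".
    hostile = ["surrender your biosphere now",
               "free energy just open the attachment"]
    benign = ["pulsar timing residuals attached",
              "hydrogen line survey data"]

    def token_counts(docs):
        """Count word occurrences across a list of documents."""
        return Counter(tok for doc in docs for tok in doc.split())

    H, B = token_counts(hostile), token_counts(benign)
    VOCAB = set(H) | set(B)

    def log_likelihood(counts, message):
        """log P(message | class), with add-one smoothing for unseen words."""
        total = sum(counts.values()) + len(VOCAB)
        return sum(math.log((counts[tok] + 1) / total)
                   for tok in message.split())

    def looks_hostile(message, prior=0.5):
        """Compare posterior log-odds of the two classes."""
        h = log_likelihood(H, message) + math.log(prior)
        b = log_likelihood(B, message) + math.log(1 - prior)
        return h > b

    print(looks_hostile("open the attachment for free energy"))  # True
    print(looks_hostile("pulsar survey data"))                   # False

Which works fine right up until the universe sends something that doesn't look like anything in the training set.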
It doesn't feel *that* inconceivable, though. Fear motivates so much in the wake of a threat. London's Ring of Steel, so many years ago. NMD. The TSA and wholesale terahertz screening of air passengers. But, as I pointed out, we can't assess risk correctly. So a species-terminating risk like infection via alien infovirus? Filed under low risk. Asteroid? Low risk. Shoe-bomber? OMG shut all the borders.
[1] Can You Ever Really Know An Extra-Terrestrial? - Caleb Scharf, Nautilus
4.0 To Clean The House
The Dyson 360 Eye Robot[1] - a product that wants you to know that it can see everywhere and is a robot, but not necessarily that it's a vacuum cleaner. One that riffs off science fiction tropes in its teaser video and recalls Robocop[2], with prime directives and interlaced video and all: every trope you could reasonably think of, thrown into the can for the marketing mix. A launch website that recalls the Mac Pro's, with hijacked scrolling (of which I'm guilty, too) and rendered CGI showing cutaways of highly advanced technology. Tank treads, for operating in hostile tactical environments. That blue LED because, well, what other colour should an LED on consumer technology be?
An alternate Patrick Farley Spiders-esque future where the British Government throws just a few million pounds toward Dyson, orders several tens of thousands of robots and uses them to police the Ukraine/Russia border. Another one where Amazon counters with drone cleaning robots - the ideal combination of automation and cheap human labour, robots that can clean your house, humans who guide them to make sure they get that bit under the table that your cleaners always miss, ones that use the always-on camera to identify products in your house and email you offers for substitutes or click-n-save subscribe deals.
Robots. Everywhere and networked.
[1] http://www.dyson360eye.com
[2] Dyson project N223: what new technology is ready for launch? - Official Dyson Video
--
4:50pm. Maciej Ceglowski on in about 25 minutes. My talk notes in a TextWrangler scratchpad, looking forward to doing those image searches and captions later tonight. The usual pre-talk nerves. It'll all be OK. Just send me your notes.
Best,
Dan