Episode One Hundred and Twenty One: Dark Value; The Latest Attempt; Five Eyes

by danhon

0.0 Station Ident

3B to Salt Lake City, a tiny bitstream hanging off BLE to LTE connecting this laptop to the ‘net while I scribble in yet another TEXTAREA.

Watching: Captain America: The Winter Soldier for the nth time, thinking about that Marvel Cinematic Universe freemium mobile MMO that’s sitting around in my head – in other words, imagine what Google’s Ingress could’ve been with the right story…

1.0 Dark Value

I mentioned in yesterday’s episode that I’d been reading an article in The Atlantic about The Invisible Economy[1]. As I see it, the central thesis of the piece is our inability to value the increasing quality of life that technological progress gains us, mainly because all of the stuff that we’re getting in the virtual, internet-delivered-or-mediated economy is paid for through attention or data, and not through the cold hard stuff that we know of as money, like in the physical world.

The idea is that digital services are saving people money. The collapse of the newspaper industry means that “news” is now a substitutable good, with a free version available – i.e. news.google.com, or any online news source without a paywall. Boom – you’ve just saved yourself the cost of a USA Today subscription. The distinction to me is that with the advent of free-to-access online news sources, an individual is “earning” more even though their take-home pay is the same and GDP isn’t necessarily affected. But from the individual’s point of view, money saved isn’t necessarily money earned, and there’s an argument to be made that the original service has been devalued in the first place. Now that news is free to consume and access, that $275-per-annum USA Today subscription is harder to justify. But in its place, of course, comes the $160 internet access fee.

It all feels a bit house-of-cards. Supposedly, we are consuming services worth billions (the way we’re valuing them is in part due to their value to advertisers, which is where the house-of-cards comes in) – but these are all new services that are, in some dark way, generating value and revenue, if only by moving bits of virtual paper around, that aren’t necessarily reflected in income and GDP.

Does this make us richer? The article talks about the average middle-class family spending about $2,500 a year on entertainment and publications, but apparently you could opt to “entertain yourself using free movies, YouTube videos, [use] streaming services like Pandora and [get news] from Google and save half that expenditure.” If, of course, you believe that a summer blockbuster is substitutable by a YouTube video (sometimes it is, sometimes it isn’t), and if you’re OK with using other free services that may or may not carry what you want. Of course, the same argument holds true for pre-digital technologies, regardless of the convenience factor: you could always go to the library.

Towards the end of the article it feels like we get to the crux of the point. The virtual economy doesn’t affect everyone equally. The lower your income, the greater the proportion of your take-home pay you spend on essentials like housing, heating, transport, food and healthcare. Each and every one of those costs has been rising, whilst cream-on-top services have been eaten from within by advertising-supported alternatives. Zipcar may save money for the squeezed middle class, but it certainly doesn’t do anything for those in poverty, especially when public transportation services are being gutted.

The worry, of course, is that measurement of this dark or virtual economy will lead to over-indexing on it (as if there isn’t enough attention paid to it already). Look, I’m not an economist. But the types of goods and services that are now becoming free at the point of consumption (and, I’d argue, that aren’t necessarily substitutable within their class – for example, when you really, really want to read the new Charlie Stross, it doesn’t matter that Project Gutenberg offers free classics) are ones that instinctively feel like the first to be cut anyway.

Talk of technology improving productivity and quality of life made sense when you could get refrigerators and vacuum cleaners, but I suspect the real tipping point comes when technology attacks (or is even able to attack) more bottom-of-the-pyramid needs. Cheaper energy, more affordable transport, better housing – these all feel like things that will drastically improve quality of life and should be measured, rather than whether we all have access to free streaming music.

[1] The Invisible Economy – The Atlantic

2.0 The Latest Attempt

A reader pointed me toward Yet Another Ad Agency – Mindshare, this time – launching a pseudo-product unit in an amusing press-release of an article over at AdAge[1]. A five-person team – led by the agency’s MD of mobile and staffed by two people in London and another two in New York – is going to partner with MapMyFitness, the tech team acquired by Under Armour[2].

AdAge’s article talks about the unit experimenting with “fitness trackers and sensors, smartwatches and augmented reality devices, like Google Glass” – at which point you start wondering, okay, what would a smart media agency do in that area? (First, though, you should understand Mindshare’s role as a media agency: they buy media from properties and offer some sort of targeting capability. They then partner with the creative agencies, and if you’re lucky, you get a good combination of smart media placement and creative that works to hook your audience’s attention. In the old, pre-digital days, media agencies would be the ones you’d go to to work out where your TV ads should run, or where your outdoor billboards should go, or what newspapers you should be in.)

It’s interesting because the fixation is upon the devices, and not the ecosystem that the devices are embedded in. It’s not clear whether it’s a slip-up by Mark Bergen, the journalist who produced the piece, or by Mindshare’s Chief Digital Officer, Norm Johnston, but take this para:

“[Chief Digital Officer Norm Johnston] stressed the devices would require “more native content and storytelling” than traditional advertising. “They’re quite intimate and the data shared is very sensitive,” he said. “It’s not about just slapping up ads.””

I’d submit that the devices require anything *but* native content and storytelling. At a push, the “augmented reality devices, like Google Glass” might, but native content in the advertising sense typically means something that looks like it fits in alongside whatever else is on the platform (i.e. sponsored Buzzfeed articles, or that time The Atlantic went all Scientologist).

I’m glad Norm Johnston understands that “it’s not about just slapping up ads” – congratulations, Norm! You’re exhibiting a sensitivity to the audience to which your product is going to be exposed! – but part of the deal is that no-one, *no-one* has figured out what the point of these devices is, never mind advertisers. I mean, good job for realising that the devices are intimate – perhaps that will help with understanding that the notification threshold for something tiny that you wear on your wrist is not that great.

This is less about native content or storytelling (brand storytelling on my wrist is totally something I’ve been looking forward to for the last five years, and I’m glad that we’re finally getting the chance to experience engaging moments with brands on a wrist-basis in the imminent future) and more about what utility these devices actually offer.

The impulse here seems to have been something along the lines of “more things have screens now! How can we show our clients that we have a wearable strategy?”, the corollary being that if something can display *anything*, then someone’s going to want to put advertising on it, whether or not it’s a good idea or even likely to be effective.

Which is why so much of this is about the ecosystem around the devices (if you want to be successful, right?) – which is to say, what sort of messaging makes sense in the experiences and usage patterns *enabled* by the devices, rather than on the devices themselves. Are there opportunities for tactical messaging inside MapMyFitness? Probably. Are there opportunities for a company like Mindshare to use activity data to enable better targeting, and what are the opportunities for consumers to opt out of that data? And: just when did you want ads on your smartwatch in the first place? Sorry, I mean: what’s native content for a smartwatch? (Clue: that previous sentence contains two phrases that should be taken out and summarily shot.)

[1] Mindshare to Launch Wearable Tech Unit

[2] Under Armour to Acquire MapMyFitness, One of the World’s Largest Open Fitness Tracking Platforms

3.0 Five Eyes

See: the Five Eyes.

3.1 Eye One

So it turns out that, in America at least, persistent airborne surveillance is something that still squicks people out[1]. In Britain, at least, there’s a qualitative difference between ubiquitous ground-based surveillance and, well, ubiquitous airborne surveillance. I wonder if it’s a predator/prey thing: being watched from above by a small thing that’s circling over you, versus eyes on sticks. I also wonder if there’s some sort of atavistic difference between being watched by a multitude of eyes on poles versus one ever-circling eye.

[1] The airborne panopticon: How plane-mounted cameras watch entire cities

3.2 Eye Two

This is the part of today’s episode that makes me feel more like NTK[1] than anything I’ve ever written before.

So there’s a really important thing going on. The three main political parties have agreed, amongst themselves, to support an amendment to existing legislation that will force telecommunications providers (internet service providers and phone networks) to carry out blanket retention of records of phone calls, texts, and internet browsing history.

The UK government has graciously published explanatory notes[1] – it’s worth taking a look through them, but ultimately, this is the deal: legislation made in haste is frequently bad legislation. Legislation that hasn’t been accorded proper parliamentary scrutiny is bad legislation. Legislation that hasn’t provided for consultation and comment from interested parties and civil discussion is frequently bad legislation.

If you’re a voting British Citizen, then please let your MP know how you feel[2], because we don’t have much time.


[2] Open Rights Group Campaign – Stop The Data Retention Stitch Up

3.3 Eye Three

Here’s a free idea to use the ~1 megapixel eye embedded in most desktops and laptops these days for something somewhat more ambient and less creepy than the whims of a school district administration[1] or just chatting to your mates on whatever passes for ChatRoulette these days.

If we’re to believe Linda Stone, it’s pretty common for us to hold our breath when we’re stressed out, anxious and sitting at our desks – say, with a particularly vexing email open in Outlook or Gmail. Stone coined the term “email apnea” for this back in 2007[2], and I remember her talking about it at Foo Camp and getting super excited – it occurred to me back then that you could use some kind of blood-pulse oximeter to measure the change in breathing rate through a contact surface (say, a touchpad or a mouse, whilst your skin was resting on it) and produce some sort of gentle reminder or ambient awareness that you were holding your breath. In a way, you don’t even need the technology – you just need to inculcate a habit of checking whether you’re breathing or not.

(In fact, you’re reading email right now. So try and pay attention to how you’re breathing whilst you’re reading this and the next few emails in your inbox).

Of course, we don’t need that now. We have Eulerian Video Magnification[3], which was one of those papers from SIGGRAPH 2012 that felt like yet another piece of future software falling through the time hole of the SIGGRAPH YouTube video trailer – a way of processing live video input using the frankly astonishing processing power we have *literally* lying around to determine things like pulse and, you guessed it, breathing rate, just by using an off-the-shelf webcam.
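The core of the trick – isolate a narrow temporal frequency band in the video signal and look for a peak at plausible respiration rates – fits in a few lines. Here’s a toy sketch, not MIT’s actual algorithm (real Eulerian Video Magnification bandpasses every level of a spatial Laplacian pyramid, per pixel); it just inspects one brightness signal, synthesised here rather than grabbed from a webcam:

```python
import numpy as np

def dominant_breathing_rate(signal, fps, low_hz=0.1, high_hz=0.5):
    """Estimate breaths/minute from a 1-D brightness time series by
    finding the strongest frequency in the typical respiration band.
    Toy version only: real EVM filters a spatial pyramid, not one
    averaged signal."""
    signal = signal - np.mean(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return freqs[band][np.argmax(spectrum[band])] * 60.0

# Synthetic "webcam" signal: 0.25 Hz chest motion (15 breaths/min)
# buried in noise, sampled at 30 fps for a minute.
rng = np.random.default_rng(0)
fps = 30
t = np.arange(fps * 60) / fps
brightness = (0.5 * np.sin(2 * np.pi * 0.25 * t)
              + 0.05 * rng.standard_normal(t.size))
print(dominant_breathing_rate(brightness, fps))  # 15.0
```

In the real thing, the synthetic signal would be replaced by the mean brightness of a chest or face region across successive webcam frames, and you’d nudge the user when the respiration peak disappears.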

So there you have it: a webcam that watches you breathe, and reminds you to, as it were, take a stress pill. Or at least get up and walk. Bonus: MIT obviously open-sourced the algorithm, so first one to the Mac App Store wins a few of my dollars and the gratitude that you had the wherewithal and attention span to actually build the thing. Go on, you can probably do it in Swift, too.

(There’s an aside here which is we seem to be building an ever-increasing technology stack to solve problems that, maybe – just maybe – don’t need technology to solve them. I mean, it would probably help if you stood up, instead of sat down. Or if you were in an environment that was a little bit less stressful. In other words, this feels like yet another example of technology allowing us a shiny bauble to attack a problem sideways, nibbling at it, rather than effecting real change at the root cause. But hey, that’s humans, I guess.)

[1] School District Pays $610,000 to Settle Webcam Spying Lawsuits

[2] Diagnosis: Email Apnea

[3] Eulerian Video Magnification

3.4 Eye Four

My son went through a phase of shrieking (with joy, I hasten to add) as he learned to walk, exploring the corridors and rooms of our houses with hands and arms held up to balance himself, yelping with surprise as he would amble around. I would congratulate him on attempting to use echolocation but that he didn’t need to – his eye exam had come back with flying colours (we’ll see how long that lasts) and whilst Batman was indeed pretty cool, his parents were still very much alive and he didn’t need any further Bat affectations.

There’s been talk of different ways of seeing – different ways of experiencing the world. Seeing like a satellite, seeing like a GoPro, seeing Through Glass, as it were. There’s an issue in the ownership of eyes – in whether we have communal eyes, for example. Dave Eggers’ The Circle[1] rests in part upon the protagonist’s Valley company developing super-cheap lollipop Dropcams[2] that stream HD-quality video without care for wireless data rates.

So, parallel to this, I’m interested in how a street sees. How the twitchy-window obsessiveness of small neighbourhood watch communities deals with ubiquitous, cheap surveillance. You can imagine a community where, instead of paying for extra police patrols, some enterprising soul says “well, we could all chip in and buy a Dropcam each for our property and blanket the area in motion-tracked video surveillance…” and makes it available to everyone in the neighbourhood. A sort of opt-in panopticon.

These are eyes that don’t know what they’re seeing; that see movement in pre-defined areas; where video is pushed upstream, recorded, imaged, contrast-enhanced, edge-detected, movement-tracked, and then notifications auto-pushed to wherever. But these could be eyes that we choose to put in places ourselves, not infrastructure that is placed in areas for us.
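That chain – difference successive frames inside a pre-defined zone, threshold the change, push a notification – is simple enough that the detection step itself is a handful of lines. A minimal sketch, assuming frames are already decoded to 8-bit greyscale arrays (the function, the zone and the thresholds are all made up for illustration):

```python
import numpy as np

def motion_detected(prev_frame, frame, zone, threshold=25, min_pixels=50):
    """Frame-differencing motion check over a pre-defined zone.
    zone is (y0, y1, x0, x1); frames are 8-bit greyscale arrays.
    "Motion" = enough pixels changed brightness by more than
    `threshold` between the two frames."""
    y0, y1, x0, x1 = zone
    diff = np.abs(frame[y0:y1, x0:x1].astype(int)
                  - prev_frame[y0:y1, x0:x1].astype(int))
    return int(np.count_nonzero(diff > threshold)) >= min_pixels

# Two synthetic 120x160 frames: a bright 20x20 "intruder" appears
# inside the watched zone in the second frame.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 50:70] = 200

zone = (30, 90, 40, 120)  # the driveway, say
if motion_detected(prev, curr, zone):
    print("notify: movement in watched zone")
```

Everything upstream of this – contrast enhancement, edge detection, where the notification actually goes – layers on top of the same basic comparison.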

[1] The Circle, by Dave Eggers

[2] Dropcam, now a Nest subsidiary

3.5 Eye Five

So I’m still reading Pattern Recognition and there are things that stick out at me with the benefit of ten years of hindsight. Gibson undoubtedly wrote a book in the now, and the 2003 now that it was rooted in was a post-9/11 era that knew catastrophic, abrupt change, but also a pre-pseudo-ubiquitous-internet era, before the Cambrian explosion of Valley-based software’s consumption of the economic patterns of the world. That 2003 knew that the future was volatile, that the only thing it could rely on was change, and, the way Bigend puts it, the only thing to do about a constant threat of change is risk mitigation. In Bigend’s world of 2003, he’s afraid of the future and won’t face it. His clients are facing inevitable decline, and what he’s doing is trying to smooth out the ride.

Compare that to the boutique-yet-internationally-powerful advertising agencies of today, who are desperately trying to remake themselves and are jealous of the power of the Valley to *remake* the world. Google [x], Facebook and Oculus Rift aren’t risk-mitigation strategies. They are attempts to bootstrap capitalism and the economy (such as it is) into the future – not quite the terraforming-for-capitalism that’s happening in developing countries, where the internet is literally being dropshipped in from high altitude.

But this wasn’t about that. This was about seeing.

We don’t have eyes into the right kind of future, I think. I’m spending a lot of time thinking about smartness and what that means, and what a viable smartness might mean in terms of the networked devices that we build around ourselves. The existing models that we have seem predicated upon some sort of weird ’50s boomer home automation – they’re missing Robby the Robot, but all this business of “wouldn’t it be awesome if the lights automatically came on when you got home and the TV turned to the right channel” and so on strikes me as mundane: not in the sense of fantastic technology that we’ve become accustomed to and that no longer provokes sensawunda in us, but mundane in the sense of lacking imagination as to how things could truly be different.

There’s a fog in seeing – a sort of inability to think about what will happen in the short-to-medium term. I think Charlie Stross has written about this before – he’s certainly exhibited a tendency not to be able to cope with the fact that, if a singularity is going to happen, things are going to be *too* different in the medium future, and just weird in the near future.

I put this down to a failure of fiction, in a way. The type of home automation and smartness that we’re talking about is something that we’ve seen since the late ’80s with, you’ve guessed it, Star Trek: The Next Generation. “Computer, lights” and “Earl Grey, hot” voice commands are within our reach but oh-so-boring.

I want to see something else, something surprising, something funny. My son’s first attempt at humour was when he stopped nodding along to the song Baa Baa Black Sheep and instead shook his head – the sort of unanticipated surprise (No! No wool!) that jolts you out of familiarity. Lights coming on, music playing, heating done just right – that’s not *smart*. In a way I admit to shifting the goalposts, but that’s only because I feel like we’ve seen this type of smartness for so long.

In fact, comedy and satire – the stylings of Red Dwarf’s Talkie Toaster, Douglas Adams’ sighing doors and Richard Morgan’s AI-run hotels that keen desperately for guests in an almost homicidal way – are the old examples, the ways in which we see slightly broken versions of the future, where the edges are sharp and not roundrects, where things work, but not quite, in ways unanticipated.

I think part of it is seeing, in a way, what would make a smart home not necessarily smart, but a *home*. What’s smart technology with a patina, a pattern of usage and wear, that feels comfortable as opposed to purely utilitarian? Smartness right now is a lightbulb whose colour you can change without having to get up – if your phone is charged and on.

Again, this is a kind of seeing where Laura Ashley or Target makes smart-home, almost-throwaway pieces – where the emphasis isn’t on the “smart home” side, but on the “stuff you want to live with” side. Again, it feels like the use case that’s going to make this stuff go mass-market isn’t necessarily the remote-control, long-pointing-finger aspect, but a lower need. What are smart-home objects, for example, that make you *feel* safe?