Episode Sixty Two: Wearables Unworn; Look At What They Want
1.0 Wearables Unworn
Via the Guardian, news of an Endeavour Partners white paper finding that one-third of people abandon their wearables after about six months[1]. Endeavour reckons that one in ten US consumers (from households with internet access - at least 75% of all households) own a "modern activity tracker" - presumably a dedicated one, not including smartphones. Unsurprisingly, owners skew male and young, and the majority are focussed on fitness and health. Worse than one-third stopping after six months: of everyone who *has* owned one, more than half no longer use it.
So here's a bit of anecdata for you: after my initial experiment with self-quantification[2], I too stopped using my devices after roughly six months. It turns out that *less* than six months is enough to initiate a habit, or a cluster of habits like diet and exercise, but that in my case any number of events happening at the same time (a family illness, my partner moving away to deal with the family illness, a baby on the way and a job change) were reasonable triggers, or opportunities, to drop out of those habits.
The white paper goes into a bit of detail on three factors that might increase long-term engagement, but at this point I start to hear gamification alarm bells going off in my head. For example, there's the tactic of baby-stepping goals and steadily increasing the amount or type of work to introduce new challenges, but right now that just reminds me of grinding in WoW, or of yet more devices that will help people ascend the same hedonic treadmill.
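To make the alarm bells concrete, here's a minimal sketch of that tactic - a baby-step goal that quietly ratchets upwards whenever it's met. Everything here (the function, the numbers) is hypothetical, mine rather than the paper's:

```python
def next_goal(current_goal, met_goal, increment=0.05):
    """Baby-stepping: if the user hit this period's goal, raise it
    a little; if they missed it, hold steady so they don't churn.
    The values are made up - the point is the ratchet."""
    return current_goal * (1 + increment) if met_goal else current_goal

# Ten successful weeks of a 5,000-step goal and the treadmill
# has quietly sped up by more than 60%.
goal = 5000.0
for week in range(10):
    goal = next_goal(goal, met_goal=True)
print(round(goal))  # ~8144 steps
```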
One other thing that I'm still a little fixated upon is what happens when wearable devices are successful. From a mental health point of view, one of the things I realised about myself was that I was only really happy when the numbers were trending in the right direction. When blood sugar and weight were coming down, that was great. But the wearable systems we have at the moment aren't particularly good (and obviously that depends on how you define 'good') at dealing with what happens when the numbers are trending in the wrong direction. Or with what happens when you don't need to progress towards a goal state, because your goal state is instead a *steady* state. For me, the stereotypical language of something like the Nike FuelBand - exhortations to Crush It and Win The Day - implies (again, this may be idiosyncratic) a relentless striving for betterment that may not be each and every user's goal. Maintenance, as opposed to excellence, requires a different type of design.
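Here's a minimal sketch of what I mean by that difference: a progress score that rewards movement towards a target, versus a maintenance score that rewards staying inside a healthy band. Both functions and all the numbers are hypothetical, not any real tracker's scoring:

```python
def progress_score(current, start, goal):
    """Goal-state design: score rises as `current` moves from
    `start` towards `goal`. Great while the numbers trend the
    right way; it has nothing left to say once you've arrived."""
    total = goal - start
    if total == 0:
        return 1.0
    return max(0.0, min(1.0, (current - start) / total))

def maintenance_score(current, target, tolerance):
    """Steady-state design: full marks for staying near `target`,
    decaying with distance in either direction. No Crush It,
    just 'you're still in range'."""
    return max(0.0, 1.0 - abs(current - target) / tolerance)

# Hypothetical blood-sugar readings: the progress score maxes out
# and goes quiet; the maintenance score keeps rewarding steadiness.
print(progress_score(current=100, start=180, goal=100))          # 1.0
print(maintenance_score(current=104, target=100, tolerance=30))  # ~0.87
```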
[1] http://endeavourpartners.net/assets/Wearables-and-the-Science-of-Human-Behavior-Change-EP4.pdf
[2] http://danhon.com/2012/04/28/myself-quantified/
2.0 Look At What They Want
I am, obviously, still thinking about design, empathy and the internet of things. There are a bunch of thoughts flitting about in my head, and in some ways it might just be easier to jot them down:
- Moore's law applied without due consideration or smartness means the measurement of everything that can be measured: which may lead to naive optimisation, or to a local maximum that's not necessarily the fittest peak (there's a minimal sketch of that kind of naive hill-climbing after this list). For example: cries for services like Uber to minimise the interaction you need to have with the flesh-printed actuator (er, human driver) that will take you to your destination. Sure, that's a service-level minimisation-of-interaction optimisation, but it strikes me as one that has larger second-order effects. You might say that you can mitigate stuff like that by having "good designers" who can take the tradeoffs into account, but hey, that means we need a bunch more good designers.
- These devices aren't necessarily anthropomorphised. They may well end up being so, because anyone who's trying to maximise getting data out of human beings is going to quickly realise that evolution left an unsecured back door that would be useful for social engineering in the future: we like to help other people and we trust things with faces. We can't help it, and we spot them everywhere.
- In fact, the whole phrase "evolution left an unsecured back door" strikes me as a wonderful way to explain and think about the way we're wired to recognise and respond to a) other humans and b) other human-type things. The whole concept of humans making human-looking things to explicitly take advantage of those back doors is fascinating. But, I expect, not an original thought, and hey, conmen have existed forever.
- But we're not conning you, right? That's the promise. We come from the internet and we're here to save the world. To bend it to our will. We will apply the pure power of the algorithm, which can never go wrong, can never sleep... "Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him, and it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up. In an insane world, it was the sanest choice."
Of course, the Terminator is a fictional machine with stupendously good AI.
- And most people, most companies, aren't *actually* trying to bring about some sort of SF-esque dystopia, right? I mean, General Mills is only trying to limit its liability in a litigiously hostile world and "all the other corporations are doing it" - even Dropbox did it the other day, remember - so when we're talking about legally waiving your right to sue when you use an internet-connected washing machine, hey, that doesn't matter so much, because your washing machine is awesome and knows how to order replacement parts.
- The big bad here isn't, I don't think, technology at all. Technology is just the easy target for humans being humans. The promise, of course, should be that the internet is our best bet so far for unmediated, honest communication between parties (even if those parties are honestly being dishonest, I guess), and that we should try and preserve that. And, to an extent, the internet kind of tries to route around damage like that, or at least is so widespread and so weakly diffuse (weak in the sense that it's so distributed that a vulnerability like Heartbleed can fuck us over, because we're not centralised enough to fix it in one place) that it's kind of easy-ish to boost signals that aren't desired by the evil corporations (not all corporations, just the evil ones).
- In that way, the internet is our best hope for user-centric companies and services, right? Because the internet is just a connection machine: it opens a port here, connects it to a port there, and stuff flows between the two. And right now, most - I think? - of that traffic is human-initiated. Like Ev said at XOXO, it's a desire-fulfilment machine (and that's a rather depressing way of looking at it).
- Or, the deal is this: if O'Reilly says that the Internet of Things is made cooperatively with humans (and I believe him - it is), then it's cooperatively made with *people*. And, I hope, the people, services and companies that are going to win are the ones that remember the people part. I don't particularly want to live in a world where I'm considered a lowest-common-denominator fleshy actuator or endpoint, bidding for work requested by either another fleshy actuator or an algorithm. On the other hand, algorithms might make better bosses.
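On the first point above, here's the promised sketch of naive optimisation stranding you on a local peak. A greedy hill-climber on a made-up fitness landscape with two peaks settles on whichever one is nearest, not the fittest; the landscape and both functions are hypothetical, purely for illustration:

```python
def fitness(x):
    """A toy landscape with two peaks: a local one at x=2
    (height 1) and the global, fitter one at x=8 (height 2)."""
    return max(0.0, 1 - (x - 2) ** 2) + max(0.0, 2 - (x - 8) ** 2)

def hill_climb(x, step=0.1):
    """Greedy optimisation: only ever accept a move that improves
    the metric right now, with no notion of the wider landscape."""
    while True:
        best = max((x - step, x, x + step), key=fitness)
        if best == x:
            return x
        x = best

# Starting near the small peak, the climber never finds the big one.
print(hill_climb(1.0))  # settles around x=2 (local peak, height 1)
print(hill_climb(7.0))  # settles around x=8 (global peak, height 2)
```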
I haven't finished reading Kevin Kelly's What Technology Wants. I have a copy that's been sitting on my bedside table for months now. I rather think that if the supposition *is* that technology is our servant, rather than our master, then Kelly needs to have another think about what the Internet of Things and its protuberances, like the Quantified Self, are doing. Sure, I can *choose* to use something like a FuelBand to maintain my wellbeing, but I should ask what the FuelBand wants, too. Right now, my FuelBand wants access to all of my movement, every second of the day, in service of its goal. But its other goals are opaque. It may well want to lock me into an ecosystem. It may well want me to move every hour. The point, as I'm reminded from conversations with friends far smarter than I am, is that it's a lot easier to deal with servants when you know what their motivations are and why they're doing things. You can make that interrogation process easy, or you can make it hard, or nigh-on impossible and opaque.
--
Have a good weekend,
Dan