s2e19: Uncanny Valley; Network Du Jour; Streaking Considered Harmful
0.0 Sitrep
Somewhere, in the various kinds of memory that make up this laptop, there is a model of a neural network. That neural network has been trained on the hundred and fifty-odd episodes of my newsletter and is trying to teach itself a) how to write English, b) how to write like me, and thus c) how to write a newsletter that looks like it was written by me. So far, it hasn't done very well:
So the thing about part of this is a little bit of characters have a network that is a bit of life in the villera memory of the generation of something else. Don't look at a connection of the Valley syncate and it's going to take the time to be a problem with a financy. There are a whole bunch of edits of a money bothering what they do incredibly because it can imagine a little bit like Chinese Shoeldoo[c] - the Internet. So what stack me from some problems and the last ten year old Google and I think I might be clear to more so you can't just. Not here. Service is being that much shows something that start than on the internet and the internet's concernnation of the target place to make an organisation is that the real heading the real complex that's a problem that appropriate the algorithm.
I have run into problem number one of machine learning, which is this: you end up having to tweak a bunch of variables and train the model over and over and over again until you end up with a good one. Which is what I'm doing right now, only it looks like I can't get OpenCL working quite right because ¯\_(ツ)_/¯, so each training iteration is taking roughly 0.9 seconds and I'm running around 34,000 iterations. At this point, I'm seriously considering spinning up a GPU instance on AWS and doing the whole thing over in the cloud, which is a bit more cyberpunk of me: instead of a neural net learning to emulate my stream-of-consciousness newsletter typing right here on my own laptop, it'll be doing it on some server somewhere, and I'll have no idea where that server actually is. That, I suppose, is 2015.
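For a sense of scale, here's the back-of-envelope arithmetic as a quick Python sketch; the 10x GPU speedup is purely an assumption for illustration, not something I've measured:

    # Back-of-envelope: 34,000 iterations at roughly 0.9 seconds each.
    seconds_per_iteration = 0.9
    iterations = 34000

    total_hours = seconds_per_iteration * iterations / 3600
    print("CPU-only, on this laptop: ~%.1f hours" % total_hours)  # ~8.5 hours

    # Hypothetical: a GPU instance running each iteration 10x faster would finish
    # in under an hour. The 10x figure is an illustrative assumption, not a benchmark.
    assumed_gpu_speedup = 10
    print("With an assumed %dx speedup: ~%.0f minutes"
          % (assumed_gpu_speedup, total_hours * 60 / assumed_gpu_speedup))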
1.0 Uncanny Valley
First: all of the accidents involving Google's self-driving cars were caused either by a human taking over from the self-driving system or by someone else driving into the self-driving car[0]. I wonder if there's a sort of uncanny valley of driving where, yes, the car's driving itself, but it's doing so in a manner that's unpredictable or downright strange to *all the other drivers out there* because it's driving by the book. In that: it's obeying the speed limit and it's turning into the right lane and it's keeping its distance and all of that stuff. In this case, a self-driving car wouldn't match any other driver's mental model of how a car on the road behaves - because it's not a human that's driving it.
This feels a bit like the Turing Test for a self-driving car: a car that other people think is being driven by a human, only more safely, rather than one that actually is being driven by a regular human, not so safely. Relatedly, via Alexis Madrigal's newsletter, the amusing idea of giving computer vision systems Rorschach tests[1].
[0] Google's self-driving car was involved in its first accident with injury after being rear-ended
[1] Move over, Turing — Medium
2.0 Network Du Jour
When we invented switching exchanges, suddenly everything became a switching exchange and the brain was a switching exchange. Then we invented computers, and suddenly the brain became a computer with bits of Turing-style ticker-tape inside it. Now we have networks and graph theory everywhere, so now we're looking at our genome as a graph[0] instead of... well, instead of whatever it was we were thinking it was before. Because, I assume, when all you have is a set of map/reduce operations efficiently traversing nodes, everything looks like a nail in a network.
[0] Graph Genome Offers New Reference Map of Human Genes | MIT Technology Review
3.0 Streaking Considered Harmful
I got ill recently, the kind of ill that, although not kill-you-dead serious, did force me to take things easy for at least two weeks. The thing is, most - I won't say all - of the personal tracking devices, software and services that I was using at the time (and still am, I suppose) don't do a particularly good job of handling someone who was well, then got ill, and then had to rest for a bit.
For starters, the approach to behaviour modification in a lot of these services feels a bit skin-deep. Lark, an app that stands out to me because of how well-written its conversational interface is, is one of the rare apps that I'd say is reasonably kind to you and wants to take a long-term view. In this case, the kind, long-term view is the one that says it's OK to have an off day, because tomorrow is another chance to do something different. That doesn't stop Lark from recognising progress and building up a good habit when it sees one - if your activity profile is trending in the right direction for a few days, it will tell you. But it will also make a point of not resetting a streak to zero just because you had an off day (because it doesn't count streaks in the first place!).
I guess the point is this: data is brutal, and it doesn't know that the universe is unkind, uncaring, and doesn't give a shit about you. You can be an amazing person who has hit your 12k-steps-per-day goal every single day for over a year - congratulations, you! - and then have it reset to zero by something completely out of your control. Most of the software and services we have right now aren't particularly a) empathetic to the fact that this happens, or b) sympathetic when it does. They just reset you to zero, and if you want to get anywhere near your previous record, you have to knuckle down and start again. That just doesn't work for long-term behaviour change. It increases the cost of failure every single day - it's as if you're climbing a staircase that gets narrower with every upward step you take.
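To make the difference concrete, here's a minimal Python sketch of the two approaches; the function names, the seven-day window and the numbers are all made up for illustration and aren't from Lark or any other real app:

    # Two ways a tracker could respond to one bad day after a year of good ones.

    def streak_after(daily_steps, goal=12000):
        """Classic streak counter: one missed day resets you to zero."""
        streak = 0
        for steps in daily_steps:
            streak = streak + 1 if steps >= goal else 0
        return streak

    def trending_up(daily_steps, window=7):
        """Kinder alternative: is the recent average heading the right way?"""
        recent = daily_steps[-window:]
        earlier = daily_steps[-2 * window:-window] or recent
        return sum(recent) / len(recent) >= sum(earlier) / len(earlier)

    year_then_sick_day = [12500] * 364 + [300]             # a year of hitting the goal, then a day in bed
    print(streak_after(year_then_sick_day))                # 0 - a year of work, erased
    print(trending_up(year_then_sick_day))                 # False this week...
    print(trending_up(year_then_sick_day + [12500] * 7))   # ...True again once you're back on your feet

The point of the sketch is that the first approach throws away a year of context because of one data point, while the second barely notices and recovers on its own.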
So: I really don't like streaks. I really don't like systems that purport to be simple (i.e. just set a goal to hit every day!) but don't take into account the fact that one day your goal might be achievable and the next day it just might not be. Or that one day's goal might be 10,000 steps and the next day's might be 300 because hey, you just really, really, really needed to spend the day in bed. That's not kind. That's, well, just a bit dickish.
--
Of course, this could be the output of the neural net right now and you wouldn't even know, would you.
Look, a slug.
Best,
Dan