s3e02: Deep Dream for Depression; Unconscious Bias Prophylactic
0.0 Station Ident
8:45pm on Thursday, March 24th, 2016. There's a plane flying overhead and I'm in the den in our house. No music playing, and the Safari window is full screen. Sat on the sofa, feet up on the ottoman. No distractions other than this text window. More of that later, I expect.
1.0 Deep Dream for Depression
I've been lucky enough to have spent the last four to five weeks in intensive treatment for severe anxiety and depression. I say lucky, because at that point, it felt like the only alternative *I* had would've been somewhat... terminal. So right now, I'm probably sitting here in one of the best of all possible worlds.
In the type of intensive treatment I was in, you get taught a lot about how we think brains work, and how you can learn techniques and skills to effectively re-program your brain. One of the best analogies I could come up with, and that I shared with my group, went a bit like this:
We all remember the Google Deep Dream images from last year: the Cthulhu-esque, nightmarish images of puppies and slugs[1]. At a macro level, this is how those deep dream images were generated:
- set up a neural network that is a few layers deep - so one set of neurons produces an output that goes into another set of neurons and so on;
- train that neural network to be really good at finding, say, dogs, by showing it lots of pictures of dogs and not-dogs and telling it which ones are which;
- now, instead of showing it a picture of a dog, show it a picture of something - anything - and tell it, essentially, to try *really really really hard* to find anything that it thinks has any dogginess in it, and get it to highlight that area;
- feed what came out of the last step back into the neural network again so it keeps trying harder.
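If it helps to make that loop concrete, here's a minimal sketch of those steps in PyTorch. To be clear, the specifics are my assumptions, not Google's actual recipe: torchvision's pretrained GoogLeNet stands in for the "dog-finder", the `inception4c` layer is an arbitrary choice of what to amplify, and the step size and iteration count are purely illustrative.

```python
import torch
import torchvision.models as models

# Stand-in "dog-finder": a pretrained image classifier.
model = models.googlenet(weights="DEFAULT").eval()

# Capture the activations of one intermediate layer via a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(layer=output)
)

# Start from "anything" - here, random noise.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

for _ in range(100):
    model(image)
    # "Try really really really hard": treat the layer's overall
    # activation as the thing to maximise...
    loss = activations["layer"].norm()
    loss.backward()
    with torch.no_grad():
        # ...and take a gradient *ascent* step on the image itself, so
        # whatever the layer responds to gets amplified, and the amplified
        # image goes straight back into the network on the next pass.
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.clamp_(0, 1)
        image.grad.zero_()
```

The whole trick is in that last comment: nothing here asks whether there *is* a dog in the picture. The loop only ever asks "where is the most dog-like thing?", rewards it, and repeats.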
If I may unhelpfully anthropomorphise, I think it's a bit like this. You've got a little pet that you've trained to tell you if it sees a dog in a picture. You show it anything - static, say - and ask it to try really hard to find a dog. It will find one! Do you know why? Because its entire job is to find dogginess. It's a dog-pattern-matching machine. Its raison d'être is not, say, to watch television so its owner can do something else; it burns, it lives, just to see dogginess in the world. That's all it's supposed to do.
Our brains are a bit like this. This shouldn't be surprising, because the design of recurrent and convolutional neural networks like the ones that produce Deep Dream images and nearly-English-but-not-quite, or slices of Friends episodes from the possibility space of All Possible Friends Episodes, is based on what we know about our own brains and all the other brains we've had the fortune to take to pieces to try to figure out how they work.
Our brains are really, really, really good at pattern matching and pattern finding. There's a word for this: pareidolia[2]. I first encountered that word when the collective intelligence that was playing the alternate reality game The Beast would find signal in noise, meaning where there was none. An overactive network of some of evolution's fittest pattern-matchers egging each other on, hunting for stimulus.
Negative thoughts about yourself are a pattern. Your brain can be trained to find them. It can practice finding them, and it can get rewarded for finding them. There's no "truth" to a dogginess pattern-matcher. Equally, there's no "truth" to a negative thought/depression finder. It's just its job. And it's really good at its job, so wherever it looks, *it will look really hard* and find what it's looking for. Amp up the gain. Enhance the signal. Derive meaning from noise. But it's not truth. It's just a pattern that's been found. And that pattern doesn't have any inherent truth to it.
In the same vein? It's not quite the killing joke. But, I've come to believe, there are thoughts and patterns of thoughts that are the equivalent of Douglas Hofstadter's This Record Cannot Be Played On Record Player X[3]. A sort of incompleteness theorem for the human brain. A set of thoughts that, if executed, will destroy the hardware upon which they're run.
Good job there's a way to introduce an outside context intervention, right?
[1] Research Blog: Inceptionism: Going Deeper into Neural Networks
[2] Pareidolia - Wikipedia, the free encyclopedia
[3] Gödel, Escher, Bach - Wikipedia, the free encyclopedia
2.0 Unconscious Bias Prophylactic
Ex-colleague Fureigh[1] (former Code for America fellow, now helping to literally fix government at the sterling 18F) tweeted about a thing today that I'd thought about before, only Fureigh did what most people don't do, which is actually follow through with an idea.
Unbias-me[2] is a Chrome extension that lets you review GitHub and LinkedIn profiles with minimal attribution information. The readme that Fureigh wrote for the extension is fantastic for a number of reasons, not least of which is the way they wrote the "Why [does this exist]?" section with a cut-to-the-chase about the fact that unconscious/cognitive biases exist[3]. They just do. It's 2016. Some people might throw their hands up at this and say, well, if we've got unconscious biases then we might as well give up, but those people appear to be ignoring the fact that we do actually have a rational brain somewhere in there that can override biases *if we want to*. It's just hard work.
I think unbias-me is fantastic. I remember thinking sometime last year, as I was going through job applications on whatever recruitment-screening-as-a-service we were using at the time, that given what I'd read, it'd be great if I *didn't even know the names* of applicants when I was in the reviewing process. With names come biases about race, gender, sex and ability. And, with traditional and stereotypical product management excess, I just figured that it would be "easy" to offer blinding to people who wanted to blind applicants. Because hey, if enough people started doing it, why *wouldn't* you blind? What have you got to hide, right? Are you really saying that it matters to you whether an applicant is called Jordyn or Olivia? I mean, you don't want to say that, right?
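And the mechanics of the blinding I had in mind really are the "easy" part; the hard part was never the code. Here's a toy sketch of the idea - the record fields and the choice of what counts as "identifying" are entirely my own invention, not how unbias-me or any real screening tool actually works:

```python
# Toy sketch of blinding applicant records before review.
# Field names here are hypothetical; real applicant-tracking systems differ.
IDENTIFYING_FIELDS = {"name", "email", "photo_url"}

def blind(applicant: dict) -> dict:
    """Return a copy of the record with identifying fields redacted."""
    return {field: ("[redacted]" if field in IDENTIFYING_FIELDS else value)
            for field, value in applicant.items()}

applicants = [
    {"name": "Jordyn", "email": "jordyn@example.com", "resume": "10 years of..."},
    {"name": "Olivia", "email": "olivia@example.com", "resume": "8 years of..."},
]

for applicant in applicants:
    print(blind(applicant))  # the reviewer never sees a name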
So anyway. More of this please. And good job, Fureigh.
[1] fureigh (@fureigh) | Twitter
[2] GitHub - fureigh/unbias-me: A Chrome extension to let you review GitHub and LinkedIn profiles with minimal attribution information.
[3] Cognitive bias - Wikipedia, the free encyclopedia
--
It's 9:09pm, and around 1,100 words later. Thanks for reading and, as ever, I love to get notes from you, even if they're just saying hi.
Best,
Dan