s3e03: Peaks and Troughs; Singletasking
0.0 Station Ident
1:33pm on Friday 25th March 2016 at the XOXO Outpost[1] in Portland, which is happily going to be my new work home for the foreseeable future. Light music on in the background, typing at a standing desk with a Diet Coke to hand (only the first one of the day, mind).
[1] XOXO Outpost
1.0 Peaks and Troughs
Today, some weak and not-so-weak signals and thoughts about AI and services and the whole general shebang of things-becoming-digital, I suppose.
Over the last couple of days, I've been thinking about Microsoft's Tay[1], in Microsoft's words, "an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding." For whatever reason, Tay behaves *in some ways* a bit like an Eliza bot - mindlessly and *without apparent understanding* repeating what its users tell it. Because this is the internet and because humans are involved, the predictable happened the moment any sort of community interaction was opened up.
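(A quick aside: by "Eliza-style" I mean something like the toy below, a completely made-up sketch in Python and emphatically *not* how Tay actually works under the hood: reflect a couple of pronouns, hand the user's own words straight back, understand precisely nothing.)

```python
# A toy, hypothetical sketch of Eliza-style parroting: reflect a few pronouns
# and hand the user's own words straight back. No understanding involved,
# which is rather the point.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "you": "I", "your": "my", "am": "are", "are": "am",
}

def parrot(utterance: str) -> str:
    """Echo the user's input back with naive pronoun reflection."""
    words = utterance.lower().rstrip("?!.").split()
    reflected = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(reflected) + "?"

if __name__ == "__main__":
    print(parrot("I think you are wonderful"))
    # -> "Why do you say you think I am wonderful?"
```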
For Tay, this means headlines like "Microsoft terminates its Tay AI chatbot after she turns into a Nazi"[2] which, to be fair, aren't strictly true because it's not like Tay *is* a Nazi, she's just saying Nazi-ish things (to which I acknowledge that if it looks like a Nazi and tweets like a Nazi then it's probably... not a thing that we should anthropomorphize and treat as having an inner life and conscious understanding of its actions, to which the reply to *that* is: well, sucks to be Microsoft for being the ones who anthropomorphized the bot in the first place. Stop hitting yourself in the face!).
Conversational interfaces and purported artificial intelligences? Congratulations, you are now firmly skiing down the slope from the Peak of Inflated Expectations into the Trough of Disillusionment[3], Thanks To Overexcited Communications and Marketing Professionals.
The opposite end of the spectrum (well, no, not really, more of a benign and humorous example) is someone suggesting that Britain's next polar research ship should be named RRS Boaty McBoatface[4]. This is all, of course, People Dicking About On The Internet, something which Derek Powazek writes well about in his post that's ostensibly about building a chicken coop[5] but is actually an *incredibly* insightful and relevant piece of writing about what happens when you put humans in a network.
The whole business with Tay is perplexing if you think about Microsoft as some sort of command-and-control organism where everyone knows what everyone else is doing instead of, well, a 21st century corporation with bits that aren't so co-ordinated. I mean, do you know how much work your brain does to integrate all of the data and input that makes up your consciousness' sensorium and then make it appear like it's all happening at the same time? The amount of time it takes for a signal from your feet to make it to your head isn't insignificant. This is, of course, as good a time as any to link to Venkatesh Rao's post about org charts[6], which for some reason I thought had that map of clocks across CONUS but on preview doesn't. Anyway. Yeah, it does seem a bit weird that Microsoft, the corporate organization that houses the very good danah boyd[7], would do something like Tay because honestly *what do you expect when you ask people to talk to a bot*. I mean, here's Matt Locke talking about what he learned having teens talk to a bot 16 years ago[8].
OK, so while I'm here, Things That Don't Make Sense About Tay:
- Microsoft implied that they were doing some sort of grand unsupervised learning experiment, and unsupervised learning is *one of the biggest challenges* facing artificial intelligence right now. My understanding is that the biggest and most recent wins we've had in AI are the supervised kind: pretty much solely down to a combination of labelled datasets like the MNIST digit set[9] and the recent viability of neural network approaches thanks to frankly insane amounts of GPU silicon (there's a rough sketch of what that supervised recipe looks like below).
- Someone at Microsoft and Bing would've known this!
- But hey, conversational interfaces are super hot right now[10], didn't you read that blog post about what's happening in China last year?[11] Even Quartz, a news organization, has one![12]
- Quick, let's release a bot!
But, of course, humans. So ¯\_(ツ)_/¯.
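(To make that first bullet a bit more concrete: the recent wins are mostly of the supervised flavour, labelled examples in, trained model out. Here's a rough, hypothetical sketch using scikit-learn and its bundled 8x8 digits set as a stand-in for MNIST; real systems use vastly bigger networks, vastly more data and those frankly insane amounts of GPU silicon. The point is that the labels do the teaching, which is exactly what an open Twitter feed doesn't give you.)

```python
# A rough sketch of the supervised-learning recipe behind most recent AI wins:
# labelled data in, gradient-trained model out. Uses scikit-learn's bundled
# 8x8 digits set as a stand-in for MNIST; nothing like production scale.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1,800 labelled 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, test_size=0.25, random_state=0)

# A small multi-layer perceptron: the labels supervise the learning.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```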
My only irreverent quip, I suppose, was that Skynet didn't decide to wipe out all humans after it started learning at a geometric rate back in 1997; it decided to wipe out all humans because it talked to them on Twitter and decided it hated them[13].
[1] Meet Tay - Microsoft A.I. chatbot with zero chill
[2] Microsoft terminates its Tay AI chatbot after she turns into a Nazi | Ars Technica
[3] Gartner's 2014 Hype Cycle for Emerging Technologies Maps the Journey to Digital Business
[4] Man behind RRS Boaty McBoatface disavows his name for polar vessel | Environment | The Guardian
[5] Today I Built a Chicken Coop — Medium
[6] The Amazing, Shrinking Org Chart
[7] danah boyd
[8] What Microsoft could learn from actual teens about designing fake teen chatbots — Medium
[9] MNIST handwritten digit database, Yann LeCun, Corinna Cortes and Chris Burges
[10] Beyond the GUI: It’s Time for a Conversational User Interface | WIRED
[11] Dan Grover | Chinese Mobile App UI Trends
[12] Quartz's amazing news app turns it into a conversation
[13] Dan Hon is typing on Twitter: "It wasn’t Skynet learning at a geometric rate that led to it killing all humans. It’s when it talked to other people on Twitter."
2.0 Singletasking
I'm on a new course of therapy[1] which, amongst other things, emphasizes paying attention and doing one thing at a time. I watched an introductory video last night (on an old standard-def CRT television on a trolley cart, no less) where I had to try *really hard* to ignore the funky 1990s infomercial production values and concentrate on the actual content. Or, in other words: all those TED talks about mindfulness and meditation and slowing down your thoughts.
To which: there's a nugget slowly gnawing away at my brain, because the work that I'm about to start doing, the work that pays the bills, is going to require me to do lots of: a) thinking, b) reading, c) annotating, d) writing.
There's a pattern that's been installed in my mind (one I've examined a bit) that's shouting iPad Pro! iPad Pro! at me, mainly because I'm trying to be much more mindful of *doing one thing at a time*: just writing, and not going insane checking Twitter and Hacker News and whatever other multiple streams I have going on.
Part of considering going singletasking is this: although I understand and rationally know that us humans merely *think* we're good at multitasking, and that studies have definitively shown that we're really quite shit at it, there's a part of me that's worried that *not* exposing myself to the stupendous amount of stimulus that I currently do is going to affect my ability to notice patterns and pull interesting signals out of the noise. Will I still be able to do that if I'm just doing one thing at a time? Or, I guess, will I just decide to spend some time dipping into the noise and *just doing the noise*?
Anyway. Right now, there's a cascade of thoughts that are saying: wouldn't it be interesting to switch to an iPad Pro as a primary thing-that-helps-me-get-certain-kinds-of-work-done, as a sort of extreme way of removing myself from the distraction-friendly environment of a regular 1990s-style windowed multitasking desktop?
Any of you lot using iPad Pros? What're they like?
[1] Dialectical behavior therapy - Wikipedia, the free encyclopedia
--
2:08pm, and around 1,200 words. Have a good weekend, everyone.
Best,
Dan