s3e30: The Generative Surplus 

by danhon

0.0 Station Ident

10:25am on Monday October 24, 2016. I’m writing this not listening to any music but instead have 1992’s hacker-caper-classic Sneakers on in the background. Right now, Bishop is getting some help from the tiger team through his earpiece on defeating a particularly sophisticated combination door lock. In other news, one of the unintended consequences of an attention-based economy is that culture continues to be strip-mined. It’s easier to acquire (and retain?) people’s attention with familiar things rather than novel ones [citation needed], and when we’re talking about expensive creative projects, that means a TV series reboot of said Sneakers movie, no doubt justified as relevant these days because, have you heard, the Internet exists and continues to stubbornly persist in weaving itself into our lives. Maybe – just maybe – the Sneakers reboot could be a little light of humour and accurately-placed optimism, rather than the hellish landscape shown by Mr. Robot, Westworld, Black Mirror, Scorpion and, well, everything else.

Anyway, let’s get straight into things.

1.0 The Generative Surplus / Fractional Creative Generation

Here are the things that have coalesced into something in my head. There are people smarter and more knowledgeable than me who can fill in all the [citations needed], so take all of this with the usual caveat that I’m just an amateur connection-maker and I don’t have any fancy paper qualifications. All I have are my ideas and the words I use to express them. So.

Item: Russell Davies writing about bots and brand taglines[0]. Russell makes the core point that I’m going to make here: what happens when we think of generative technologies as tools for creative people? To demonstrate this, he’s made a Twitter bot, taglin3r[1], that creates a generic tagline on the hour, every hour and, embedded in that Campaign article, a JavaScript bot that just creates a never-ending stream of taglines. This isn’t even convolutional/recurrent neural net stuff; this is just a process where you do a bunch of brand-strategy workshops with a client, “chuck [the words you find] in the bot”, generate (not even procedurally! just… generate!) a few hundred taglines, have a quick look over to remove the ones that are accidentally inappropriate, and then… you don’t even have to choose? Like, you can just put them all in a bunch of banners and see which one sticks. This isn’t *cheating*, right? Is it?
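
Russell hasn’t published the bot’s internals, so here’s a minimal sketch of what “chuck the words in the bot” might look like – all the word lists and templates below are made up for illustration:

```python
import random

# Hypothetical output of the brand-strategy workshops: words the client liked.
NOUNS = ["tomorrow", "life", "possibility", "you"]
VERBS = ["imagine", "discover", "unlock", "create"]
ADJECTIVES = ["better", "bold", "simple", "human"]

# A handful of stock tagline shapes.
TEMPLATES = [
    "{verb} a {adjective} {noun}.",
    "{noun}, {adjective}.",
    "{verb} {noun}.",
    "the {adjective} way to {verb}.",
]

def generate_taglines(n, seed=None):
    """Generate n taglines by filling random words into random templates."""
    rng = random.Random(seed)
    return [
        rng.choice(TEMPLATES).format(
            noun=rng.choice(NOUNS),
            verb=rng.choice(VERBS),
            adjective=rng.choice(ADJECTIVES),
        ).capitalize()
        for _ in range(n)
    ]
```

Run `generate_taglines(300)`, skim the list for the accidentally inappropriate ones, and stick the rest in banners. No neural nets harmed in the making.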

Item: Wonderful writer and newsletter reader Robin Sloan made a thing: a recurrent neural network text generator, trained on a corpus of old science-fiction stories and wired into an extensible text editor[2]. Please do take a look at the link because there’s a gif in there that shows, better than tells, how his thing works. It’s a responsive, inline “autocomplete” powered by an RNN for a specific domain. Another creative partnership: Robin describes it as “writing with a deranged but very well-read parrot on your shoulder. Anytime you feel brave enough to ask for a suggestion, you press tab…” Again, “the animating ideas here are augmentation; partnership; call and response.”
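
Robin’s tool wires a char-RNN into a text editor, but the call-and-response shape – given what I’ve typed so far, sample a continuation from a corpus – survives even when you strip the neural net out. Here’s a toy bigram version, with a stand-in corpus I made up:

```python
import random
from collections import defaultdict

# Stand-in for Robin's corpus of old science-fiction stories.
CORPUS = "the ship dreamed of the stars and the stars dreamed of the ship"

def train(text):
    """Build a bigram table: word -> list of words observed following it."""
    table = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def suggest(table, prompt, length=5, seed=None):
    """Press tab: continue the prompt's last word with a sampled chain."""
    rng = random.Random(seed)
    word = prompt.split()[-1]
    out = []
    for _ in range(length):
        options = table.get(word)
        if not options:
            break  # parrot has nothing to say
        word = rng.choice(options)
        out.append(word)
    return " ".join(out)
```

Same partnership, much dumber parrot: `suggest(train(CORPUS), "the ship")` hands you a few words, and you decide whether to keep them.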

Item: the amazing Tumblr collection Object Dreams[2a]: reviews written by predictive text, and I honestly can’t tell whether they’re compelling because I know a human didn’t write them, or just because they are *weird* and *not* human and intriguing for precisely those reasons.

Item: [NSFW imagery linked] Yahoo! recently open-sourced a fine-tuned residual network that scores imagery on its not-safe-for-workness[3]. We can do with that project what Google did with Deep Dream and create a *generative* network that works, well, upside down, and can run iteratively to make not-safe-for-work images for us[4]. The para here that stuck out for me was this: “The generative capacity of convolutional neural nets are, quite simply, remarkable”. There it is again: generative capacity.
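
The upside-down trick is gradient ascent on the *input* rather than the weights: start from noise and nudge the image until the classifier’s score climbs. The real thing uses a deep residual net; this toy sketch does the same move with a single logistic unit and made-up weights, just to show the shape of it:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-in "classifier": one logistic unit with fixed, made-up weights.
w = np.array([1.5, -2.0, 0.5])

def score(x):
    return sigmoid(w @ x)

def generate(steps=200, lr=0.5, seed=0):
    """Run gradient ascent on the input so the classifier's score goes up.

    For a logistic unit, d(score)/dx = score * (1 - score) * w.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=3)  # start from noise, Deep Dream style
    for _ in range(steps):
        s = score(x)
        x = x + lr * s * (1.0 - s) * w
    return x
```

Swap the logistic unit for open_nsfw and the three numbers for pixels, and that loop is (roughly) how a classifier gets run in reverse as a generator.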

Item: I’ve written a bit about the treatment I’ve had for severe depression. One of the pieces of therapy that I’ve had has been Dialectical Behavioural Therapy, which is, I guess, one of the answers to the question “what happened after cognitive behavioural therapy?” Anyway. One core thing about DBT – and the intensive treatment program that I was on – is trying to persuade people that *thoughts are just thoughts* and that they’re not “true” things. They are just, well, if I were to borrow some terminology from the deep learning side of things – just vectors. Associations and weightings of concepts. Thoughts aren’t, therapists hasten to point out and labour, intrinsically or inherently “true”. They just are. The offered theory is that thoughts just *happen* in your brain. That your brain is an engine for making thoughts and that’s just *what it does*. Your heart is a piece of biological machinery whose job it is to pump blood; your brain is a piece of biological machinery whose job it is to “generate thoughts”. There’s a lot to argue with here, but the part that I *could* get behind is that at least there’s a *part* of my brain whose job it is to create thoughts. To generate thoughts. Sure, there’s a different structure whose job is keeping track of long/short-term goals and executive function, but there’s definitely *something* in there that has the job of receiving input and *generating thoughts*. Throw in all the usual stuff about *learning* to be depressed and well-worn pathways that predispose thinking in a certain direction, and you have a potential situation where a thought-generator (say, an association-maker) is really good at creating associations between *any* input and a resulting depressive output. But the main thing here is: there’s a bit of the brain that *generates outputs based on inputs*, and those outputs are correspondingly pretty complex.

If there’s a bit of our brains that’s a thought-generator, an association-generator that creates associations from inputs, then you can think of that association-generator as a possibility-creator or future-creator. The better your association-generator is at creating associations based on inputs, the more “thoughts” your executive function has the ability to act upon. In a sense, your association-generator is creating degrees of freedom. The more thoughts you can generate, the more potential choices you have. Of course, then you have to cull them down. But I don’t think you can choose outcomes that you’re not able to contemplate or imagine. Or, you know, have thoughts about!

I blurbed out on Twitter yesterday in response to that para about the “remarkable generative capacity of convolutional neural nets” because I’ve been thinking in this way about how *part* of what we do is generate new things. Everything is, as we’ve learned, a remix[5]. I wrote yesterday that “If homo sapiens is one of the fittest species due to its ability to pattern-match, then C/RNNs and their ilk may be an outside context problem.” I misspoke. I didn’t mean pattern-match, though that’s part of it. I meant *generate* (and yes, generation is but one part of what potentially makes us one of the fittest species on the planet).

Think of the first generation of the outboard-brain as store-and-retrieve. We have tremendous infrastructure now for offloading storage-and-retrieval of *information* from our brain. We don’t need to *remember* certain information because we can (more-or-less reliably) access and find that information externally. But store-and-retrieve is only part of what our brain does. What does it look like when we’re able to offload (partly – I don’t want to think about fully, yet!) generative capacity?

In my amateur reckoning, we have some of that off-loading in terms of generative capacity already. Steve Jobs implored Bill Gates to try LSD for pretty much that exact reason, I think. We use things like Artefact Cards[5a] and try rituals like brainstorms to nudge and assist that generation process, but, I don’t know, they feel… too sparsely distributed? Not quite here yet? Not as easy to use, not as at-hand, not… culturally acceptable yet? I can imagine a situation where a copywriter could use Russell’s tagline bot to help her come up with a tagline for a brand that does really well – and then get in trouble for admitting she used it? But these days, we’d say that the market would decide: and I’m pretty sure that within a decade or so, *not* using a bot like that would be a bit like stubbornly *not* using Excel.

(An aside: it’s OK to use Excel because Excel isn’t stereotypically ‘creative’, it’s just mindless calculation but holy shit we’ve seen very creative spreadsheets in our time, haven’t we, and we’re just asking Excel to do the dumb stuff)

I went down the wrong path on Twitter yesterday about *spotting* new associations, but I think what I’m more interested in is *making* new associations: the generative capacity of neural nets – or whatever – in terms of being inspiring or, in other cases, downright providing a solution. Autodesk already has Project Dreamcatcher[6], which lets designers input design objectives and functional requirements, procedurally generates/explores the solution space, and then allows a human to whittle that space down and, ugh, curate it.
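
Dreamcatcher itself is a proprietary research project, but the generate/explore/cull loop it embodies can be sketched: propose lots of candidate designs, throw out the ones that fail the functional requirements, rank the survivors by the objective, and hand a shortlist to the human curator. Everything concrete below (the beam, the area requirement, the perimeter objective) is invented for illustration:

```python
import random

def propose(rng):
    """A hypothetical 'design': here, just width and height of a beam."""
    return {"width": rng.uniform(1, 10), "height": rng.uniform(1, 10)}

def meets_requirements(design):
    """Functional requirement (made up): cross-section area of at least 20."""
    return design["width"] * design["height"] >= 20

def objective(design):
    """Design objective (made up): minimise material, i.e. perimeter."""
    return 2 * (design["width"] + design["height"])

def explore(n=1000, shortlist=5, seed=1):
    """Generate n candidates, cull to the feasible ones, and return
    the best few for a human to curate."""
    rng = random.Random(seed)
    candidates = (propose(rng) for _ in range(n))
    feasible = [d for d in candidates if meets_requirements(d)]
    return sorted(feasible, key=objective)[:shortlist]
```

The machine does the tireless generating; the human does the whittling. That division of labour is the whole point.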

This generative capacity is *not going to go away*. I was chastised by Greg Borenstein into remembering that we obsess over the more recent novelty and imagine it to be more novel than it turns out to be, and yes, I take that point. So I can see a middle ground: maybe a massive influx of generative capacity won’t *completely* remake the world. It doesn’t have to. But it certainly seems like it can change the world *enough*, and not in a replace-the-humans way; I don’t feel the need to have to go that far right now. But: what does it mean when that generative capacity is just out there, *all the time* for whatever we want to apply it to? We explained Wikipedia away in the early days as a consequence of cognitive surplus. A cognitive surplus, though, is a fairly big bucket. What does a generative surplus look like? What does a good-enough generative surplus look like?

An argument, of course, is that we have a generative surplus already. There are enough people writing books and getting them out through Kindle publishing. We already have more tweets than we can read. I’m not saying that those things are bad! But now we’re potentially on the cusp of a world – well, in some cases that future has already poked through – where my cognitive surplus that’s diverted into quipping on Twitter is competing with a *bot* that is generatively quipping on Twitter, and now you can make an argument that bots are stealing our leisure time, too? I mean, yesterday I was just thinking about what an algorithmically generative surplus would look like from a professional/employment point of view, never mind the fact that a bunch of my favorite Twitter accounts – ones that genuinely make me laugh – are, to coin a phrase, “Designed by a Human Brain, Made in Electrons”.

Sticking the landing? God, I don’t know. Isn’t this just… interesting? And you know, vaguely threatening to some people?

I made a joke a long time ago about unions outlawing generative neural nets. I wonder.

[0] Bots and humans: the new interactive tools that could help creatives
[1] taglin3r (@taglin3r) | Twitter
[2] Writing with the machine
[2a] sweet dreams and goals and objectives and demands
[3] [NSFW] yahoo/open_nsfw: Model and code for Not Suitable for Work (NSFW) classification using deep neural network Caffe models
[4] NSFW Image Synthesis from Yahoo’s open_nsfw
[5] Everything is a Remix
[5a] Play With Ideas | Artefact Cards
[6] Project Dreamcatcher | Autodesk Research

11:29am. Bishop is about to open the box inside Playtronics. Let’s hope he manages to get the item.

Your notes continue to be wonderful. If you’re new: hi! If you’re not: hi! And if you’d just like to say hi, say hi :) We’re built to connect.

Best,

Dan