s12e27: Just Put Dr. Ian Malcolm On Our Tombstone Already
0.0 Context Setting
It's Thursday, June 23, 2022 in Portland, Oregon and we're coming up to one of the two holidays that I consistently remember here: the 4th of July, or what I sometimes like to call Rubbing It In Day.
In the past, I've also said something like "hey, how's that going for you?" or made a reference to California coming back to the bosom of the United Kingdom as portrayed in the future historical documentary Demolition Man, but it's not like the U.K. is a model of progressive government to aspire to right now either.
1.0 Some Things That Caught My Attention
What is the point of OpenAI, again?
Yesterday I very briefly linked to Amazon's demo of Alexa mimicking voices, which obviously is an interesting idea, and now (via The Verge1) there's a video of the demo from the company's conference! The video wasn't available yesterday, so now there's something for me to watch. Exciting!
Before I get started, I've been wondering for a while: what's the point of OpenAI?
Here's what I see OpenAI has done:
- come up with a bunch of language models like GPT-3 that people get to use (powering over 300 applications, as of March 25, 2021)
- come up with a bunch of approaches to image generation, like DALL·E and DALL·E 2
- and most/least fun of all for someone like me, made it super hard or downright inaccessible to play with the models when they're released: a bunch of waitlists, priority given to some people (reasonable), and priority criteria that are opaque (feels unfair)
Now, what I thought OpenAI was about was making sure that the use of AI was beneficial to humanity and society, and that one of those goals was to be a canary in the coalmine against doing stupid shit like, er, letting you mimic any voice you like without much consideration? Or any, it feels like? Or making it look like such an implementation comes without ethical or societal consideration?
So. Here's where I need to remember to read what OpenAI says about itself and compare it with what OpenAI is actually doing, see if there's any difference, and try to make clear why I'm uncomfortable about the whole thing. Other people who have deep experience, like Timnit Gebru, have also asked what the point of OpenAI is.
OpenAI's goal, from their charter2, is essentially this: build a safe and beneficial artificial general intelligence (more or less "a god"). That's it. That's the goal. There are some caveats or, if you're a pirate, some guidelines for them to follow, like "it's okay if we happen to help others along the way", and some other principles which, we have to remember, in this day and age (and forever, really) are more like "well, we intend to uphold these, but we don't have to". Because they're not laws.
Anyway. I had thought that OpenAI would be about leveling the playing field not just for the application of artificial intelligence, but also for assessing the value and harm and understanding how we (i.e. societies) might choose to apply it. But we don't quite live in an environment like that, so instead what practically happens is this:
- OpenAI researches and develops models
- It charges for access, via an API, for those who wish to use the models on-demand (see the sketch after this list)
- Sometimes it provides open access to the underlying code and model
- Sometimes it doesn't, for reasons
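To make that "charges for access via an API" point concrete, here's roughly what on-demand access looked like around mid-2022 with OpenAI's Python client. This is a minimal sketch, not their recommended usage; the model name and prompt are just illustrative:

```python
# A minimal sketch of on-demand access to a hosted model via
# OpenAI's Python client, circa mid-2022. You pay per token;
# the model weights never leave OpenAI's servers.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-002",  # illustrative model choice
    prompt="Summarize the OpenAI charter in one sentence.",
    max_tokens=60,
)

print(response["choices"][0]["text"])
```

Which is the whole business model in a dozen lines: you rent the output of the model, and whether you ever get to see the model itself is entirely up to them.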
All of this needs to be read against the primary mission of OpenAI: to create an artificial general intelligence (i.e. a "god", remember), because that primary mission means they'll always evaluate concerns like "consider societal impact" or "advocate for safety" after, or in a manner subservient to, "summon our god".
Look, I keep saying "summon our god", and that's not entirely fair to a number of people, and yet there are also a bunch of people who really do believe that an artificial general intelligence is sort of a cheat code, or something that will be This One Trick Ensures Your Species' Rightful Place As Inheritor Of The Universe, or This One Trick Will Bring About Peace And Heaven On Earth For Everyone.
And I get it, I come across as a cynic, but I also want to be a long-term optimistic cynic. I see the value of all of this, and I just want it done with the least harm possible.
So to bring this around, here's what happens: OpenAI (and others -- it's not just them) are involved in an arms race to figure out the awesome stuff you can do with machine learning, essentially throwing statistics and burning carbon at problems or tasks that we don't want to do because they're too expensive (i.e. we don't want to pay humans to do them), or we find them distasteful (i.e. we get to create some distance), or we want to operate at scale (i.e. we want results quickly, for as many people as possible at once). So we end up, in general, with shiny "look at what we figured out how to do" demos that are intrinsically and intellectually quite interesting, and also the equivalent of a preschooler getting to show off what they've discovered. And I get that! That's exciting! We discovered a new thing that does something!
But there are accelerating, ratcheting discoveries or considerations about, let me put on my Jeff Goldblum voice, whether we should.
This isn't OpenAI's fault: it's not their (sole) place to be a (privately funded) institution that really advocates for responsible use of AI. That's other people's jobs (which raises the question: if everyone thinks it's someone else's job...). It's also a function of an environment in the West that I'll call The General Lack of Consequences Before Real Consequences Happen, which is where we could have mechanisms to regulate our collective behavior (government: have you heard of it?) and yet for various interconnected and systemic reasons, we choose not to put costs on those behaviors and instead appear to be encouraging fucking-around-and-finding-out.
What I mean is this: we've known about what I'll handwave call style transfer in voice and that mimicking someone's voice is possible, and if you're a corporation that has a sizeable percentage of all the flops on the planet then you're going to look for stuff for them to do, and to justify collecting even more flops. So you see that it's possible to mimic a voice, you see that you've got a whole bunch of things in a whole bunch of places that are microphones and speakers and really, the pull is irresistible. Why wouldn't you hook up microphones to flops to speakers and then mimic voices? What possible reason would you have to not do that? Who's going to stop you?
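And just to underline how irresistible that pull is: the barrier isn't even "a sizeable percentage of all the flops on the planet" anymore. Here's a hedged sketch of the general technique using the open-source Coqui TTS library and its YourTTS model, which does zero-shot voice cloning from a short reference recording. To be clear, this isn't Amazon's system and the file paths are ones I've made up for illustration; it's just an open-source equivalent of the same idea:

```python
# A sketch of zero-shot voice mimicry with the open-source Coqui TTS
# library (YourTTS model). Given a few seconds of someone's recorded
# speech, it synthesizes new speech in an approximation of their voice.
# File paths here are illustrative, not from any real demo.
from TTS.api import TTS

# Download and load a multilingual voice-cloning model.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")

tts.tts_to_file(
    text="Okay, which bedtime story would you like tonight?",
    speaker_wav="grandma_sample.wav",  # short reference clip of the target voice
    language="en",
    file_path="cloned_voice.wav",
)
```

A laptop, a library, and a few seconds of audio. That's the whole moat.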
In practice? Nobody, that's who. Not regulators (just pay them). Not your customers, because in reality they'll just go along with it. No, a lot of people would have to die, and even then bets are off as to whether you'd suffer enough practical financial consequence to stop doing the thing. And anyway, it's on to the next thing before too long.
So, to come back. If the point of OpenAI was to help figure out the safe application of AI in society through not just making it available, but also making it easy to assess the impacts, then... it's not doing that. I thought it was doing that. It is very much not.
Okay, that's it for today.
If you keep reading past this, there's a short bit about a supporter drive I'm doing.
How are you doing?
Best,
Dan
Normalize Making Supporter Drives Less Awkward
Look, asking for money is awkward and embarrassing, so I'm trying to model good behavior here.
Every single episode of this newsletter is free, and if you get something out of it, then become a paid supporter, at pay-what-you-want...
... or expense it and have your boss pay! Professional development? Go for it. Budget for books and research? Done. Here you go:
Paid supporters of any kind get a free copy of Things That Caught My Attention, Volume 1, collecting the best essays from the first 50 episodes.
For everyone else, you're not left out: free subscribers get a 20% discount on the ebook, too.
1. James Vincent, "Amazon shows off Alexa feature that mimics the voices of your dead relatives", The Verge, June 23, 2022
2. OpenAI Charter, OpenAI, published April 9, 2018