s5e04: Division by Infinity
0.0 Station Ident
I started writing this on Saturday, 30 September, 2017 (but I’m writing these parentheses over two weeks later on Wednesday, 18 October (and I'm writing *these* parentheses on Thursday, 19 October in Montréal, QC)).
The constant entanglement of atoms (mostly on my exterior, but a fair few, I imagine on the inside, too) that make up my Ship of Theseus recently passed through a reference frame characterized by a local peak in the probability distribution of stroopwafel.
I’m on flight DL179 and you can imagine the chatter over the radio: cruising at 11,000 meters with a strong tailwind of 117 km/h. We’re in the pipe, five by five with a ground speed of 996 km/h (so close to that magical 1,000).
All of which is to say: I’m on my way home. I’ve spent the last (counts) four days in Norway at a retreat organized by Andy Budd and Clearleft to talk about the future of artificial intelligence[0].
Needless to say: I have thoughts.
Well, that’s not really accurate enough. It’s true that I have thoughts. I also have actions, though, because a large number of the thoughts that were had strongly implied the need for immediate action to be taken.
For those who’ve been reading along for a while, it’s probably a trope of mine to say that I didn’t have the time to make a particular episode shorter.
For the avoidance of doubt: I decided not to spend the time to make this newsletter shorter. There are many, many, many thoughts in my head, sticking out in different directions like crystalline shards jammed into a brain the consistency of jellyfish and they’re positively *vibrating* with potential connective energy.
So my strategy is this: I shall start documenting these thoughts because if I don’t, I’ll *never* be in a position to organize them and express them better.
I’m not doing this in chronological order. These are the thoughts and themes in my head, as and when they make sense to me right now.
[0] The Artificial Intelligence retreat team. | Clearleft
1.0 Let’s start at the very beginning
One thing that’s stuck with me throughout this newsletter (since January 2014! Over 293 episodes!) is the concept of the Californian Ideology[0] - the beliefs hovering over the U.S. West Coast like a sort of venture-backed word cloud. These beliefs include:
* some sort of libertarianism (fewer regulations are better)
* prioritization of individual freedom (individual freedoms are more important than the collective, but this falls apart under scrutiny)
* but some collectives are good (namely, whichever ones we’re in, as opposed to the out-group’s collectives)
* a bias toward bottom-up standards, if not outright hostility to top-down standards (more on this later)
* belief in the undeniable positive power of the network: Metcalfe's law as, uh, stated by Wikipedia isn't just about the *number* of unique connections in a network; it says that because connections are valuable (citation needed, etc.), *more* connections make for more *valuable* networks (see the formula sketched just after this list)
* utilitarianism: as with our success in science, the physical world can be abstracted and understood in the form of equations. Morality and ethics are no less subject to abstraction and mathematical precision than falling feathers and lead weights
* a sort of root-level vulnerability involving how free speech is considered; and
* technology as tool, neutral and separate: tools are (and should be) considered apart from their users.
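For concreteness, here's the textbook form of that Metcalfe's-law belief - this is the standard statement of the law, not anything specific to this episode:

\[ V(n) \propto n^2, \quad \text{the usual justification being that the number of unique pairwise connections among } n \text{ users is } \tfrac{n(n-1)}{2} \approx \tfrac{n^2}{2} \]

It's exactly the kind of tidy abstraction the rest of this episode is going to pick at.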
The Californian Ideology also inherits a bunch of concepts from a related predecessor - the manifest destiny that powered, and after the fact described and attempted to justify, a peristaltic wave of colonization toward the western borders of the continental United States. One story you can tell about this is that the people who made it to California are Doing God’s Will By Making The World A Better Place.
This wave of colonization and development resulted in all manner of evil, not least because there are many things you can justify doing when you’re doing God’s Will.
So, over 3 years and 290-odd episodes’ worth of newsletters, I’ve developed and thought about the idea of the Californian Ideology being one of the causes of the problems (wider/whiter) society is now having with the products of Silicon Valley.
Here’s a quick example: Connecting People is an Unalloyed Good, therefore it justifies Moving Fast and Breaking Things. Don’t worry. I’m not going to leave this hanging.
And yet...
Something is missing.
(Longtime readers will know that I write this newsletter less for you and more for me: it’s a way for me to think out loud because, well, I learned that I do my thinking through talking. In the absence of being surrounded permanently by a cloud of appropriately stimulating conversation partners (I see you lurking there, Twitter, and I’m not talking about you), this kind of writing has been super helpful. But I’ve also written here about me, because, well, I’m the one having these opinions. Those of you who’ve stuck around and sent notes have strongly said that what makes this newsletter interesting, useful and compelling for you is precisely that mix of personal and professional. So: for the newbies, here’s the warning. Sometimes this gets personal, and I think in this episode, I’ve finally figured out why that can be “good” - or useful, at least.)
I feel something is missing because I'm an optimist: I believe in science, technology and progress. We *have* made progress (my socially progressive, fiscally conservative, neo-liberal friends - and these days, I’m not afraid to count myself as a neo-liberal, provided I get to explain what it means to me - will point to any number of Gates Foundation reports saying that, globally, *things are better*). But for an optimist like me, that progress isn't (and will never be?) good enough.
Because what *I* see is the delta between the-good-that-could-be-achieved versus our current execution. That delta is *painful* to me - it irritates me, gets stuck in my head and makes me want to do something about it.
I know people who’re wired up such that the knowledge that we *could* feed everyone but don’t is painful and motivates them to do something about it - you might feel that way too. I feel that way about what our current tech *could* do for us, but isn’t doing.
Everyone deserves better. We - the well-educated, predominantly middle-class readers of this newsletter - deserve better and of course those who’re less fortunate deserve better, too.
We can do better. We must do better.
Some people might say that this set of beliefs and desires might, broadly, make me a humanist.
[0] Californian Ideology : Things That Have Caught My Attention
2.0 How do you solve a problem like AI?
I bring all of this backstory up because at the start of the retreat, as a sort of grounding exercise, we were asked to think about two questions:
1. What three truths would we [individually] want the world to know and understand about AI?; and
2. What are three questions or problems about AI that would need answering or addressing, for us to achieve the world we want?
At first I was worried that these were the kind of questions I used to face when I was studying law at university. They’re essay questions, right? But I sat down in the middle of a ridiculously beautiful valley, read the questions again and instantly knew that for at least one of them, I had an answer that I wouldn’t be able to let go of.
These two tasks ended up reinforcing my belief that talking about artificial intelligence in this way (as a distinct concept in its own right) is a sort of category error.
For starters, the *only* reason we’re talking about artificial intelligence in the first place is because we’re concerned about its potential effect in the world.
But if we’re concerned about artificial intelligence, then should we not also be concerned about other technological influence upon the world?
Let’s take a look at the second question. It’s made up of two parts. First, “what are three questions or problems about AI that would need answering or addressing” - but this is subordinate to the qualifier “for us to achieve the world we want”.
The recognition here is that AI is a tool or mechanism that helps us move from one state (the world as it is now) to a desired new state (“the world we want”).
In other words, answering this question has hardly anything to do with AI.
Instead, it has everything to do with figuring out the world we want.
3.0 Technology will save us
Fork, and bring in a new entry point.
Used to be, twenty, thirty, forty years ago, you could be optimistic about technology and the future *in abstract*.
In 1979, the year I was born, Usborne published the Usborne Book of the Future[0]. For people of a certain age in the UK, it became a sort of shared talisman of futuristic optimism. Decades later, people would bond over this book’s presence in their childhood.
In particular, you could be optimistic about computing and networks *in abstract*. Their potential was as-yet unrealized. The dream of instantaneous, cheap or even free video communication was something that was coming, but hadn't arrived yet. We were excited about what the network would bring. For a world that considered digital watches to be a pretty neat idea, we were incredibly excited about the potential for full-on computers on wrist watches[1].
Imagine, then, the people who didn't quite fit in in the present, and who instead were, to various degrees, excited about and able to retreat into imagined futures where *things would be fixed* and *everything would be better*. Better to imagine living in an equitable future of escape and technological intervention than worry about being bullied at school. I’m not the only one who has fond memories of the Stanford Torus[2] - so much so that as a grown-up, I went and got two of them framed[3, 4] - but look at who’s not in those images.
Look: it was the 80s. For us children, we might escape from, well, the reality of growing up into music, comics, or satanic rituals like tabletop games and roleplaying D&D. Or an optimistic technological future. And the adults? The adults were living under the spectre of nuclear annihilation. If technology was going to destroy us, then we'd better damn hope that it might provide some way toward our salvation.
And then, as they say, the future happened.
Or, more precisely, a generation that grew up with camera-ready, set-dressed, surface-level-equitable, optimistic futures went to work delivering them, because we had grown up with them. They were better, and they were the future we wanted to escape to.
And - broadly - what we told ourselves along the way was that we didn't need to think about the impact, didn't need to think about what it might be like *to actually live in this future*, because we took it as read that the future would be good.
Spoilers: it was more complicated than that.
I count myself among those technological optimists.
(And as an aside: notice the power and fallibility of telling a story about this. For all the escapism into technological utopianism, there’s as much art and culture about dystopian technological futures from the same period. The horrible future of Judge Dredd appeared in 1977 too.)
[0] Usborne Book of the Future 1979 (pointlessmuseum) : Free Download & Streaming : Internet Archive, The Usborne Book of the Future
[1] This 1981 Computer Magazine Cover Explains Why We're So Bad at Tech Predictions | Time.com
[2] Stanford Torus Space Settlement
[3] 20x200's Torus Cutaway AC75-1086-1 5725 - 20x200
[4] 20x200's Torus Interior AC75-2621 5718 - 20x200
4.0 d(ROU) Consider The Implementation Details
This is going to sound terribly trite but, to me, it feels super important.
There's a qualitative difference between:
a) technology in the abstract and theoretical; the technology in potentia of imagining a world of global, networked citizens
b) technology in practical reality; the delivered fact of 2 billion monthly active users and 1.15 billion mobile daily active users on a single social network
In scenario (a) you get to imagine that egalitarian vision of humanity unbound, sharing and connecting. Billions of individuals participating in a marketplace of ideas is a good thing because you *don't have any data yet*. (I mean, you don't have any data yet about that particular thing. You have a bunch of historical data upon which you may decide to base priors about the potential value of a future global network, but that’s a whole other rabbit hole...)
In scenario (b) you get confronted with the cold hard reality of, well, 2 billion monthly active users and 1.15 billion mobile daily active users on a single social network.
Regardless of scenario, in the end, we won. Everyone (yes, for certain values of everyone) got connected, the trend’s there, it’s unstoppable now.
Doesn't matter how. (Well, not as much). All that matters is that it happened, and we're here now. I mean, you can circle-jerk what-ifs, like imagine if Apple's Ping was the dominant social network (always good for a laugh), but of all possible worlds, *this* is the one we can influence.
Moore's law, late-stage capitalism, neo-liberalism, globalisation, horrendous real-time location-independent arbitrage in the relative contextual economic value of labour - *we're here now*.
If we were good Bayesians, we'd update our priors.
We'd take a good, hard serious look about all the *optimistic* things we thought about the future and say: well. Is it true? Was it an unalloyed good that we connected everyone? Where did we fall short?
I am, of course, cheating. It’s not fair to compare actual progress with the perfection of unalloyed good. So perhaps, a kinder question is this. So far, has the network, worked?
Was John Perry Barlow overly optimistic when he published a declaration of the independence of cyberspace in 1996?[0] 20 years later, he says that he “would have been a bit more humble about the ‘Citizens of Cyberspace’ creating social contracts to deal with bad behavior online.”[1] (In 2016, Barlow goes on to say “the fact remains there is not much one can do about bad behavior online except to take faith that the vast majority of what goes on there is not bad behavior”[ibid] - which, *holy shit*, is a whole episode in itself.)
So here’s a post-rationalization story, because we love stories and they hook into us and won’t let go - stories use some sort of privilege-escalation/legacy-trust vulnerability in our oral information-transfer channels to update our reasoning and priors about the world without, well, evidence. *Here’s* a story for you: a bunch of people who were predisposed or inclined to withdraw from living in the moment, because it was painful, retreated to an imagined better, safer future, then went and attempted to create it - and this time, *this time*, the information-hacking tools grew up at the same time as that cohort. And, well. Here we are.
The problem here is that now we need to deal with the practical implications of that future, because the practical implications include, to coin a euphemism, “inadequately considered side-effects”.
The NASA illustrations of orbital habitats that some of us dreamed of escaping to? Gated communities - in spaaaaace!
Global networks promising open and democratic exchange of ideas presupposed there would be no bad actors. There were no trolls in the book of the future. (Contra: there *were* trolls in the future, and Vernor Vinge wrote about galaxy-spanning Usenet trolls in 1992’s A Fire Upon the Deep[2].)
Much of science fiction and futurism neatly skirted over the messy *human* details (citation needed). Inequality and inequity had disappeared - a sort of unspoken "a proof exists, but I couldn't fit it in 140 characters". (Unamusingly, between starting to write this and hopefully finishing it, this joke has been superseded by reality.)
But hey. We were only children then. What did we know.
[0] A Declaration of the Independence of Cyberspace | Electronic Frontier Foundation
[1] How John Perry Barlow views his internet manifesto on its 20th anniversary
[2] A Fire Upon the Deep - Wikipedia
5.0 Divide by Infinity
(I swear I will get back to ‘the AI thing’ soon. Trust me, it all fits together.)
So why do AFGAM (Apple, Facebook, Google, Amazon, Microsoft) - and some of these more than others - persist in pushing their vision of a more connected future despite evidence that significant roadbumps ("implementation details") continue to mar the journey?
One reason, I think -- and I'm indebted to Cennydd Bowles[0] for helping my thinking on this -- is that while utilitarianism is appealing, in reality, it is at best unhelpful and at worst, is morally problematic, causes harm and exists as a convenient shield or justification for behavior.
(For the avoidance of doubt: I’m clearly an amateur, armchair ethicist. Professional ethicists are available, etc.)
Here's the gist, which is more or less based on my intuition (see! Citations needed!) from having worked directly with Facebook, been in meetings with founders and executives, and taken an unhealthy interest in their public pronouncements. My model is to take Facebook as an example of the current Valley class, albeit one sitting at the top of the pile.
Here's what Mark Zuckerberg said on June 27th, 2017:
As of this morning, the Facebook community is now officially 2 billion people!
We're making progress connecting the world, and now let's bring the world closer together.
It's an honor to be on this journey with you.
- Mark Zuckerberg, https://www.facebook.com/zuck/posts/10103831654565331
I treat Facebook as a sort of real-world instantiation of Metcalfe's law: connections are good. More connections are better. Facebook has already connected 2 billion people. This is AWESOME! It deserves a SUPER-PUMPED FIST BUMP! Much value has been created, not only for shareholders (who include pension funds for regular people!) but also for the bottom of the pyramid!
And yet:
* On September 14 2017, ProPublica found that Facebook's ad targeting software lets advertisers reach self-identified "Jew Haters": https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters
* On October 28 2016, ProPublica found that Facebook's ad targeting software lets advertisers exclude users by race: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race
On the right hand side of the Facebook utilitarian equation is the following[1]:
COSMO (Jesse Eisenberg) has cornered BISHOP at a secret location.
COSMO: Posit: the value of a network is proportional to the square of the number of connected users.
BISHOP: Consequence: people start companies to connect everyone in the world.
COSMO: Result: the networks with the most users become valuable.
BISHOP: Conclusion: networks will do anything to get more users.
COSMO: Bzzt. I've already done that. Maybe you've heard about a few? Think bigger.
BISHOP: The network disrupts the stock market.
COSMO: Yes.
BISHOP: News media?
COSMO: Yes.
BISHOP: Currency market?
COSMO: Yes.
BISHOP: Commodities market?
COSMO: Yes.
BISHOP: Small countries?
COSMO: Twitter. Not me. But, yes.
BISHOP: Large countries?
COSMO: Where were you last November? With luck, I might be able to crash the whole damned system. A better world. No more rich people, no more poor people, everybody's the same, everybody's connected. Together. Isn't that what we said we always wanted?
BISHOP: Cos, you haven't gone crazy on me, have you?
COSMO: Who else is gonna change the world, Marty? Greenpeace?
BISHOP laughs. COSMO is SERIOUS and BISHOP’s face DROPS. BISHOP: Seriously... you *are* crazy...
What I'm saying here is this:
* Facebook treats as a *law*, as an unquestioned truth, that the *value* of a network increases in proportion to the square of the number of connected users
* Facebook's network has been increasing in number of users
* Because value (undefined!) increases, the goal is to capture *all* of the users to deliver *all* of the value
* Any delta of currently realized value versus value in potentia (i.e. the value of a network with all possible users) is, effectively, infinite, because the right-hand side of the utility equation is: all users who may ever exist, connected.
You end up with an infinity on the right-hand side. The utility function with which you evaluate whether you should do things - whether consciously or unconsciously - has a back-door vulnerability because *you let infinities in*. Once you let infinities in, you *can't* have a useful utility function, because an infinity means that the ends justify the means.
This is why friends don't let friends play with infinities, kids. They mess you up.[2]
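Here's a toy sketch of that failure mode - this is my illustration, not anyone's actual decision procedure, and the numbers are made up:

```python
import math

def expected_utility(p_success: float, benefit: float, cost: float) -> float:
    """Naive utilitarian calculation: probability-weighted benefit minus cost."""
    return p_success * benefit - cost

# With a bounded benefit, the calculation behaves sensibly: a big enough cost
# (or a small enough chance of success) flips the decision.
print(expected_utility(p_success=0.01, benefit=1_000_000, cost=50_000))  # -40000.0 -> don't do it

# But "all the users who may ever exist, connected" smuggles an infinity into the benefit term.
print(expected_utility(p_success=0.01, benefit=math.inf, cost=50_000))   # inf -> do it
print(expected_utility(p_success=1e-12, benefit=math.inf, cost=10**15))  # inf -> still do it

# Once the benefit is infinite, no finite cost and no tiny probability ever changes the answer:
# the ends justify the means, every single time.
```

The point isn't that anyone literally runs this code; it's that the same structure - an unbounded term on the benefit side of the ledger - quietly dominates every trade-off made downstream of it.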
This means it's OK (tolerable? acceptable?) to build a product that (accidentally? unintentionally?) allows people to exclude audiences by race (which, in the case of rental properties, is illegal in the US), or to target audiences by, well, values that society in general might not want to reward or recognize.
What was the harm, anyway? In the Jew-hating ad-targeting instance, ProPublica themselves admitted that the audience would only have amounted to 2,274 people so the product "helpfully" suggested additional targeting criteria to increase the addressable audience. Some might argue that the direct harm would only have been, what, 2,274 people?
In conversation at our retreat, Bowles suggested that the combination of measurable KPIs and utilitarianism leads to a sort of evil (or, I guess, “morally and ethically problematic”) singularity. On the one hand, Facebook can point to its fanatical devotion to KPIs (at a high level, ones like MAU and DAU) as being instrumental to its growth and current mind-boggling position as the world's "most valuable" network. But on the other, what of all the Key Performance... somethings for which there are no numerical Indicators? What of all the qualitative values? And have we properly interrogated whether we're even considering the *right* numbers? The harm in this case isn't 2,274 people; an *equally valid harm* is the signal of *allowing such targeting* sent to, oh, 7.6 billion people. How do you account for direct and indirect harms, then? How do you account for externalities?
Of course, it doesn't matter: the value of the network increases in proportion to the square of the number of connected users and, provided humanity fulfills its - as Bostrom would put it - cosmic destiny (a sort of "science-approved" manifest destiny where the raw materials of the entire addressable universe are available to humanity), then the expected future value of the network is related to the square of a practically infinite number of future users. (Yes, I know the addressable universe is not infinite.)
[0] Cennydd Bowles
[1] Sneakers (1992 film) - Wikiquote
[2] dan hon is typing on Twitter: "11) this is why friends don’t let friends play with infinities, kids. They mess you up. Anyway."
6.0 Uncaught Exception: NaN. Continue? y/n
As an armchair ethicist, I think I get it. We have to make decisions, and utilitarian equations help us make them. Sometimes we have to act, and we can’t help everyone. We don’t live in fully-automated luxury communism.
The utilitarian equation provides an answer, but is it a useful one? Is it *always* a useful one?
With my armchair hat on, because I love mixing metaphors: the utilitarian abstraction doesn’t work and breaks down far too easily. And by “abstraction” I mean the utilitarian algebra that allows the mathematical and logical manipulation of symbols that lets us, amongst other things, compare different approaches to ethics.
But! This is all wanking about with abstract symbols. Doesn’t this only become *practical and useful* when we’re able to substitute in usable, useful values for *v*? (“Siri, search the web for ‘applied ethics’”[0])
In physics, abstraction works because we get *useful* things from imagining there's no atmosphere or there's no friction and suddenly a feather and a lead weight can teach us things about the world. Newton will get you to the moon on an abstraction.
I think the mistake here is a fundamental category error. The utilitarian equation (and by extension, the (imaginary, theoretical) usage of utility functions and symbols like goals and values in evaluating and constraining the behavior of speculative godlike AIs) *wants* to treat the world as if there are mathematical, rational, existing-as-a-fundamental-truth-about-the-universe aspects of the human condition. But there aren’t.
To paraphrase Sean Carroll and his poetic naturalism stance[1, 2, 3]: the maths that describes (and predicts) the behavior of the universe at the quantum mechanical level - core theory - is real and true. Abstractions built upon the core theory of quantum fields, like the concepts of gases and liquids, are useful because they still describe and predict behavior applicable to that layer of, er, reality: the one that we experience on a human scale.
Love and compassion, morality, duty, thinking that we should do something to alleviate the suffering of those in pain? Those are not the same as the forces in the standard model. They are not real in the same way, but they *are* real and useful to humans, which is what we are.
Not only are they real and useful to humans because they guide our behavior and influence the world we inhabit; *because* we live in a universe where ethical concepts don’t exist in the same way that the fundamental forces or quantum fields do, *we* get to decide what society we want. As yet, there is no experimental evidence nor theory that allows for the existence of Interstellar’s universe bound by love.
An equation (and thus “algorithms”) will not tell us what we should do and what course of action we should take ab initio because the equation includes terms that require defining at a completely different (and yet real) level of abstraction: an inherently human one.
In other words, and I'm really sorry for this sounding so trite, but I believe it to be *true*: part of what makes humans *human* is our own choices and decisions about our values. And while we can agree that there can be an equation that produces the expected utility of an action, for that equation to work in society, we all (or enough of us) have to agree, at least to an order of magnitude, on the values used by the equation.
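To be concrete about what "an equation that produces the expected utility of an action" looks like, here's the standard textbook form (nothing here is specific to Facebook or to AI):

\[ \mathbb{E}[U(a)] = \sum_{o} P(o \mid a)\, V(o) \]

The probabilities \( P(o \mid a) \) - how likely each outcome \( o \) is, given action \( a \) - are at least in principle empirical. The values \( V(o) \) are not: they're the human-level terms we have to choose, and agree on, before the machinery tells us anything.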
[0] Applied ethics - Wikipedia
[1] Poetic Naturalism - Preposterous Universe
[2] Poetic Naturalism (Sean Carroll) - YouTube
[3] Maybe You're Not an Atheist–Maybe You're a Naturalist Like Sean Carroll | WIRED
7.0 Consider The Spherical Cows
Technologists (caution: generalization) have a tendency to want to represent real things as models and, better, abstract models. This has worked elsewhere: I wrote above that physics has been stupendously successful at *throwing away data* like "friction", imagining how things might behave under such abstractions and in the process learning true (and useful) things about the universe.
But when we try to do these things with *human-scale* phenomena, things - I think - start falling apart. Facebook *kind of* understood this, because one of the human-scale phenomena it was created to deal with (e.g. getting laid at college) meant that it included the radical third option of "it's complicated" for the traditionally modeled binary of relationship status (in a relationship / not in a relationship).
The problem comes when we try to use abstractions in the wrong direction: a necessarily constrained model (like, say, a relational database, or thinking about gender as a BOOL when it’s more of a float, or better yet an n-dimensional vector) attempts to reflect a necessarily messy reality and, to varying degrees, messes things the fuck up. (Again, notice! These problems are small and affect relatively few people if technologists aren't successful, but I argue they become societal problems once a certain percentage of people become users.)
Look: we *think* we know that we're capturing the "important" information about gender, but that's necessarily a human determination and, history shows us, we (the technology community) have not been great at understanding that the information representing gender in the real world is significantly more multi-dimensional than "male" or "female".
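A toy sketch of that mismatch - the field names and categories below are mine, purely for illustration, not anyone's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# The constrained model: gender as a BOOL. Easy to store, index and query -
# and it silently throws away everything that doesn't fit the column.
@dataclass
class UserV1:
    name: str
    is_male: bool  # the only thing this schema will ever know about gender

# A (still imperfect) richer model: self-described, optional, multi-valued.
# More awkward for the database; less awkward for the humans it describes.
@dataclass
class UserV2:
    name: str
    gender_self_description: Optional[str] = None       # free text, owned by the user
    pronouns: List[str] = field(default_factory=list)   # e.g. ["they", "them"]

alice = UserV2(name="Alice", gender_self_description="non-binary", pronouns=["they", "them"])
print(alice)
```

Even the float or n-dimensional-vector version is still a model; the point is that the shape of the container determines what the system can ever notice about the people inside it.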
Here's another example which might not seem like it's about AI, but is, because it's about how we as a species develop technology and understand its effect on our societies.
Andy Budd, our retreat's instigator, told me about the event in a Twitter direct message. It so happened that the last time we chatted in DM was in 2010 (we are all so old now), when I spoke at dConstruct. I mention this because I think it's just goddamn funny that the thing I spoke about back in 2010 is as true now, if not more so, and relevant to anxieties about accidentally (or intentionally) bringing about an existentially threatening godlike AI.
What I talked about, in part, was this: there had been undeniable benefit to our zeal in digitizing everything we could get our hands on. It genuinely democratized access to information and we continue to reap the benefits. Literature, art, education, music, film, news: all the physical information produced by humans wholesale translated into ones and zeros and shipped around the world at 3.0 × 10^8 m/s.
But what did we lose? We digitized the content but not the container. Because we thought the *content* was what mattered, right?
I told a story that my wife and I would occasionally get cross with each other because each of us thought the other was just dicking about on their phone: because you can't tell what someone's doing. Our post-2007 magic-window iPhones are containers whose design is the epitome of silent discretion. They say nothing about what they contain; they leak hardly any information about activity. People make jokes about this - the Kindle covers that make it look like you're reading something erudite.
Growing up in a middle-class family of academics, every morning my brother and I would see my parents reading the daily newspaper at the table. They were, as we would say now, Consumers of News Content. Think about what I learned over the course of, well, at least 30 years: every day my parents would spend time reading something, so it must have been important. The thing they chose to read came in a certain format: large and narrow - broadsheets, not tabloids. They got at least two newspapers - normally The Times, but also The Guardian twice a week. Later, the Financial Times would be added too (oh my god, this feels so horrendously and embarrassingly reeking of middle-class privilege). Think of all the implicit information there: typefaces. Headline writing. All of these designed for certain purposes and, in a way, branding. Then, out of all these cues: correlation, causation and inference. *Trustworthy news looks like this*.
But then we abstracted away again, and in the worst case, did so with the ‘feed. Because the ‘feed strips away all of that implicit information, all those “secondary” cues because what’s important is The Content, and The Content isn’t Branding.
The feed is democratic, the feed is for everyone, everyone can be equal in the feed. Equality in the feed isn’t in itself an ignoble goal. But consider what we lost and whether what we lost was useful, or played a useful role.
Here’s another anecdote: I remember working on the team that would launch Facebook’s phone. The phone, Home, was a partnership between HTC, the manufacturer, and AT&T, the exclusive carrier. The two other partners (understandably) wanted a commitment from Facebook that they’d expend marketing effort to push the phone because, well, that helps sell things. Mark, on the other hand, admitted that he was the kind of person who just needed to know the information about the phone and would then make a purchasing decision. As a consumer, Mark just needed to know the feeds and speeds. For him, he knew what information was important and what wasn’t. The packaging - the design, the look, the fact that you might even have a branding campaign (never mind what branding does) were all extraneous.
The ‘feed is partly what we get when we say “just the facts, ma’am” and don’t go all Columbo or Sherlock on the crime scene, hoovering up every single detail, however seemingly inconsequential or slight: it all matters.
8.0 To Be Continued...
Nearly 6,000 words and I haven’t even covered:
* to be honest, any of the good stuff about actual AI yet
* or even my issues with Nick Bostrom’s book Superintelligence
* I mean just *one* of my issues with Bostrom is his assertion that the limited capabilities of contemporary software pose no existential threat whatsoever. <offended voice>I mean, really!</>[0]
* the fact that deep learning harm is closer and more realisable than we may think: if one group of us is genuinely making government more iterative and agile, then the potential time lag to state implementation of an academic proof-of-concept such as sexual orientation prediction by headshot might be less than 8 weeks from publishing. (Evil-universe government technology consultant me, for example, might see the paper published, clone the github repo and pitch to a suitably repressive government delivering a turnkey deep-learning prediction of citizens’ sexual orientation alpha via the Department of Motor Vehicles’ convenient driver license headshot API within, say, three sprints? And it doesn’t even have to be *accurate*.)
* (The good news is that I have a sort-of fix for at least some of the cases above that I’ll leave you hanging on tenterhooks for)
* our work at the retreat on identifying the stories we tell about AI to find the negative space - the stories that we don’t tell, or haven’t told - about AI
* the fact that many stories involving AI use them as a mirror to hold up to ourselves, or use them to emphasize existing folk stories and fables.
On that last point about the fables, I guess here’s the cliffhanger I want to leave you with. Black Mirror gets a lot of props (deservedly) for being very well executed contemporary explorations of our human failings and how technology plays a role in amplifying our natural tendencies. We’re jealous: tech lets us be more jealous. We prioritize the surface over depth, tech lets us obsess even more about judging books by their covers instead of their contents. Mallory Ortberg wasn’t wrong when she described Black Mirror as “What if phones, but too much”[1].
But she wasn’t entirely right, either - she could've also said: what if humans being human, but too human-y.
Black Mirror uses tech as a lens to explore what it means to be human, and it's only in the later seasons that it starts exploring what that means in a compassionate, kind sense that's not completely dystopic. If we’re genuinely concerned about how to deal with strong non-human intelligence, then we don’t need the stories we’ve always told ourselves *about* ourselves. One way of looking at Black Mirror is that it takes morals like don’t-judge-a-book-by-its-cover and updates them to tech-lets-you-judge-books-by-their-cover-but-too-much. The moral is still, at its core, about not judging books by their covers.
This doesn't mean that morals aren't applicable to AI and "stuff we're doing with computers these days". We can and absolutely should apply a moral like don't-judge-a-book-by-its-cover to, say, shoddy machine learning research that seeks to predict criminal behavior based on, say, headshots. Or sexual orientation (predicting sexual orientation based on headshots, that is. Not criminal behavior based on headshots, but I guess one of the characteristics of our current worldline is that the latter wouldn't be that surprising these days either).
Doing the things we’ve always done but “too much”, or “be careful what you wish for” but “too much”, aren’t necessarily what we’re worried about with potential AI risks, because these are human things amplified along one axis, amplifying human predilections and desires. There’s what a human would do, and then there are the things that can be done specifically because a human isn’t doing them. And yes, right now, it's useful to remember those human biases, because right now, our code reflects us. As more of our automation moves inside black boxes, the possibility of unanticipated, non-human biases starts creeping in.
One of the things that came up during the retreat was the point that current public discourse around AI still involves things like journalists or commentators calling for an equivalent of Asimov's Three Laws[2], which is a stupendous case of not just not seeing the forest for the trees but not even noticing that you're on a planet bearing photosynthesizing life in the first place. Asimov's entire body of work around his Three Laws is about how they're imperfect. His protagonist, one of the first therapists for sentient machines, goes around explaining that all the purportedly "bad" behavior of robots comes down to them trying, ineffectually, to reconcile really shittily designed governing rules.
In other words, Asimov is in part telling the story that strict laws that don't allow for discretion or flexibility don't work (and here I draw an analogy to those who think smart contracts will save us, and concede that they will, apart from the cases in which they will completely fail, of which we already have some fantastic examples). Put another way: perhaps attempting to codify the right guard-rails ahead of time is a bit like the assumption that if we try hard enough, maybe this time we'll gather all the right waterfall requirements and do a good job, *this time*. Narrator: we did not do a good job this time.
[0] dan hon is typing on Twitter: "1/ I take significant issue with Bostrom’s statement that *contemporary* software poses *no* existential threat. https://t.co/CBOfozju4O"
[1] Next On "Black Mirror" - The Toast
[2] Three Laws of Robotics - Wikipedia
—
OK. I am supposed to ship this so you’ve got something to read while I keep writing. In retrospect I realize that some of the advice I'd gotten about this draft was calling for it to be split up and sent in three parts but because writers are their own worst editors, I was unable to do so, so you get the luxury of over 6,000 words of, well... expository one-sided dialogue?
Briefly, before I go because I do want to note these down in some sort of commitment to write about them:
* due to recent advances in display technology (HD, UHD, HFR?) dogs can "watch" television now, which is awesome, also [citation needed]?
* absurdity and dadaism as a response to a horrifying world, and how silly experiments with "artificial intelligence" play into that
* the studio logos ahead of Ghost in the Shell (2017) were notable for including at least two Chinese entertainment groups (Shanghai Film Group, Huahua Media), which felt like a real-world weak signal mirroring science fiction visions of a more Asian future (cf. Firefly/Serenity, Blade Runner, etc.)[0]
* no I haven't seen Blade Runner 2049, but see also my previous quip that BLADE RUNNER could also have just been titled CAPTCHA
* engineering institutional investor support for ethical technology companies (because even Mark has to listen to the other shareholders sometimes) by following and accelerating [sic] the socially responsible investing movement's playbook[1, 2]
* thinking how Estonia could really fuck things up ("make things interesting") by offering subscription digital "citizenship" to non-residents, granting EU GDPR[3] protections to anyone, anywhere for a low, low price
* I'm enjoying Star Trek: Discovery! Fuck the continuity haters! Also, an amazing throwaway line about the fragility of Star Trek's post-scarcity economy at that time by Captain Lorca: "[of course] that was before the future came, and hunger and need and want disappeared. [beat] of course, they're making a comeback now, thanks to you."
* for the intersection of people who a) read this newsletter, b) are following cryptocurrencies and each week's new batch of ICOs, c) get GOP campaign emails, or are aware of their content: holy shit, the Trump team would clean up with a Trump PAC ICO
* reasons why Livejournal and Medium, both fundamentally "public content management and publishing systems", ended up being completely different - reasons that might be obvious but are still surprising to some tech people
* childlike-creativity-as-a-service
* style-transferring-all-the-things
[0] Ghost in the Shell (2017) - Company credits - IMDb
[1] Socially responsible investing - Wikipedia
[2] Ethics, markets and registers by Richard Pope - IF
[3] General Data Protection Regulation - Wikipedia
--
OK OK I'm really going now. Send notes. I'm very aware that this episode, more than any before, is super ramble-y. But there's a lot in my head, and I'm trying to make it all fit.
My best and love to you all,
Dan