Letter from Utopia: Talking to Nick Bostrom



Here, Bostrom and Andy Fitch discuss applications of his book across any number of fields — from history to philosophy to public policy to practices of everyday life (both now and in millennia to come).



ANDY FITCH: If we start from a working definition of superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest,” and if we posit this superintelligence’s capacities to include an ability to learn, to deal with uncertainty and to calculate complex probabilistic information (perhaps also to assimilate sensory data and to pursue intuitive reasoning), and if we conceive of ourselves as fitfully approaching an intelligence explosion, one in which intelligent machines will design with increasing skill and speed ever more intelligent machines, and if we hypothesize the potential for this self-furthering machine intelligence to attain strategic dominance over us (the way that we possess such dominance over other species), and if we recognize the potential existential risk within such a takeoff scenario (a situation which we must manage with great deftness on our first and only try), we can begin to trace the complicated rhetorical vector of this book — which apparently seeks both to foreground a great urgency, and to counsel a sober, cautious, carefully coordinated plan of long-term deliberative action. So, as we begin to outline Superintelligence’s broader arguments, could you also discuss its dexterous efforts at combining a call to public alarm with a proactive, context-shaping, transdisciplinary (philosophical, scientific, policy-oriented) blueprint for calm, clear, perspicacious decision-making at the highest levels? What types of anticipated and/or desired responses, from which types of readers, shaped your rhetorical calculus for this book?


NICK BOSTROM: I guess the answer is somewhat complex. There was a several-fold objective. One objective was to bring more attention to bear on the idea that if AI research were to succeed in its original ambition, this would be arguably the most important event in all of human history, and could be associated with an existential risk that should be taken seriously.

Another goal was to try to make some progress on this problem, such that after this progress had been made, people could see more easily specific research projects to pursue. It’s one thing to think, “If machines become superintelligent, they could be very powerful, they could be risky.” But where do you go from there? How do you actually start to make progress on the control problem? How could you produce academic research on this topic? So to begin to break down this big problem into smaller problems, to develop the concepts that you need in order to start thinking about this, to do some of that intellectual groundwork was the second objective.

The third objective was just to fill in the picture in general for people who want to have more realistic views about what the future of humanity might look like, so that we can, perhaps, prioritize more wisely the scarce amount of research and attention that focuses on securing our long-term global future.

Today, I would think of the first of these objectives as having been achieved. There is now much more attention focused on this problem, and many people (by no means all people) now take it seriously, including some technical people, some funders, and some other influential people in the AI world. Today, it’s not so much that the area needs more attention, or that there needs to be a higher level of concern. The challenge is more to channel this in a constructive direction. Over the last couple of years a technical research agenda has emerged around this alignment problem, so the goal now is to ramp that up, to recruit some really bright researchers to start working on it, and to make sure it proceeds in the right direction. In parallel, we need to start thinking about the policy and political questions that arise, or that will arise, as we move closer towards this destination.

Basically, the approach was to try to lay out the issues as clearly as I could, in the way I saw them. I didn’t really have a target audience in mind when I wrote the book. I was kind of thinking of the target audience as an earlier version of myself: asking what I would have found useful, and then whether that would help other people. But as the conversation proceeds, I think that there is a balance that needs to be struck. It’s key for the AI-development community to be on the same side as the AI-safety community. The ideal is that these will fuse into just one community. That requires avoiding an obvious failure scenario, which, fortunately, has not yet materialized. But you could imagine, in a parallel universe, the AI-development community feeling threatened that they are being painted as villains, as doing something dangerous. Then they might begin to close ranks and to deny that there could be any risk, so as not to give ammunition to the fear-mongers. That scenario would have made dialogue impossible. That has not happened, but I think the possibility that the conversation could run off the tracks amid some adversarial dynamic remains a concern, so preventing that from happening remains a priority.


Bostrom has an intellectual background in physics, computational neuroscience, mathematical logic, and philosophy. He has been listed on Foreign Policy’s Top 100 Global Thinkers list, and on Prospect magazine’s World Thinkers list.
