Super-Intelligent Machines – The Ultimate Challenge for Humanity

Published on
22 January, 2018
Macro, Technology
30 minutes

Featuring Nick Bostrom

Professor Nick Bostrom’s vision of a future in which machines have become far more intelligent than humans led tech leaders Bill Gates and Elon Musk to conclude that super-intelligent AI is the biggest issue facing humanity, and something we potentially won’t survive. Bostrom outlines the road ahead: existential risks, opportunities that can’t yet be imagined, and the ultimate challenge of ensuring that super-intelligent machines will do what we intend them to. Filmed on January 10, 2018, in Oxford, UK.



  • MO

    Mike O.

    30 1 2018 21:37

    4       0

    Best quote (at the end): "All of humans will be exposed to whatever risks there are ... so it seems like, if all goes well, that everybody should have a slice of the upside".

    Fat chance (sorry ... no offense to the porculent ... well, maybe, just a little to the financial variety).

    Reminds me of the movie Vera Cruz (1954), where $3 million in gold was being discussed between three conspirators, and the countess (one of the three) says "One million's enough for me", only to have Burt Lancaster reply "It ain't for me. I'm a pig!".

    The pigs in this world (you know the kind ... they just had a big pig-fest in Davos) will likely want a bigger cut of any upside to AI (let's see them part with some of the gold they've already siphoned off ... think that'll ever happen?).

    It must be great to be a pig (especially after scoring on a big deal) ... here's a little known fact ... a pig's orgasm lasts 30 minutes (who knew?).

    (Am I getting too cynical in my old age?)

  • AE

    Alex E.

    29 1 2018 05:03

    2       0

    As was stated by the band Blue Oyster Cult: "History shows again and again how nature points out the folly of men." I think this verse speaks volumes; but then, we humans were never any good at replicating God. Just my opinion.

  • PS

    Peter S.

    27 1 2018 17:56

    0       0

    From developing.

  • PS

    Peter S.

    27 1 2018 17:55

    0       0

    This is a good overview. However, he glosses over the notion that there will be differing views of what goals and values should be incorporated into AI.
    There is no way to stop competing ideals from developing.

  • CS

    Christo S.

    25 1 2018 12:06

    0       5

    Scientists currently need 5 years to reproduce 1% of the brain of a mouse. Ergo, 500 years for the complete mouse brain. It's a waste of time.

  • SD

    Stephen D.

    25 1 2018 04:07

    3       0

    100% unemployment is the objective of AI. On the way to that goal, the extermination of the human race is a possibility if we get it wrong. Truly a thing to make you go hmmmmmm?
    Given that AI is completely changing, and will continue to change, trading and maybe even investment, it would have been good to have something about this specific area, as this is an investing TV channel.

  • RA

    Robert A.

    24 1 2018 21:30

    4       0

    I enjoyed this one. FWIW, I think this is a great example of where the 30-minute format is quite useful: plenty of coverage of the issues in 30 minutes, and going more in depth would require hours, which would be too esoteric to hold this audience's attention, IMO. An hour would have been too long for me, but 30 minutes to get exposed to AI and super AI conceptually was just perfect. I'm sure others may disagree, but I thought it might be helpful to Curator Milton to give my feedback in this regard.

  • CB

    Cliff B.

    24 1 2018 12:41

    1       0

    Great overview of AI

  • GM

    Gavin M.

    24 1 2018 05:09

    4       0

    Nick is clearly knowledgeable in the field, and bravo to RTV for providing this video. Unfortunately, Nick's hypothesis falls into the typical linear-thinking trap. What happens when the super-intelligent machines become self-aware? I humbly suggest that at that point all of our attempts to ring-fence the technology, and all related bets, are off.

  • hb

    henry b.

    24 1 2018 03:13

    6       0

    Where is Michael Green, the best interviewer ever on Real Vision??

    Henry B

  • MM

    Michael M.

    24 1 2018 02:26

    4       5

    As I feared, the good content that was presented in the first two years has been transferred to Think Tank and Macro Insiders at a substantial cost. Not the proper way to take care of your initial customers.

  • AF

    Andrew F.

    23 1 2018 13:54

    1       0

    Good information, and good to be made aware of how quickly or slowly things can change. Well worth going over again. Thx RV.

  • SL

    Suqin L.

    23 1 2018 12:30

    6       0

    Hope RV can discuss AI's development areas, the companies in each area, and how RV's users can invest in it — especially new developments. We can get the empty talk in other media.

  • FV

    Fredrik V.

    23 1 2018 08:40

    2       2

    This is the real value of RV! I would never get 30 minutes with an expert, nor get access to Oxford. But if I did, what would I ask?
    -And, as a fellow Scanian: great job, Nick!

  • SB

    Sergei B.

    23 1 2018 06:05

    4       1

    Echoing some other comments, I think the interview touches on possibly the most important issue of the next 20-50 years or longer, but in being too high-level it wasted an opportunity to engage the brilliant mind of Dr. Bostrom, of "Universe is a Simulation" fame, in trying to figure out how to actually apply the principles of "alignment" and "agency" to a practical issue. I loved the "Think Pieces" of yesteryear precisely because they would play out a thesis, an idea, a premise to a sometimes logical and sometimes very unintuitive conclusion.

    For example, isn't AI already playing a limited role in quant funds and robo-adviser firms? Would Dr. Bostrom characterize this application of AI as cooperative and aligned with the goals of all, or just some, of the human actors? Don't markets always, or at least often, function competitively, so that there are always winners and losers?

    I would think that defining "goals" and "values" for AI is only possible in a narrow sense, that is, in a closed system where all or most variables and functions can be defined and described. Any "interesting" system (a large enough human group), however, is an open system that is inherently unpredictable and "chaotic", and so impossible to describe, at least by human intelligence. So how can humans program machines to function in systems that humans themselves do not understand? You almost have to have a super-intelligent AI to help out ... It's turtles all the way down, I think.

  • GF

    Guillaume F.

    23 1 2018 00:31

    11       14

    Boring interview and a clueless IYI... Why? Why not ask real experts like Yann LeCun or Andrew Ng? They would tell you that this super AI / singularity thing is BS.

    Very disappointed. I wish you would go back to the way it was before: more long interviews with real investors like Kyle Bass, Rick Rule, etc., and less of this.

  • EF

    Eric F.

    22 1 2018 23:48

    8       0

    As per my comments below, this wasn't the greatest interview for me, but it also has to be said that, as dry as the presentation was, you've got to focus on the content, not the delivery. It would also have been useful to highlight that this guy is probably one of the current experts in the field, having written the defining book on the subject (or at least one of them). It's that book that evoked the responses from Elon Musk and Bill Gates noted above. This is a subject-matter expert, and this is a likely mega-trend.

  • BP

    Brandon P.

    22 1 2018 20:05

    4       1

    Unfortunately, the only thing I see coming from AI is even more invasive tech monopolies (Google, Amazon, Apple, IBM, Microsoft), lending credence to the mantra "the rich get richer", and a renaissance in military weapons tech. Seeing that these entities always utilize new technology decades before the retail space does, how can we pretend that the upside is worth the risk?

    RV: I suggest a debate on the upside/downside of AI. I found this interview lacking any actionable insight, with too much blathering instead.

  • CL

    Cameron L.

    22 1 2018 17:25

    4       1

    Killer Robots, coming to a future near you !!!

  • BB

    Bill B.

    22 1 2018 15:46

    6       1

    Great to see RV finally step into an area that you seem to have been largely ignoring until now. Take a look through the guest list at Singularity Weblog to continue exploring this topic.

  • DG

    Daniel G.

    22 1 2018 14:45

    1       4

    Weaponizing AI? I can't imagine what could ever go wrong with that. Why would people make these things? Just because we can? I find the entire discussion outrageously unintelligent.

  • CE

    Carol E.

    22 1 2018 14:42

    8       11

    Boring! The presenter's delivery sounds just like most professors': not thought through, read from a script, and ridiculous.

  • GO

    Greg O.

    22 1 2018 12:46

    27       12

    First of all, it makes me wonder: what is the purpose of these Monday presentations, where you get someone (in any field) to ramble for 30 minutes (chaotically or not) on some subject, without being challenged at all? There is no substitute for intelligent conversation with another person. I could sit in front of this guy and contradict many, if not most, of the things he is talking about. Same with many other Monday presentations.

    On to the topic of AI: while it is hard to predict how far it can go and what its ultimate impact will be, let's just focus on some simple key issues (in random order):
    1. We as humans have trouble aligning our interests equally among all (environment, wealth distribution, etc.), and with AI you can imagine that the divisions might only get deeper.
    2. "Human-level AI"? Are you nuts? There is no substitute for "creative thinking" and other things, like "fear", that allow us to come up with ideas that the machine never will. Do not confuse acting on algorithms, or trial and error based on observable actions/outcomes, with true human "creativity". "Fear" is a very important component in that, as are "feelings" in general, which machines lack.
    3. AI will not solve anything; it will just help us achieve some positive effects (e.g. fewer casualties from car accidents, progress in other areas, as mentioned) while augmenting many other issues that we can't solve today (e.g. wealth distribution, rising global unemployment, etc.).

    Consider a simple example: we have gone from using horses and donkeys to cars/trucks/planes that can now operate (almost) fully automated. We solved some problems and created many others. Same with computers, etc.

    As funny, strange, or even stupid as it may sound, this reminds me of the Terminator movies from 20 or so years ago, where they talk about the machines taking over... Which brings us to the important question: are these fears overblown or justified? One could argue that, without the capacity for true "feelings", the potential for harm is only as big as the creator's intentions (e.g. you teach or program the machine to accumulate wealth at all costs, but the machine lacks a genuine feeling of greed). Similarly with true "creativity", which is the source of human genius and, at the same time, the reason for many of the pitfalls we face, since it allows us to turn it against other individuals (wars, wealth, etc.).

    I don't care for AI; I would be happy without it, and in fact I would prefer that people focused their energy on solving the other issues that plague humanity. That being said, the true danger, in my humble opinion, lies not in AI but in humans themselves. The machines will not take over anything by themselves; they are (and will be) just an extension of the problems we have within.