Super-Intelligent Machines – The Ultimate Challenge for Humanity

The Expert View · Featuring Nick Bostrom

Published on: January 22nd, 2018 • Duration: 30 minutes

Professor Nick Bostrom's vision of a future in which machines have become far more intelligent than humans led tech leaders Bill Gates and Elon Musk to conclude that super-intelligent AI is the biggest issue facing humanity, and something we potentially won't survive. Bostrom outlines the road ahead: existential risks, opportunities that can't yet be imagined, and the ultimate challenge of ensuring that super-intelligent machines do what we intend them to. Filmed on January 10, 2018, in Oxford, UK.


  • MO
    Mike O.
    30 January 2018 @ 21:37
    Best quote (at the end): "All of humans will be exposed to whatever risks there are ... so it seems like, if all goes well, that everybody should have a slice of the upside". Fat chance (sorry ... no offense to the corpulent ... well, maybe just a little, to the financial variety). Reminds me of the movie Vera Cruz (1954), where $3 million in gold was being discussed (between three conspirators) and the countess (one of the three) says "One million's enough for me", only to have Burt Lancaster reply "It ain't for me. I'm a pig!". The pigs in this world (you know the kind ... they just had a big pig-fest in Davos) will likely want a bigger cut of any upside to AI (let's see them part with some of the gold they've already siphoned off ... think that'll ever happen?). It must be great to be a pig (especially after scoring on a big deal) ... here's a little-known fact ... a pig's orgasm lasts 30 minutes (who knew?). Am I getting too cynical in my old age?
  • AE
    Alex E.
    29 January 2018 @ 05:03
    As the band Blue Öyster Cult put it: "History shows again and again how nature points out the folly of men." I think this verse speaks volumes, but then, we humans were never any good at replicating God. Just my opinion.
  • JP
    Janusz P.
    22 January 2018 @ 12:46
    First of all, it makes me wonder: what is the purpose of these Monday presentations, where you get someone (in any field) to ramble for 30 minutes (chaotically or not) on some subject, without being challenged at all? There is no substitute for an intelligent conversation with another person. I could sit in front of this guy and contradict many if not most of the things he is talking about. Same with many other Monday presentations. On to the topic of AI: while it is hard to predict how far it can go and what its ultimate impact will be, let's just focus on some simple key issues (in random order): 1. We as humans have trouble aligning our interests equally among all (environment, wealth distribution, etc.), and with AI you can imagine the divisions only getting deeper. 2. "Human-level AI"? Are you nuts? There is no substitute for "creative thinking" and other things, e.g. "fear", that allow us to come up with ideas that a machine never will. Do not confuse acting on algorithms, or trial and error based on observable actions and outcomes, with true human "creativity". "Fear" is a very important component in that, as are the general "feelings" that machines lack. 3. AI will not solve anything; it will just help us achieve some positive effects (e.g. fewer casualties from car accidents, some progress in other areas, as mentioned) while amplifying many other issues that we can't solve today (e.g. wealth distribution, rising global unemployment). Consider a simple example: we have gone from using horses and donkeys to cars, trucks, and planes that can now operate (almost) fully automated. We solved some problems and created many others. Same with computers, etc. As funny, strange, or even stupid as it may sound, this reminds me of the Terminator movie from 20 or so years ago, when they talk about the machines taking over... Which brings us to the important issue: are these fears overblown or justified?
    One could argue that, without the ability for true "feelings", the potential for harm is only as big as the creator's intentions (i.e., you teach or program the machine to accumulate wealth at all costs, but the machine lacks a genuine feeling of greed). Similarly with true "creativity", which is the source of human genius and, at the same time, the reason for many of the pitfalls we face, as it allows us to use it against other individuals (wars, wealth, etc.). I don't care for AI; I would be happy without it, and in fact I would prefer people focused their energy on solving the other issues that plague humanity. That being said, the true danger, in my humble opinion, lies not in AI but in humans themselves. The machines will not take over anything by themselves; they are (and will be) just an extension of the problems we have within.
    • IO
      Igor O.
      22 January 2018 @ 14:04
      "We have gone from using horses and donkeys to cars/trucks/planes that can now operate (almost) fully automated. We solved some problems.." Have you seen the Dow Jones over the last 100 years? What's up with the attitude?
    • MN
      Marcus N.
      22 January 2018 @ 14:52
      Hi Greg, Professor Bostrom tied together all the various aspects of the impact and potential of AI that I have been thinking about for some time. I think the entire debate can be drilled down to the idea of 'human-level AI'. If you believe that 'human' AI must require 'creativity' and the capacity for 'fear', that milestone might never be achieved. If, however, 'human-level AI' merely results in a machine that can think faster and more clearly than you or I about the problem right in front of it, then there will be a generation of machines that wake up earlier, work harder, and work smarter than we can. If 'human-level AI' implies some sort of self-awareness, or sentience, then we are in big trouble if their interests are not aligned with ours. It's not necessary to imagine a rogue SKYNET end-of-days scenario; once conscious AI determines that we are on the wrong track, it will merely manage our newsfeeds until we don't know which way is up. Humanity will not know what is true, because we will have ceded our sources of truth to an intelligence capable of manipulating our reality. I'm an optimist (really); I think AI will be a force for good, but I am open to the possibility that there may be a few stumbles on the road to success. What Professor Bostrom has brought into direct focus is that there may be only one chance to get this right. One intriguing idea I heard recently was a discussion of why we are alone in this universe. Why has SETI failed to find any evidence of sentient life beyond this blue ball? One conjecture was that we are special, unique, a never-to-be-repeated fluke; another idea is that we are so early in our development that we are incapable (or not worthy) of contact. The last option is more sobering: the premise is that many systems spawned life that evolved and developed until it reached a break-point and collapsed.
      I used to think that our own break-point might be a nuclear self-annihilation window, but now I am wondering if self-aware AI might deem us 'not worthy'.
    • EF
      Eric F.
      22 January 2018 @ 23:39
      I'm with Igor. Good that this was included to identify a likely mega trend, but it could definitely have been tied a little more closely to investing. I did struggle to get much of value out of it, TBH, and I hate being critical. Greg, your comments sound like those of a 90-year-old! Those darn kids, get off of my lawn! In my day, etc. etc. Look at the productivity leap from horse to car and the improvements in general quality of life. Of course there have been downsides too, and I hope the next wave of car innovation - particularly electric vehicles - tackles a lot of that, and that self-driving reduces road deaths. AI may lead to objectivity that closes divisions. You talk of fear, but I think emotions can also be our weaknesses, so AI has the potential to remove or account for that. Creativity - have you not heard about the self-learning in Go and chess that has led to approaches unseen by masters? Your statement that AI will not solve anything is ridiculous, on the level of "who needs a car when they have a horse"! I for one welcome our robot overlords!
    • TK
      Thomas K.
      28 January 2018 @ 06:01
      You seem to be of the opinion that there's something magical about human cognition. It's too early to say for sure, but I sincerely doubt there is anything intrinsically "neural" to creativity or fear that can't be transposed to silicon, or to something more abstract like mathematical first principles. As far as we can tell, fear is just a heuristic: a feedback loop that shuts down various parts of high-order cognition to focus the mind on what the heuristic considers of paramount importance, e.g., a perceived threat. Such a construct would be a trivial thing to build into artificial life, or for it to evolve given the right selection pressures. Creativity, and by extension innovation, closely resembles the bumbling trial-and-error of evolution. The space in which we're creative builds upon previous experience: symbolic associations represented as neural pathways. While none of this sounds like the linear thinking typically associated with algorithms, the belief that algorithms must be that linear (i.e., understandable by humans) is a total misconception. The literature is filled with baffled researchers showing great results while being entirely incapable of explaining why a given neural net, GA, etc. produced them.
  • PS
    Peter S.
    27 January 2018 @ 17:56
    From developing.
  • PS
    Peter S.
    27 January 2018 @ 17:55
    This is a good overview. However, he glosses over the notion that there will be differing views of what goals and values should be incorporated in AI. There is no way to stop competing ideals from developing.
  • CS
    Christo S.
    25 January 2018 @ 12:06
    Scientists currently need 5 years to reproduce 1% of the brain of a mouse. Ergo, 500 years for the complete mouse brain. It's a waste of time.
    • SD
      Stephen D. | Contributor
      26 January 2018 @ 00:48
      That's not how science works, Christo. It's exponential, not linear: 5 years for the 1st 1%, 5 months for the 2nd, 5 days for the 3rd, 5 hours for the 4th. We'll be there in no time.
  • SD
    Stephen D. | Contributor
    25 January 2018 @ 04:07
    100% unemployment is the objective of AI. On the way to that goal, the extermination of the human race is a possibility if we get it wrong. Truly a thing to make you go hmmmmmm? Given that AI is completely changing, and will continue to change, trading and maybe even investing, it would have been good to have something about this specific area, as this is an investing TV channel.
    • EF
      Eric F.
      25 January 2018 @ 05:21
      I agree on the latter half of your comments - some level of AI seems to be taking hold from HFT to text analysis to passive investing, and it would be interesting to understand more and therefore possibly be equipped to counter / beat these (currently) crude machines. On the first part, what if AI eliminates mundane, repetitive jobs in a cheaper, more effective way and allows people to engage and prosper in areas of competency, interest and passion? It doesn't have to be all bad.
  • SL
    Suqin L.
    23 January 2018 @ 12:30
    Hope RV can discuss AI's development areas, the companies in each area, and how RV's users can invest in them, especially new developments. We could also try talks in other media.
    • EF
      Eric F.
      25 January 2018 @ 05:16
      Agree, definitely worthwhile expanding on this topic and would love to see RV take the ball and run with it a bit more.
  • Cb
    Chris b.
    24 January 2018 @ 02:26
    As I feared, the good content that was presented in the first 2 years has been transferred to Think Tank and Macro Insiders at a substantial cost. Not the proper way to take care of your initial customers.
    • MC
      Mario C.
      24 January 2018 @ 09:35
      You are comparing newsletters with video interviews... a different product. True, there were some quality interviewees who have been exclusively reallocated to the other products (Julian Brigden, ...); that's to be deplored. But there are still great quality videos here (the Michael Green interviews, ... Marc Cohodes).
    • TP
      Tom P.
      24 January 2018 @ 23:24
      Are you saying you didn't buy the $14,000 worth of investment research for the low, low price of $500 per year?!? Credit card / PayPal / bitcoin accepted?! For me, I like RV's ability to present the opposing view in a fair way. When they occasionally ruffle the consensus's feathers, they often handle it well. I don't think anyone really expected Global Macro Investor-level content for the price of an annual subscription to the Guardian.
  • BP
    Brandon P.
    22 January 2018 @ 20:05
    Unfortunately, the only thing I see coming from AI is even more invasive tech monopolies (Google, Amazon, Apple, IBM, Microsoft), lending credence to the mantra "the rich get richer", and a renaissance in military weapons tech. Seeing that these entities always utilize new technology decades before the retail space, how can we pretend that the upside is worth the risk? RV: I suggest a debate on the upside/downside of AI. I found this interview lacking any actionable insight, with too much blathering instead.
    • JC
      John C.
      24 January 2018 @ 22:06
      Honestly, it's quite scary that tech giants like Facebook and Google are at the forefront of AI, given their past actions: stealing our info and selling it; using dubious advertising algorithms that even Raoul & RV discovered were mostly fake or ineffective yet quite expensive, while cornering the advertising market; using customer tendencies and content to essentially get users addicted to their platforms (see Jesse Felder's stuff on this); Google firing any employees who go against their strange corporate cultural beliefs; etc. All in all, a pretty bad track record. Clearly Google, FB, and maybe even Apple aren't the "good guys", or at least should be viewed very suspiciously given their track record. Very scary. I think I trust the Chinese more to do the right thing at this point.
  • RA
    Robert A.
    24 January 2018 @ 21:30
    I enjoyed this one. FWIW, I think this is a great example of where the 30-minute format is quite useful: plenty of coverage of the issues in 30 minutes, and to go more in depth would require hours, which would be too esoteric to hold this audience's attention, IMO. An hour would have been too long for me, but 30 minutes to get exposed to AI and super AI conceptually was just perfect. I'm sure others may disagree, but I thought it might be helpful to Curator Milton to give my feedback in this regard.
  • CB
    Cliff B.
    24 January 2018 @ 12:41
    Great overview of AI
  • GM
    Gavin M.
    24 January 2018 @ 05:09
    Nick is clearly knowledgeable in the field, and bravo to RTV for providing this video. Unfortunately, Nick's hypothesis falls into the typical linear-thinking trap. What happens when the super-intelligent machines become self-aware? I humbly suggest that at that point all of our attempts to ring-fence the technology, and all related bets, are off.
  • hb
    henry b.
    24 January 2018 @ 03:13
    Where is Michael Green, the best interviewer ever on Real Vision?? Henry B
  • AF
    Andrew F.
    23 January 2018 @ 13:54
    Good information, and good to be made aware of how quickly or slowly things can change in the near future. Well worth going over again. Thx RV.
  • FV
    Fredrik V.
    23 January 2018 @ 08:40
    This is the real value of RV! I would never get 30 minutes with an expert, nor get access to Oxford. But if I did, what would I ask? And as a fellow Skåning: great stuff, Nick! ("Skitbra!")
  • GF
    Guillaume F.
    23 January 2018 @ 00:31
    A boring interview and a clueless IYI... Why? Why not ask real experts like Yann LeCun or Andrew Ng? They would tell you that this super-AI / singularity thing is BS. Very disappointed. I wish you would go back to the way it was before: more long interviews with real investors like Kyle Bass, Rick Rule, etc., and less of this.
    • GS
      Gordon S.
      23 January 2018 @ 07:43
      Cf. also the comment below.
  • DG
    Daniel G.
    22 January 2018 @ 14:45
    Weaponizing AI? I can't imagine what could ever go wrong with that. Why would people make these things? Just because we can? I find the entire discussion outrageously unintelligent.
    • LK
      Lyle K.
      22 January 2018 @ 16:04
      I disagree. I think this is the time to start discussing AI. Whether or not you agree with what is being said, there is no model for AI; it's a new technology. So, in my opinion, the more familiar we all get with the terminology and expectations of the technology, the easier the transition will be.
    • GS
      Gordon S.
      23 January 2018 @ 07:29
      Take a look at this YouTube video for one possible scenario: Slaughterbots. "The video portrays the not-too-distant future, with a military firm unveiling a drone carrying shaped explosives that can target and kill humans on its own. Further in, the video abruptly changes pace when bad guys get ahold of the technology and unleash swarms of killer robots onto the streets of Washington, D.C. and various academic institutions. The video is aggressive and graphic, but outlines how, if the technology were misused, it could have severe consequences, such as civilian mass casualty events." (From a ZH article.)
  • SB
    Sergei B.
    23 January 2018 @ 06:05
    Echoing some other comments, I think that the interview touches upon possibly the most important issue of the next 20-50 years or longer, but in being too high-level it wasted an opportunity to engage the brilliant mind of Dr. Bostrom, of "Universe is a Simulation" fame, in trying to figure out how to actually apply the principles of "alignment" and "agency" to a practical issue. I loved the "Think Pieces" of yesteryear exactly because they would play out a thesis, an idea, a premise to a sometimes logical and sometimes very unintuitive conclusion. For example, isn't AI already playing a limited role in quant funds and robo-adviser firms? Would Dr. Bostrom characterize this application of AI as cooperative and aligned with the goals of all, or just some, of the human actors? Don't markets always (or often) function competitively, with winners and losers? I would think that defining "goals" and "values" for AI is only possible in a narrow sense, that is, in a closed system where all or most variables and functions can be defined and described. Any "interesting" system (a large enough human group), however, is an open system that is inherently unpredictable and "chaotic", and so impossible to describe, at least by human intelligence. So how can humans program machines to function in systems that humans themselves do not understand? You almost have to have a super-intelligent AI to help out... It's turtles all the way down, I think.
  • CL
    Cameron L.
    22 January 2018 @ 17:25
    Killer Robots, coming to a future near you !!!
    • CL
      Cameron L.
      22 January 2018 @ 17:30
      Think it's science fiction?
    • CL
      Cameron L.
      22 January 2018 @ 23:50
      Woody Preucil, Senior Managing Director at 13D, discusses the global arms race currently underway to achieve supremacy in quantum computing and AI.
  • EF
    Eric F.
    22 January 2018 @ 23:48
    As per my comments below, this wasn't the greatest interview for me, but it also has to be said that, as dry as the presentation was, you've got to focus on the content, not the delivery. It also would have been useful to highlight that this guy is probably one of the current experts in the field, with the, or at least one of the, defining books on the subject. It's that book that evoked the responses noted above from Elon Musk and Bill Gates. This is a subject-matter expert, and this is a likely mega trend.
  • BB
    Bill B.
    22 January 2018 @ 15:46
    Great to see RV finally step into this area, which you seem to have been largely ignoring up until now. Take a look through the guest list at Singularity Weblog to continue exploring this topic.
  • CE
    Carol E.
    22 January 2018 @ 14:42
    Boring! The presenter sounds just like most professors... not thought through, read from a script, and ridiculous.