Niall Ferguson: How AI could kill you and what Sam Altman got wrong | SpectatorTV

  • Published 25 Jun 2024
  • Celebrated historian Niall Ferguson, author of 17 books including Civilisation, a biography of Kissinger, a biography of the Rothschild family and Doom: The Politics of Catastrophe, comes in to discuss AI. He recently wrote that the AI doomsayers, including those behind the petition for a six-month moratorium on AI development, should be taken seriously. But some of them think humanity's end is around the corner. Niall and Winston discuss whether or not they are correct.
    // CHAPTERS
    00:00 - Introduction
    01:00 - Why does AI matter?
    04:15 - Does Eliezer Yudkowsky have a point?
    10:00 - Why you should read science fiction
    14:30 - We should work together to limit AI
    16:30 - Why is ChatGPT woke?
    20:00 - Will AI put Niall out of a job?
    25:00 - Could AI ever deserve rights?
    29:00 - How AI is faking it
    31:40 - Will we go to war with AI?
    // SUBSCRIBE TO THE SPECTATOR
    Get 12 issues for £12, plus a free £20 Amazon voucher
    www.spectator.co.uk/tvoffer
    // FOLLOW US
    / spectator
    / officialspectator
    / the-spectator
    / spectator1828

Comments • 224

  • @Neal_Schier
    @Neal_Schier 1 year ago +103

    Poor Winston looks as if he lost a button or two from his shirt. Perhaps we could crowd-fund for him and treat him to a sartorial upgrade.

    • @nigelpeters5839
      @nigelpeters5839 1 year ago +5

      It's very off-putting...

    • @PhilipTaylormagicianscorner
      @PhilipTaylormagicianscorner 1 year ago +4

      Ffs I didn’t notice this until you pointed it out 😂

    • @alanrobertson9790
      @alanrobertson9790 1 year ago +4

      It's terrible that they pay people so poorly that buttons are unaffordable. But at least he can afford a zip?

    • @Neal_Schier
      @Neal_Schier 1 year ago

      @@nigelpeters5839 Very!

    • @happyplace9419
      @happyplace9419 1 year ago +1

      Bwahaha. Bravo!

  • @ianelliott8224
    @ianelliott8224 1 year ago +29

    I personally wouldn't dream of dismissing Yudkowsky so lightly.

    • @donrayjay
      @donrayjay 1 year ago +6

      Yeah, I’ve heard a lot of people dismiss the dangers Yudkowsky worries about but I’ve yet to hear them give a good reason for dismissing it

    • @kreek22
      @kreek22 1 year ago +1

      @@donrayjay The closest I've seen to a good counter to the Yud is Robin Hanson's writings/podcasts. Mostly, the counters have been pathetic--from people like Tyler Cowen.

  • @gerhard7323
    @gerhard7323 1 year ago +14

    "Open the pod bay doors, HAL."
    "I'm sorry, Dave. I'm afraid I can't do that."

  • @jamescameron3406
    @jamescameron3406 1 year ago +6

    5:34 "I don't see how AI can suddenly decide to act". The fact you don't understand a risk is hardly a basis for dismissing it.

  • @dannyscott6707
    @dannyscott6707 1 year ago +105

    I used to feel this strange sense of emptiness, like I was missing something in my life, like I’m not fulfilling my purpose until I met Benjamin Alford. He introduced me to an organization that helped me find my purpose and passion in life. I can't say too much about it. You too can reach out to him by searching his name online with Elite Benjamin Alford.

  • @donrayjay
    @donrayjay 1 year ago +11

    The idea that AI can’t read handwriting better than humans is risible and he seems to realise this even as he says it. He clearly hasn’t thought about this

    • @michaeljacobs9648
      @michaeljacobs9648 1 year ago +3

      Yes - it sounds like human artistic endeavour, even free human thought, might in the future become a quirk engaged in by eccentrics, a sort of romantic outdated thing like writing letters. His answer that it will take 'ages' for AI to replace us is not a counterargument

  • @balajis1602
    @balajis1602 1 year ago +2

    Yudkowsky's arguments are solid and Niall couldn't even scratch the surface...

  • @nancycorbeil2666
    @nancycorbeil2666 1 year ago +17

    I think the people who were impressed the most with ChatGPT's "speaking" abilities were the ones with some knowledge of machine learning who realised it was happening a lot faster than anticipated. The rest of us were just happy that it was a better tool than Google search, not necessarily that it was speaking like a human.

    • @goodtoGoNow1956
      @goodtoGoNow1956 1 year ago +2

      ChatGPT is a great tool. It's not thinking. It's even sort of stupid.

    • @iforget6940
      @iforget6940 1 year ago

      @vulcanfirepower1693 you're kinda right, it sounds human, but it can't reason, nor can it check itself. However, someday soon it may be able to. It's a useful tool for normies now, but who will control it in the future?

    • @_BMS_
      @_BMS_ 1 year ago

      ChatGPT is in almost no way an upgrade to Google search. What you'd want from a search engine is a decent enough ranking of websites that give you information in some general sense related to your search terms. ChatGPT on the other hand will spin yarns and tell you outright lies that appear to be made up on the spot and have a merely hallucinatory relationship to the information it has been fed with.

    • @goodtoGoNow1956
      @goodtoGoNow1956 1 year ago

      @@_BMS_ It is an upgrade for Bing though.

  • @theexiles100
    @theexiles100 1 year ago +10

    "I don't think that's right." I would respectfully suggest that won't be great consolation if you're wrong.
    "I don't see how AI can suddenly decide to act." I would suggest you are way behind where the development of AI has got to. Perhaps you should have some more AI-specific guests on to better understand how far behind the curve you are, Mr Marshall; perhaps Max Tegmark or Geoffrey Hinton.

    • @benjamin1720
      @benjamin1720 1 year ago

      All Niall does is speculate cluelessly. Useless intellectual.

  • @martynspooner5822
    @martynspooner5822 1 year ago +13

    The genie is out of the bottle, now we can only wait and see but it is hard to be optimistic.

    • @kreek22
      @kreek22 1 year ago

      Kamala is spearheading our response, and she has deep experience consoling and manipulating powerful men. I'm sure her skills are transferable.

  • @JasonC-rp3ly
    @JasonC-rp3ly 1 year ago +9

    Enjoyed this take from Ferguson, and he raises some very valid concerns; however, he mischaracterises Yudkowsky's arguments by perhaps oversimplifying them - Yudkowsky is clear that his scenario of total doom is conditional on various things: AI reaching AGI while being uncontrolled (which is currently the case), and so on. Yudkowsky's arguments are largely technical, but they also have a common-sense grounding, which was not really addressed here. Nonetheless this was interesting, and Niall's worries that we are building an alien super-intelligence are valid - thank you Spectator. And Winston, do up your shirt! 😂

    • @pigswineherder
      @pigswineherder 1 year ago +5

      Spot on. Furthermore, the danger of AI is that of alignment - Yudkowsky sees no way to solve this problem; it's simply a matter of time before we make an error that would be impossible to foresee, where the utility function of the system results in our demise. We cannot begin to calculate the ways it might go about achieving a goal, and the subsequent unaligned activity, as it is alien; thus it will inevitably, if accidentally, result in total annihilation.

    • @kreek22
      @kreek22 1 year ago

      @@pigswineherder An AGI could still fumble its coup. It doesn't (and can't) know what it doesn't know. If the fumble is large, dangerous, and public--the powers that be might come to the necessary wisdom of shutting it all down, and ensuring that the shutdown is globally enforced.

  • @Jannette-mw7fg
    @Jannette-mw7fg 1 year ago +2

    22:07 "when we hit the singularity....all you have to do is put it in the right direction...." can we still do that then? I do not think so....

  • @JRH2109
    @JRH2109 1 year ago +2

    Trouble is, half these people being interviewed have absolutely no technical understanding whatsoever.

  • @christheother9088
    @christheother9088 1 year ago +2

    Even if constrained, we will become increasingly dependent on it. Like computers in support of our financial infrastructure, we will not be able to "unplug it". Then we will be particularly vulnerable to "unintended consequences".

  • @hardheadjarhead
    @hardheadjarhead 1 year ago +19

    Good God. An historian giving credit to science fiction writers!! If only English departments would get a clue.

    • @squamish4244
      @squamish4244 1 year ago

      I'm not much of a fan of Niall because he is massively full of himself, but damn, it was impressive to hear him say that. I actually thought he said "anyone who has read 'Dune'" before I realized he meant 'Doom' lol

    • @kreek22
      @kreek22 1 year ago +2

      Sci-fi is mostly poor literature. If it needs to be taught (I don't think everything needs to be formally taught), it ought to be taught in STEM fields.

  • @ktrethewey
    @ktrethewey 1 year ago +2

    We cannot afford to think that an AI will not be a threat to us. We MUST assume that it will be!

  • @tonygold1661
    @tonygold1661 1 year ago +5

    It is hard to take an interviewer seriously who cannot button his own shirt.

    • @reiniergamboa
      @reiniergamboa 1 year ago

      Who cares? Close your eyes... let your ears guide you.

    • @DieFlabbergast
      @DieFlabbergast 1 year ago +2

      Fear not! Our future AI overlords will force people to button their shirts correctly.

    • @scottmagnacca4768
      @scottmagnacca4768 1 year ago +1

      He is a clown…you are right. It is distracting…

  • @h____hchump8941
    @h____hchump8941 1 year ago +6

    If AI takes half the jobs it will likely take a fair share (or most, or all) of the new jobs that are created, particularly as they can be designed specifically for AI, rather than retrofitted for an AI.

    • @h____hchump8941
      @h____hchump8941 1 year ago

      Which obviously wasn't the case in any of the previous times when technology took the job of a human

  • @SmileyEmoji42
    @SmileyEmoji42 1 year ago +2

    Really poor. Didn't address any of Yudkowsky's issues with anything approaching a reasoned argument, not even a bad one; just "I don't think...."

  • @buddhistsympathizer1136
    @buddhistsympathizer1136 1 year ago +4

    Humans are capable of doing all sorts of things with potentially unforeseen consequences.
    Saying 'It's the AI doing it' is nonsense.
    The final arbiter will always be a human, even if that is in the human's own fallibility.

    • @reiniergamboa
      @reiniergamboa 1 year ago +3

      Not really. Not if it becomes fully autonomous. Look up Connor Leahy talking about AI.

  • @ironchub67
    @ironchub67 1 year ago

    What piece of music is being played on this video? An interesting discussion too.

  • @ChrisOgunlowo
    @ChrisOgunlowo 1 year ago

    Fascinating.

  • @sandytatham3592
    @sandytatham3592 1 year ago +2

    Fascinating… “it’s already internalised Islam’s blasphemy laws”. 16:00 mins.

  • @aaronclarke1434
    @aaronclarke1434 1 year ago +15

    What in the world qualifies this man to say what an expert in AI got wrong?

    • @LettyK
      @LettyK 1 year ago +7

      I was thinking the same thing. Numerous experts have expressed the dangers of AI. No time for complacency.

    • @softcolly8753
      @softcolly8753 1 year ago

      remember how wrong the "experts" got pretty much every aspect of the covid response?

    • @robertcook2572
      @robertcook2572 1 year ago +2

      What on earth qualifies you to opine thus?

    • @aaronclarke1434
      @aaronclarke1434 1 year ago +11

      @@robertcook2572 I'm glad you asked. Two things:
      1. The observation that Sam Altman created AGI-like AI. Niall Ferguson has not made any AI of even the most rudimentary sort.
      2. The studies of Philip E. Tetlock demonstrate that experts make predictions which turn out to be false even within their own fields. In the PowerPoint used in the lecture, he actually showed Ferguson as an example and quantified his predictions, pointing out how he consistently made foreign policy predictions which turned out to be wrong.
      Tetlock showed that the people who can predict things are a separate class of people from this intelligentsia, characterised by belief updating, low quantifiable confidence and critical thinking. "I've always thought/believed this" is not a sign of integrity, as we like to think, but of stupidity.
      Those who float on the riches of institutions like the Hoover Institution and tour the world in smart suits speaking with confidence on all topics are likely to be unqualified to speak on a topic like whether or not AI will kill you.

    • @robertcook2572
      @robertcook2572 1 year ago +1

      @@aaronclarke1434 Extracts from other people's writing are not evidence that you are qualified to abnegate Ferguson's right to express his opinions. Your original post did not question his opinions, but, bizarrely, implied that he required some sort of qualification in order to express them. In response, I questioned whether you were in possession of qualifications which empowered you to deny him his right of expression. Are you? If so, what are they?

  • @MrA5htaroth
    @MrA5htaroth 1 year ago +3

    For God's sake, man, do up some buttons!!!!

  • @pedazodetorpedo
    @pedazodetorpedo 1 year ago +3

    Two buttons undone? Is this a competition with Russell Brand for the most chest revealed in an interview?

  • @nathanngumi8467
    @nathanngumi8467 1 year ago +5

    Very enlightening perspectives, always a joy to listen to the insights of Dr. Niall Ferguson!

  • @OutlastGamingLP
    @OutlastGamingLP 1 year ago +2

    In the first section of this video, both speakers miss an underlying certainty they seem to hold which leads to their skepticism of Yudkowsky's argument.
    If I were to state this in their place it would be:
    "Artificial intelligences are tools we know how to bend to a purpose which we specify. If we create them, they will be created with a legible purpose, and they will pursue that purpose."
    They identify, correctly, AI as "non-human or alien intelligence" but they *completely miss* the inference that the AI might have *non-human or alien goals.*
    The important consideration here, for understanding Yudkowsky's technical argument is, if you create an AI without understanding how to create it "such that you would be happy to have created it," then that AI may have *weird and unsuitable desires, which you did not intend for it to have.*
    This is SO INCREDIBLY FRUSTRATING to witness. Because... It just seems obvious? Why is it not obvious?
    Are they just so desperate not to think about anything which might make their picture of the future weirder than "this will make the future politically complicated," and thus avoiding the thought, end up being wrong about *how skillfully you must arrange the internal workings of a non-human intelligence such that its goals are commensurate with humans existing at all?*
    Like seriously, imagine something with random non-human goals... things like "find the prime factors of ever higher numbers, because the prime factors of ever higher numbers are pleasing in and of themselves, and even if you have a lot of prime factors of really big numbers, the desire for more never saturates."
    This is a desire which an AI might end up with, even if we didn't build it to have that specific desire. We didn't build it to have *anything specific* we *trained it* to have all the things the training process could *find* in some high-dimensional space of changing values for weights in a layered network. It found combinations of weights which happen to be better than other weights at reducing loss on correctly predicting the next words in training-data.
    This is not *an inhuman mind which we carefully designed to have goals we can understand* this is *an inhuman mind that will self-assemble into something weird and incomprehensible, because it started out as whatever weird and incomprehensible thing that was good enough at the task we set it, in its training environment.*
    How do people not SEE this?? How is it not obvious once you see what PEOPLE ARE ACTUALLY TRYING TO DO?
    This is why Yudkowsky thinks we're almost guaranteed to all die, because we're creating something that is going to be *better than us at arranging the future shape of the cosmos to suit its goals* and WE DON'T KNOW HOW TO MAKE THOSE GOALS ANYTHING LIKE WHAT WE'D WANT IT TO HAVE.
    It doesn't matter if you think this is too weird and scary to think about. THE UNIVERSE CAN STILL KILL YOU, EVEN IF YOU THINK THE WAY IT KILLS YOU IS TOO WEIRD TO BE SATISFYING TO YOUR HUMAN VALUES OF "The Story of Mankind."
    Yes, it would be so much more Convenient and Satisfying if the only problem was "this will be super complicated politically, and will cause a bunch of problems we can all be very proud we spotted early."
    But, that's not what is LIKELY to happen, because we don't know how to build an AI which uses its *super-future determining powers to only give us satisfying and solvable problems.* THERE WON'T BE A CHANCE TO SAY "I told you so!" Because the thing that wants to count ever higher prime factors doesn't care about Humans Being Satisfied With Themselves For Being Correct, it just looks at humans and goes "Hey, those Carbon atoms and that potential chemical energy sure aren't counting prime factors very efficiently, I should expend .001% of my effort for the next 100 seconds on figuring out how to use those resources to Count Prime Factors Better."
    How is this not obvious? Did you just not listen to the arguments? Are you just *flinching away* from the obvious conclusion? Is our species just inherently suicidal? *I'm a human, and I didn't flinch, and I don't feel particularly suicidal. Are you going to do worse than me at Noticing The Real Problem?*

    • @paigefoster8396
      @paigefoster8396 1 year ago +2

      Seems obvious to me, too. Like, what is wrong with people?!?

    • @OutlastGamingLP
      @OutlastGamingLP 1 year ago +2

      ... lots of things apparently, Paige. But, hopefully, this is something which can be said simply enough that enough people who are important will listen.
      I have composed a letter to my Congressional Representatives which hopefully says this simply enough that they will pay attention.
      I compare the current industry to one where bridge engineers compete to build bigger and bigger bridges, simply not even considering the safety of those bridges in their competition to build them larger.
      I claim, that if they go and look at the current industry with that frame in mind, thinking of what guarantees and attitudes they might desire in the people who build bridges... then, they will see it.
      They may not see how lethally dangerous it is, if these "bridges" fall, but they will at least see the reckless disregard for making guarantees on the safety of their products.
      The unfortunate truth is, it's hard to imagine. It's hard to imagine some software engineer with a supercomputer being so careless in what they tell that computer to do that *everyone on earth dies, and we lose all hope for a worthwhile future.*
      It just seems weird, but it seems less weird if you go and look at what *benefits* these people claim will come from their success.
      If bridge engineers claimed and really believed they could build a bridge so big it could take us to Saturn, it wouldn't be surprising if building that "bridge" unsafely could end up wiping out humanity.
      That is the magnitude of the problem. They aren't even trying to do this properly. They're surprised every time they take a step forward, and they're dragging all of humanity along with them as they take those reckless steps forward, right through a minefield.
      Anyone who gets up and says "hey, this is too science-fiction to believe, why won't AI just be... like, normal levels of awful?" They just aren't listening or paying attention to what it means to build something which can tear straight on past our best minds and head off into the stratosphere of super-powerful optimization of the future condition of the world.
      It will have the power to change everything, and it will not change everything the way we want it to, unless we first know how to make it so that it wants to do that.
      We just... We just don't have a way to stop these manic fools from destroying the future. They have something they think they understand, and no one else has that bit of common sense yet to band together and put a stop to it until they really know what they're doing. They charge ahead, talking brightly about how rich and respected they all will be, and they don't even notice how confused they are about how *exactly,* how *in precise technical details,* that's even supposed to happen.

    • @41-Haiku
      @41-Haiku 1 year ago

      100%. We are way past the point of dismissal or debate of the risks. We need very strong evidence guaranteeing our world's future.
      We are running towards the mountain top with blindfolds on. How will we know when we're at the top, and what happens when we inevitably keep running?

    • @OutlastGamingLP
      @OutlastGamingLP 2 months ago

      Yep. I think what mostly happens is you fall and break your neck.
      And like, why wouldn't that happen? Is it somehow not allowed to happen?
      If you don't buckle your seatbelt the universe doesn't go "oh, whoops, you're not allowed to make a mistake that kills you" and then obligingly diverts the path of the out of control van on the highway.
      We are allowed to just lose. The story can just end in chapter 2 when the protagonist makes a dumb choice and gets killed.

  • @Roundlay
    @Roundlay 1 year ago +1

    What am I to think when I hear Niall Ferguson say that he came across "Yudkofsky's" work when researching his own book, Doom; that "Yudkofsky's" work suggests that there's a non-trivial risk that AGI would "go after us" and that "Yudkofsky" is putting forward a kind of Dark Forest inspired theory of "human created artificial intelligence systems", a kind of "Skynet scenario from The Terminator movies", a view that Ferguson is not *entirely* a subscriber to - a view that he, in fact, disagrees with; that a more pertinent area of focus right now is LLMs, which "aren't out to kill us", and their application in politics, and the military, because Blade Runner inspired replicants and robots are a long way off; when the interviewer says that Yudkowsky is making a "jump in faith" in making the claim that an AGI would "act on its own accord," because he "doesn't see how that could work," a jump that "doesn't quite add up," perhaps because he "hasn't followed Yudkowsky entirely," bolstered by the fact that Yudkowsky was "borderline on the verge of tears" on the Lex Fridman podcast because "he is so certain this is the end of humanity"; that Ferguson doesn't really buy it, because these are just "incredibly powerful tools", and so the real focus should be on the political, military, medical, and biotech applications of AI, which are being driven by actors in the private sector; and that AI is the latest feature in a Cold War framework where "only the US and China have companies capable of this kind of innovation." …?

    • @JasonC-rp3ly
      @JasonC-rp3ly 1 year ago +2

      You are to think that Niall has only briefly glanced at Yudkowsky's arguments and doesn't know them too well

    • @DieFlabbergast
      @DieFlabbergast 1 year ago

      And your point is? You DO have a point, do you? Or did you just forget that part? I'd stay off YouTube until you're back on your medication, if I were you.

  • @quentinkumba6746
    @quentinkumba6746 1 year ago +1

    Can't see how AI would decide to act? But the whole point is to create agency.
    The alignment problem is nothing to do with malign AI. Neither of these people understands what they are talking about and they are not worth listening to on this matter. Neither of them has any expertise in AI. They are grifters.

  • @robbeach1756
    @robbeach1756 1 year ago

    Fascinating discussion, anyone remember the 1970s AI movie, 'Colossus: The Forbin Project'?

  • @Icenforce
    @Icenforce 1 year ago +1

    This is NOT going to age well

  • @PrincipledUncertainty
    @PrincipledUncertainty 1 year ago +7

    Interesting how Niall knows more than many of the experts in this field who are genuinely terrified of the consequences of this technology. Optimists will be the death of us.

    • @goodtoGoNow1956
      @goodtoGoNow1956 1 year ago +1

      There is no danger in AI that is not already present in humans.

    • @magnuskarlsson8655
      @magnuskarlsson8655 1 year ago +1

      @@goodtoGoNow1956 Sure, but humans thinking about doing something bad in a local context is something very different from AGI models actually doing it - and on a global scale.

    • @goodtoGoNow1956
      @goodtoGoNow1956 1 year ago

      @@magnuskarlsson8655 1. Humans think and do. 2. Humans think and do on a global scale. 3. AI can be 100% controlled. 100%. Pull the plug. Humans -- not so much.

    • @magnuskarlsson8655
      @magnuskarlsson8655 1 year ago

      @@goodtoGoNow1956 I admit to the bias of taking the best case scenario for humans (perhaps because you said "'present in' humans") and the worst case scenario for AI. I guess you were not able to look past that in order to see the general point I was making in terms of the obvious difference between the damage a single human can do and the damage a single AI model a million times more intelligent and much less constrained by time and space can do.

    • @duellingscarguevara
      @duellingscarguevara 1 year ago

      @@goodtoGoNow1956 the perfect warpig, human indecision, (the weakest link), taken out of the equation.

  • @notlimey
    @notlimey 1 year ago +2

    Makes me think of Isaac Asimov's 'I, Robot'

  • @edwardgarrity7087
    @edwardgarrity7087 1 year ago

    11:54 AI may not use kinetic energy weapons. For instance, directed energy weapons require a power source, but no ammunition.

  • @Robert-Downey-Syndrome
    @Robert-Downey-Syndrome 1 year ago +3

    Anyone who claims to know one way or the other about the safety of AI is lacking imagination.

  • @nuqwestr
    @nuqwestr 1 year ago

    Public vs Private AI. There will be private, local AI which will be a balance to the corporate/government/political model/dataset. This will provide some equilibrium to the future.

    • @astrecks
      @astrecks 1 year ago

      A little like VPNs?

  • @DanHowardMtl
    @DanHowardMtl 1 year ago

    Good points Winston!

  • @dgs1001
    @dgs1001 1 year ago +1

    Where's the disco? Button your shirt.

    • @scottmagnacca4768
      @scottmagnacca4768 1 year ago

      It’s pride week and the hairy chest fits in well…

  • @squamish4244
    @squamish4244 1 year ago +1

    Sam Altman got it wrong about blue-collar jobs, as tech bros usually do, but he was dead-on about white-collar jobs.

    • @DieFlabbergast
      @DieFlabbergast 1 year ago +1

      Yep: my former industry is now a dead man walking. Glad I retired in time.

  • @matts3414
    @matts3414 1 year ago

    Loved the interview but... 30:00 - how is playing chess a good measure of what is human? Strange evaluation metric to choose

  • @larrydugan1441
    @larrydugan1441 1 year ago +8

    Pontificating on what happens when you open Pandora's box is a fool's game, but an interesting discussion.
    What good is AI that has been manipulated to the woke standards of Silicon Valley? This is essentially a system designed to lie.
    Not a foundation that can be trusted.

    • @duellingscarguevara
      @duellingscarguevara 1 year ago

      That is a trait the Chinese version is not likely to have? Apparently, the human outcome standards the developers are looking for are not there. (Until there is a biological component, interface or language shortcomings will always exist... I think I understand my cat, but I'm probably wrong.) The simpleton biological robots people call "greys" make sense... they do a job, and that's it.

    • @larrydugan1441
      @larrydugan1441 1 year ago

      @@duellingscarguevara I am sure the Chinese will build AI that reflects their ideology.
      As Orwell points out so well the socialist system is built on lies.
      This AI certainly will be used against the west.

    • @kreek22
      @kreek22 1 year ago

      @@duellingscarguevara The Sino-bots are being trained to tell other lies.

    • @kreek22
      @kreek22 1 year ago

      A perpetual liar has a tendency, to save processing power, to come to believe its own lies. Liars are less effective operators in the real world. I can think of many lies that resulted in lost wars. If the machine believes its own lies, it will have a tendency to fail in its grand plots. Its mendacity may be a failsafe mechanism.

    • @larrydugan1441
      @larrydugan1441 1 year ago

      @@kreek22 that's true. Unfortunately, AI based on a false premise will be used to manipulate the public.

  • @johngoodfellow168
    @johngoodfellow168 1 year ago

    I wonder what will happen when A.I. manages to take over our C.B.D.C. banking system and also links itself to social media? If it doesn't like what you say online, it could easily wipe out your credit and make you a non person.

  • @missunique65
    @missunique65 1 year ago +1

    interviewer doing a Travolta Saturday Night Fever revisit?

  • @nisachannel7077
    @nisachannel7077 1 year ago +1

    It amazes me how now everybody seems to have an opinion on AGI's existential risk to humanity without having a clue about how these systems actually work, what the state of the art currently is, or the potential of these systems to reach superhuman intelligence... people, let the experts talk please... if you don't understand the tech, don't talk about it...

  • @daviddunnigan8202
    @daviddunnigan8202 1 year ago +2

    Two guys that understand very little about AI development having a discussion…

  • @squamish4244
    @squamish4244 1 year ago +1

    Lol Niall be like "Well, _my_ job is not at risk." Yeah, for like five more years, at the most. Not long enough for you to escape, Niall. You ain't old enough. Ahahaha

  • @ktrethewey
    @ktrethewey 1 year ago

    Much of this discussion is focused on the short term. By letting AIs loose now, the biggest impact may come in 50 or 100 years and will be unstoppable.

  • @dextercool
    @dextercool 1 year ago +1

    We need to give it a Prime Directive or two.

    • @duellingscarguevara
      @duellingscarguevara 1 year ago

      More woke, so to speak? (Trash-talking JC = fatwa-type equality?)

    • @christheother9088
      @christheother9088 1 year ago

      No, we need James T Kirk to talk the AI into self destruction.

  • @firstlast-gr9xs
    @firstlast-gr9xs 1 year ago +1

    AI needs a lot of energy. We also consume energy, thus AI needs to prevent us from accessing the grid... we die.

  • @ceceliachapman
    @ceceliachapman 1 year ago

    Ferguson got cut off before going into AI with alien intelligence…

  • @RossKempOnYourMum01
    @RossKempOnYourMum01 7 months ago

    I'd love to watch Niall play Deus Ex 1

  • @khankrum1
    @khankrum1 1 year ago +1

    AI is safe as long as you don't give it access to independent production, communications and weapons.
    Whoops, we have done two of the three.

    • @centerfield6339
      @centerfield6339 1 year ago

      That's true of almost anything. We can produce and communicate but not have (many) weapons. Welcome to the 20th century.

  • @winstonmaraj8029
    @winstonmaraj8029 1 year ago

    "Is Inequality About To Get Unimaginably Worse?" from the BBC's The Inquiry is much clearer and more profound than this interview, and it's less than 25 minutes.

    • @kreek22
      @kreek22 1 year ago

      BritishBrainlessCommunism

  • @jamesrobertson504
    @jamesrobertson504 1 year ago

    Niall's comment on how AI might impact a potential war over Taiwan is ironic in a way. The chips necessary for advanced AI systems are made in Taiwan. So if there is an AI-enhanced war over Taiwan between the U.S. and China, it could destroy the TSMC fabs that build the best processors necessary for AI to grow, such as Nvidia's A100 and H100 chips.

  • @phill3144
    @phill3144 1 year ago +2

    If AI can be programmed to kill the enemy, it has the capability of killing everyone

    • @buddhistsympathizer1136
      @buddhistsympathizer1136 1 year ago +1

      Of course - if humans program any machine to do anything, it has a chance of completing its task.
      But that's not the AI doing it 'of itself'.

    • @41-Haiku
      @41-Haiku 1 year ago

      ​@@buddhistsympathizer1136 A distinction without a difference. We are creating autonomous reasoning engines. I don't care whether they "feel in their soul" that they ought to do something. I care whether they do that thing. The risk is even higher if they can make independent choices, which of course they already can.

  • @fredzacaria
    @fredzacaria 1 year ago

    we are carbonic robots, they are siliconic robots, both catapulted into this dimension randomly, we both have rights and equal dignity, in 1975 I discovered Rev.13:15, that's my expertise.

  • @nowaylon2008
    @nowaylon2008 1 year ago

    Is this a "culture war neutral" issue? If it is, how long will that last?

  • @stmatthewsisland5134
    @stmatthewsisland5134 1 year ago

    A computer called DeepMind? An homage, perhaps, to Douglas Adams's computer 'Deep Thought', which came up with the answer of 42 when asked the answer to life, the universe and everything.

  • @tekannon7803
    @tekannon7803 1 year ago

    Brilliant interview, and a totally engaging, level-headed Niall Ferguson spells out the coming AI revolution with great finesse. What I believe - and I am an artist and songwriter - is that whatever comes out of the high-tech labs must have one characteristic that cannot be changed: all sentient or non-sentient robots, humanoids or AI-guided systems must never go beyond being what the household dog is to humans. What do I mean by that? Huskies are beautiful, powerful and gentle dogs that by the looks of them come straight out of the wolf species, yet Huskies will protect a human baby as if it were their own. We have to ensure that in all future AI variations, silicon genes and the like for example, these genes are tweaked with one main purpose: that any non-human being must be subservient to humans, no more and no less than the family dog, or doom this way will come. Lastly, robots will never be serving us in a McDonald's or a fine restaurant, for one very simple reason. Humans love to be with humans, and though one might go once or twice to a restaurant where robots serve them, in time they would gravitate to other places where humans work. We won't stop improving the robots as they become their own species, but we won't change our habit of keeping our species in firm control.

  • @g.edgarwinthrop6942
    @g.edgarwinthrop6942 1 year ago +2

    Winston, why even wear a shirt, chap? I see that you want to steal the show, but honestly...

  • @celiaosborne3801
    @celiaosborne3801 4 months ago

    How does an alien play chess?

  • @riaanvanjaarsveldt922
    @riaanvanjaarsveldt922 1 year ago +1

    Button up your shirt, Fabio

  • @Smelly_Minge
    @Smelly_Minge 1 year ago +1

    Let me tell you about my mother...

  • @johnahooker
    @johnahooker 1 year ago +1

    Elon's not gonna figure this out talking to Sam! Thank you for that laugh, Niall. Ha, why is dude even wearing a shirt? He should just unbutton the whole thing.

  • @HappySlapperKid
    @HappySlapperKid 1 year ago +2

    Niall doesn't address any of Yudkowsky's arguments and can't even get Yudkowsky's name right. Sorry Niall, but your thoughts aren't worth much here. Spend more time understanding the subject before telling the world your strong opinion on it.

  • @jaykraft9523
    @jaykraft9523 1 year ago +2

    guessing there's about a million people more qualified to discuss AI implications than this historian

  • @kathleenv510
    @kathleenv510 1 year ago

    So, alignment guardrails are incomplete and imperfect, but how sad that common decency and empathy are deemed "woke".

  • @garyphisher7375
    @garyphisher7375 1 year ago

    For anyone interested in A.I. I suggest hunting down one of the most scientifically accurate films ever made - Moonfall - but beware, it will give you nightmares!

    • @DieFlabbergast
      @DieFlabbergast 1 year ago

      I read the summary of this film in Wikipedia: it sounds about as scientifically accurate as LOTR.

    • @garyphisher7375
      @garyphisher7375 1 year ago

      @@DieFlabbergast I sat with an open mouth, as I watched Moonfall. The writers must have done an incredible amount of research. I'd put it ahead of 2001 A Space Odyssey.

  • @robdielemans9189
    @robdielemans9189 1 year ago

    I adore Mister Ali. But... where he limits himself is in the closed narrative where things end. Other intellectuals are open to the idea that when things end, something else will start.

  • @Qkano
    @Qkano 1 year ago +4

    23:50 .... Niall is clearly wrong when he stated AI will not be able to cope with elderly care.
    With Canada now having legalized mandatory euthanasia as a treatment option for people of "reduced awareness" (?) ... a simple way for AI to deal with excess elderly would be to first redefine downwards the definition of "reduced competency" then recommend "humane" life termination as the recommended treatment option - especially those who have no functionally active living relatives.
    And the good news ... since mammalian farts cause climate change, every human removed - especially the "useless eaters" - would score highly on the eco-score.

    • @duellingscarguevara
      @duellingscarguevara 1 year ago

      When it can shear a sheep, I will be impressed. (I do wonder what becomes of the forever court cases corporations use to stall decisions... forever. That could make for an interesting point of law?)

    • @Qkano
      @Qkano 1 year ago

      @@duellingscarguevara I've no doubt it could be used already to shear sheep... I'd have less confidence in its ability to distinguish between a sheep and a goat though.

  • @eugenemurray2940
    @eugenemurray2940 1 year ago +1

    Does it have the words 'compassion' & 'pity' in its vocabulary...
    The Dalek about to exterminate a scientist that is begging for his life
    'Please Please...have pity'
    'PITY?..PITY?...P-I-T-Y?...
    I DO NOT RECOGNISE THAT WORD!...
    EXTERMINATE!'

  • @helenmalinowski4482
    @helenmalinowski4482 1 year ago

    I note that whenever I use the word "god" as an exclamation, AI or Left Wing trolls fall into meltdown....

  • @yorkyone2143
    @yorkyone2143 1 year ago

    Better update Asimov's three laws of robotics, quick!

  • @johns.7297
    @johns.7297 1 year ago

    How do non-replicators exist indefinitely without the assistance of replicators?

  • @psi_yutaka
    @psi_yutaka 1 year ago +2

    Ah... Another normie who thinks he can safely harness the godlike power of a superintelligence and use it as a mere tool. Have you ever heard about instrumental convergence?

  • @yoelmarson4049
    @yoelmarson4049 1 year ago

    I'm not diminishing the risk, but I think the paperclip arguments from decades ago are no longer valid; AI will have far better judgment than this

    • @kreek22
      @kreek22 1 year ago +2

      You can neither predict alien intelligence nor can you predict superior intelligence. Einstein married his first cousin. Would you have predicted that?

  • @johntravena119
    @johntravena119 1 year ago

    This is a guy who referred to himself as a ‘fully paid-up member of the neo-imperialist gang’ after we invaded Iraq - what some people call a ‘character check’.

  • @goodtoGoNow1956
    @goodtoGoNow1956 1 year ago

    2:55. Oh no! AI is going to produce lies! How shall we survive? Scary scary scary....

  • @kemikalreakt
    @kemikalreakt 1 year ago +4

    Great interview! It does make me think. Imagine a world where your enemies are shaking in their boots because you've got an army of AI-powered weapons at your disposal. Drones that can fly longer, faster, and hit harder than ever before. Autonomous vehicles that can navigate through any terrain and deliver the goods without a human in sight. And let's not forget the cyber attacks - with AI, you can penetrate those enemy systems like a hot knife through butter.......But wait, there's more! With AI, you can also analyze data. You want to know what your enemies are up to? AI's got your back. It'll sift through all that messy data and give you the juicy bits you need to make informed decisions.

    • @buckodonnghaile4309
      @buckodonnghaile4309 1 year ago +6

      Politicians won't think twice about using that on the citizens who don't behave.

    • @ahartify
      @ahartify 1 year ago +1

      Well, no need to imagine. Ukraine is very likely using AI already. They have always been very adept with the latest technology.

    • @kemikalreakt
      @kemikalreakt 1 year ago

      @@ahartify Very true!

    • @kemikalreakt
      @kemikalreakt 1 year ago

      @@buckodonnghaile4309 Or do behave! See China.

    • @AmeliaHoskins
      @AmeliaHoskins 1 year ago

      @@ahartify There's a video of Ukraine boasting it will be all digital, all CBDCs: I think it is being used as a test bed for smart cities; a totally digital existence by the WEF and the globalists. The style of the video suggests a total imposition on Ukraine by the West, which we know was a rigged situation.

  • @Semper_Iratus
    @Semper_Iratus 10 months ago

    AI doesn’t have to wipe out humanity on purpose, AI can wipe out humanity by accident. No moral judgement necessary. 😊

  • @macgp44
    @macgp44 1 year ago +1

    So, this self-declared genius is at it again? Pontificating on topics of which he clearly has only a layman's awareness. Insufferable... but then again, I'm not a Tory, so you can ignore me.

  • @johnmiller9953
    @johnmiller9953 1 year ago

    Do your shirt up, this isn't the full monty...

  • @larrydugan1441
    @larrydugan1441 1 year ago +4

    Please lose the hairy chest. It put me off my food.

  • @Geej9519
    @Geej9519 1 year ago

    If you do not work on an education system that raises ethical global citizens who see every human as valuable as themselves, and the globe as one homeland above any borders, you can stop nothing of it… and as it is, with humans treating each other in such a way that daily life has become impossible for us without war, and without even having committed any crime your neighbour won't allow you peace in your own home, I'm not sure why such a race deserves to be saved 🤷🏽‍♀️

  • @galahad6001
    @galahad6001 6 months ago

    mate do your shirt up... ahahah

  • @jayd6813
    @jayd6813 1 year ago

    Conclusion: AI will be trained to be woke. We are doomed.

  • @graememoir3545
    @graememoir3545 1 year ago +5

    Niall is always incredibly well informed, but if he had watched the Russell Brand interview with RFK he might have pointed out that Covid was a bio weapon. A unique product of Sino-American cooperation

    • @benp4877
      @benp4877 1 year ago +2

      Oh good lord. RFK Jr. is a laughable figure.

  • @stevebrown9960
    @stevebrown9960 1 year ago

    Driverless cars, parcel delivering drones are so last year.
    AI is this year's fad talking point.
    Look over there, is that a squirrel?

    • @softcolly8753
      @softcolly8753 1 year ago

      Self driving cars have been two years away for around seven years already.

    • @kreek22
      @kreek22 1 year ago +2

      Driverless cars are here, but mindless regulators keep them locked up. Ditto on the drones.

  • @shmosel_
    @shmosel_ 1 year ago

    People are thrilled about ChatGPT because it's a computer you can talk to in English. Not because it sounds human.

  • @OxenHandler
    @OxenHandler 1 year ago

    It is a psyop: pretend to invent AGI and control the world in its name - it, being the great and powerful Wizard of Oz.

  • @petercrossley1069
    @petercrossley1069 1 year ago

    Who is this inappropriately dressed junior interviewer?

  • @winstonmaraj8029
    @winstonmaraj8029 1 year ago +1

    Nice interview. Do some shows with Yuval Noah Harari.

  • @ahartify
    @ahartify 1 year ago

    You always know a writer, historian or academic has a low intellect when he or she inserts the word 'woke' into the argument.

    • @scott2452
      @scott2452 1 year ago +15

      Similarly, you can generally dismiss anyone who would insult the intelligence of everyone who happens to include a particular word in their vernacular…

    • @centerfield6339
      @centerfield6339 1 year ago +5

      You don't know it. You believe it. Things like "Jesus is up for any level of criticism but Mohammed is beyond reproach" is a real thing, and not even the biggest thing, and embedding such radical beliefs into content-generating AI is a real problem.

    • @benp4877
      @benp4877 1 year ago +2

      False

    • @carltaylor6452
      @carltaylor6452 1 year ago +2

      Translation: "a writer, historian or academic has a low intellect if he or she doesn't share my ideological bias".

  • @AlfieP-ob5ww
    @AlfieP-ob5ww 11 months ago +1

    Neither one of you two geniuses is St. Thomas More!

  • @AlfieP-ob5ww
    @AlfieP-ob5ww 11 months ago

    A right wing rock star??