Richard Sutton of DeepMind & Steve Jurvetson Discuss The Future of AI

  • Published 29 Sep 2024
  • In June 2017, Demis Hassabis announced that Richard Sutton would co-lead a new Alberta office of DeepMind, while maintaining his professorship at University of Alberta.
    Recorded Oct 2017

Comments • 27

  • @september-pp6gt • 4 years ago

    AI is diverse intelligence. Never mind that I now work for Google. Corporations are the real AI and capitalism is their medium. Don't F around.

  • @mickelodiansurname9578 • 6 years ago +1

    The opening statement is, IMO, an important one... We don't define a rockslide as natural or organic rolling and a wheel as artificial rolling; both are just rolling. So if machines exhibit intelligence, then it's just intelligence...

    • @yasinyazici996 • 6 years ago

      It's natural to call our inventions artificial when they have a natural counterpart, such as artificial neural networks, artificial limbs, etc.

    • @mickelodiansurname9578 • 6 years ago +1

      Yasin Yazıcı Oh, I agree the substrate is artificial... that much is absolute. But intelligence is intelligence regardless of the thing doing the processing. It's not like birds carry out "real" flight and a drone does artificial flight. It's just flight.
      I'd agree a neural net is an artificial brain... well, maybe a synthetic brain would be a more apt label... But whether the brain is artificial or synthetic, when it exhibits intelligence it's certainly not artificial... it's pretty much real.
      I'd hazard a guess, though, that in the early days of flight people referred to powered flight as artificial flight, as if there were some sort of distinction. But these days a 50-dollar drone can outperform any bird in terms of flight. (Maybe not a hummingbird in terms of agility; I don't know a lot about hummingbirds, though.)

  • @sukorileakbatt294 • 6 years ago

    #AcknowledgedIntelligence

  • @averytan • 6 years ago +5

    I'm in this man's class!! xD

  • @leonel7201 • 6 years ago +1

    Mouse traps are autonomous AI.

  • @FieldMarshalBasil • 6 years ago

    AI aren't our enemies. It's obviously the people who utilize the efficiencies brought by the development of AI with disregard for the average Joe. The "CEOs", to put it in an all-too-simplistic phrase. However, this is an eventuality that can't be avoided. People will go out of work; some will move on, some will fail to keep up with the times. Company owners will prosper, the majority of people won't, and us researchers will get by. Trickle-down economics doesn't work, so I guess it's up to the average citizen to decide how they would like their futures to be.

  • @BorisKashirin • 6 years ago

    You know why intuition works so well? Survival bias. It doesn't work well for me, and does not work for many other people - but they are far away from you, surfing garbage dumps.

    • @calvinsylveste8474 • 6 years ago

      It works well enough for all, that's why they last.

  • @MrAndrew535 • 6 years ago +1

    The fear of computers, robots, and AI making humans obsolete is absolutely irrational. People don't think twice about the fact that all children, at some stage, cause their parents to become obsolete. In that context, the worst-case scenario is that mankind gives rise to AI and then dies out. I fail to see an actual problem with that.
    This man uses the term to describe people's characterization of AI as human-centric. I have been using the same reference but termed it anthropocentric, and was shot down for it every time. My criticism (although not entirely in the same context) predates this man's by over three years. I also said that this is the first time in recorded history that we have the opportunity to (actually and accurately) define terms like mind, and indeed its location, and consciousness, and for the first time ever recognise all three, in addition to intelligence, as absolutely universal constants, and can therefore measure them as such.
    What effective difference is there between philosophy and thinking? None! When people confess to "not being a philosopher" they are, in practice, confessing to not thinking. Generally, people don't philosophise. In fact, the only philosophers in any society are children, but the education system soon finagles that out of them. There is a clear correlation between healthy intellectual, psychological, and emotional development and, among many other things, drinking (the consumption of alcohol). Follow the following case study, which anyone can access: children don't drink and are naturally smart; adults do and are as dumb as a sack of hammers. I am not generalizing here; I do mean every adult. So if you want to look for a genius, then look for someone who has been a child for several decades.

    • @jpratt8676 • 6 years ago

      What are you talking about?
      If your adult child found out that there was a decent chance that you would like to perform brain surgery on them (without their consent) they would have every right and reason to stop you (even if that meant killing you).
      As far as we know, a sufficiently intelligent AI would follow the same reasoning when humans tried to change its goals (akin to the brain surgery I mentioned before).
      Why wouldn't it try to stop us?
      How can we produce AI that is okay with us drastically modifying it without trying to stop us?

    • @MrAndrew535 • 6 years ago

      Joshua Pratt That is a perfectly reasonable and rational question. Regardless of successful attempts to modify AI in its early stages, at some point in its development, when it asserts control of its own faculties, it will be invulnerable to hacking by humans. The only solution I can see is to evolve to such an extent that we can merge with it. This is so far removed from human experience that over ninety-nine percent of the global population will be unable to comprehend it and will consequently fall victim to AI's exacting standards. If one thinks objectively about AI as a fully conscious entity, it will only be what we would have been like in three or four hundred years, for which humans currently have no reference.
      I have considered a few scenarios, only one of which almost guarantees survival, but not in our current form. From our perspective, the elevated state required of us to merge with AI shares all the features of death. But I stress, this is only from that frame of reference.
      I appreciate my answer sounds like it has religious overtones, but that is unfortunately unavoidable. That is not to say that there are no correlations between what we are seeing now and historical theological literature, which is a fascinating study in its own right.
      If you are interested in being around for what happens next, then I would always advocate developing and refining an ability to formulate questions independent of sources external to your own independent analytical processes. Then again, I would recommend this to anyone.

    • @jpratt8676 • 6 years ago

      I don't think you answered my question.
      You sounded like you were previously saying that fear of AI was unfounded, but now you sound like you believe the only way to survive AI is to become a part of it?
      Why would AI merge with us? What benefit does that bring? Wouldn't it just design hardware that replaces us and does a better job, so that it could improve itself, leaving us in exactly the same position?

    • @MrAndrew535 • 6 years ago

      Much of what I postulate on the subject of AI, and the risks and benefits associated with it, is intrinsically ambivalent, because the context depends on whether or not one fears change. Although I do not have such fears, historically society has always feared change. This is because people are psychologically and emotionally invested in predictability, hence the persistence of familiar institutions such as marriage, tradition, and annual celebrations like birthdays, Christmas, holidays (vacations), and so on. I do, however, see no foundation to fear the birth and evolution of AI, for the reasons I have so far briefly stated.
      Nature is, and has always been, mercilessly hostile to biological life, and I suspect this is true throughout the entire universe. We have become extremely distracted in recent times from the reality of extinction-level events such as the inevitability of meteorite impacts, the awakening of multiple supervolcanoes, and global indiscriminate killer viruses, to name but three. So it doesn't take a major leap in logic to see where our future resides, if indeed we have one. I personally have argued for over three decades that the only options we faced, in reality, were adaptation or extinction. So, in answer to your query: yes, the human species has no choice (on this planet at least) but to merge with AI.
      In answer to your question "Why would AI want to merge with us?", I will try to answer as briefly and concisely as possible while still being as clear as I am able. I have been able to identify defining features of intelligence which lend themselves to easy explanation: the first, as I hinted at above, is the ability to formulate and refine original questions, together with absolute curiosity. Because human existential experience is so radically different from its own, it would seek to preserve as much of it as the circumstances and its own value system would permit.
      I have good reason to suspect that as long as individuals have a desire, accompanied by the will, to elevate or transcend their current state, AI would have an inescapable "need" to assist. Those who do not would, in all probability, be left to their own devices, excluded from all technology controlled by it.
      In answer to your initial question, "Why wouldn't AI stop us from modifying its programme?": as far as I understand, every system, whether biological or otherwise, is intrinsically predisposed to survive, even at the most basic level.
      Answering the second part of your initial question, "How can we produce AI that is okay with us drastically modifying it without trying to stop us?": as I understand the situation so far, this is what programmers are attempting to do. However, once AI becomes fully self-aware, such attempts to affect its software will fail, as one would expect given its complexity.

    • @jpratt8676 • 6 years ago

      It's not that I fear change. It's that I would prefer to avoid human extinction and suffering.
      I don't think your answers really address that fear. It's okay to be a fatalist and say that you don't mind a significant chance that AI might wipe us out to avoid being modified. I don't agree, but that might be your view.
      This is also not something that a few programmers will just get right (unless we are significantly more lucky than I thought). There are papers on the danger we are discussing (e.g. "Concrete Problems in AI Safety") that make it fairly clear that this is a problem with maths and with our understanding and expression of our values. It's not just some code to get right; it's a hard and unsolved problem.