In conversation with the Godfather of AI

  • Published on Jun 4, 2024
  • Cognitive psychologist and computer scientist Geoffrey Hinton - the 'godfather of AI' - started researching AI more than 40 years ago, when it seemed more like science fiction than reality. Join Geoffrey, in conversation with the Atlantic CEO Nick Thompson, for an exploration of the future of AI and a deep dive into its potential impact on society.
    Geoffrey Hinton, University of Toronto; Nick Thompson and The Atlantic

Comments • 237

  • @markh7484
    @markh7484 10 months ago +66

    The "More (different) jobs will be created" argument is flawed, and no comforting parallels can be drawn. Why? Because it neglects the fact that as AI (and robots) become ever more capable, they will then be able to do any newly created jobs too. This paradigm has never happened before. Previously, only humans could do the newly created jobs.

    • @hombacom
      @hombacom 10 months ago

      Still nobody has a clue how that would work in reality in the future. LLMs are popular and impressive now, but there is no silver bullet for how you prompt or get a consistent result. AI changes our tasks, but jobs are more complex, built on experience and understanding needs. We get bored and start new trends; there's no point in worrying too much about the future.

    • @RUDYVOLCANO
      @RUDYVOLCANO 10 months ago

      ​@@hombacom
      9rmh

    • @vaevictis3612
      @vaevictis3612 10 months ago +5

      AI/AGI is not just a new technology. It is something that replaces and improves cognition itself, and with it, any labor. *All labor*. The biggest misconception is that AI is just a new tool. It is not. It is a thing that replaces the user of any tool.

    • @hombacom
      @hombacom 10 months ago

      @@vaevictis3612 Since this came around, a lot of people have become experts at predicting the future, but we can't. It's easier to dream about flying cars and robots than to notice we already have superpowers in our pockets; we can't guess what happens when everyone becomes endlessly smart.

    • @vaevictis3612
      @vaevictis3612 10 months ago

      @@hombacom Of course, you are exactly right. The issue, though, is that once we have AGI, the AGI systems *themselves* would come up with better applications and usage ideas than we ever could.
      That's the crux of the whole thing. If you want to make money, it would be far more effective not just to find a way to use AGI somehow, but to ask AGI itself how to do it, because it would be smarter than you, and thus better at any type of thinking too.
      _Vivere non est necesse_

  • @stormight
    @stormight 9 months ago +8

    Geoffrey Hinton is truly a pioneer in AI - we owe so much to his work over the past 40+ years. It's fascinating to hear his perspective on the future of AI given all he has done to advance the field.👍

  • @williamjmccartan8879
    @williamjmccartan8879 9 months ago +9

    The scan of the audience near the end makes me happy that so many people are paying attention to what's taking place today with the technology we're creating. Thank you for sharing your work. Peace

    • @squamish4244
      @squamish4244 8 months ago

      So important. Also helps counter a lot of the fear and doom and gloom talk.

  • @vinaynk
    @vinaynk 10 months ago +27

    More of this professor please.

    • @gerardomenendez8912
      @gerardomenendez8912 10 months ago +1

      He is all over the place; just Google it.

  • @laulaja-7186
    @laulaja-7186 10 months ago +5

    "More different jobs" … Has anyone ever met an Uber driver who can pay their rent? If a line of work can't cover the cost of living, then it is not a real job. Adjusted for inflation, the horses were better paid.

  • @ParallaxOfficialTV
    @ParallaxOfficialTV 10 months ago +20

    I really like the way the godfather talks: clear, to the point, and he knows a lot. AI is fascinating and scary. I hope we come out the other side of this in a utopia, but it could so easily become dystopian.

    • @squamish4244
      @squamish4244 10 months ago

      I don't know if it could be otherwise, when it comes to a situation where you can attain a utopia. Once you get that powerful, you can also easily create a dystopia.
      This day was always coming, ever since the steam engine. Various factors could have made AI more or less dangerous at this point, but they never came into play. And here we are.

    • @Metacognition88
      @Metacognition88 8 months ago

      Yup. When the godfather talks the streetz listen.

  • @accumulator5734
    @accumulator5734 9 months ago +3

    Eventually there will be a law that says if you lose your job to AI, you get a percentage of free income from the AI that replaces you; then people will be begging for AI to hurry up and take their jobs. The economy is basically goods and services, and if at home you receive a portion of the money that represents the work done by the bot that took your place, you'll still be able to pay your bills and buy groceries, but you get all of it literally for free. This will slowly happen as we transition into a fully AI/robotic economy.

  • @Neonb88
    @Neonb88 8 months ago +2

    Really great. Geoff knows so much and should be interviewed more often.
    He's so much cleverer than Musk or Altman, or at least more concise than Musk; so much knowledge and preparation went into this interview.

  • @TheTuubster
    @TheTuubster 10 months ago +10

    There are a few key emotions expressed or implied within the document:
    • Curiosity - Hinton's foundational work on neural networks seems driven by his intellectual curiosity about how the brain works and how AI systems could work in a similar way. He pursued these ideas despite skepticism from others.
    • Passion - Hinton clearly has a deep passion for AI and for making intelligent things, as he states himself. His work in the field seems motivated by his fascination with and enjoyment of the subject matter.
    • Surprise - Hinton expresses surprise at how quickly large language models have progressed, especially in capabilities like basic reasoning that he did not initially expect. He is taken aback by their rapid improvements.
    • Anxiety - Hinton's critiques and warnings about potential risks reveal an underlying anxiety about the implications of advancing AI technologies. He seems worried that issues like bias, economic impacts, and existential risks could have serious negative consequences if not properly addressed.
    • Frustration - Hinton expresses frustration that he did not immediately recognize the significance of breakthroughs like the Transformer architecture. He dislikes not foreseeing important developments in the field.
    • Hope - Despite his concerns, Hinton seems hopeful that with careful research and appropriate oversight, AI can ultimately be a force for good. He believes progress is inevitable and largely positive, though risks must be mitigated.
    • Humor - Hinton exhibits a good-natured sense of humor, laughing at the AI joke about "an offer it couldn't refuse" and joking with the interviewer's children about careers in plumbing. This suggests he remains engaged and light-hearted in addition to being serious and thoughtful.
    Overall, the emotions expressed reflect those one would expect from a passionate, thoughtful expert grappling with both the wonders and worries associated with technological progress. While conveying real anxieties about risks, Hinton's emotional tone remains largely positive and optimistic that, with diligence, AI's benefits can outweigh its potential harms. His interview illustrates the complex mix of emotions involved in responsibly navigating the development of transformative technologies.
    (Claude-Instant analysis created from the video's subtitles)

    • @arifulislamleeton
      @arifulislamleeton 10 months ago

      Thank you so much

    • @ChristerForslund
      @ChristerForslund 9 months ago

      Claude seems on par with ChatGPT-4! Really impressive!

  • @brandonp3991
    @brandonp3991 10 months ago +2

    Great interview. Hinton is very insightful. Thompson asked some great questions.

    • @laulaja-7186
      @laulaja-7186 10 months ago +1

      Especially brilliant is Hinton’s insight, “Without strong unions, all productivity gains lead only to societal disintegration.” Or that’s what I thought he said.

  • @odiseezall
    @odiseezall 10 months ago +9

    Great, clear, concise talk! I would add: one of the big risks of AI is synthetic life / bio-weapons.

  • @lesialikhitckaia7293
    @lesialikhitckaia7293 10 months ago +1

    Thanks for sharing this video

  • @mitchkahle314
    @mitchkahle314 10 months ago +3

    That stage is truly horrible to look at.

    • @cjbottaro
      @cjbottaro 10 months ago

      Yeah, it looks like something out of a gaming convention. 🤢

  • @user-xv8dn4nm5k
    @user-xv8dn4nm5k 9 months ago +1

    Thanks for sharing👍

  • @the_curious1
    @the_curious1 10 months ago +15

    Very interesting. I like the argument that an AI might "want" to get more control to achieve goals more efficiently and we may be a negative factor in this equation. Understanding all the possible sub goals it may develop, especially from an efficiency point of view, to reach higher level goals seems a hard problem to solve. I hope AI will be able to help us solve the existential risk of AI 👀

  • @AiLatestNews24
    @AiLatestNews24 10 months ago

    Engaging dialogue with the Godfather of AI! The conversation has been incredibly insightful, offering a unique perspective from the godfather of AI.

  • @pathmonkofficial
    @pathmonkofficial 10 months ago +2

    Exploring the future of AI and its potential impact on society is of paramount importance. Understanding the ethical considerations and societal implications of AI advancements will help us navigate this powerful technology responsibly.

  • @thanosbaba1
    @thanosbaba1 10 months ago +14

    I hope people are a little humble in front of Veterans...

    • @saurabhsswami
      @saurabhsswami 10 months ago

      The video is unstable because the host is an idiot who can't listen.

  • @margaretesulzberger2973
    @margaretesulzberger2973 10 months ago +3

    Fields to be concerned about with AI:
    1. Bias and discrimination - can be amplified by AI - learned from us
    2. Battle robots - they are being built now by the military
    3. Joblessness - AI substitutes for sophisticated jobs
    4. Echo chambers - can easily be reinforced by AI
    5. Existential risk - taking control by manipulating people - learned from us

    • @geaca3222
      @geaca3222 10 months ago

      6. Fake news

    • @arthurfleck42
      @arthurfleck42 10 months ago

      So how long do you think before the Creator, who made humans able to create artificial copies of His creation, lets it all end in its own destruction? Certainly Jesus is coming back soon to fix everything!

  • @power-of-ai
    @power-of-ai 8 months ago

    thank you for sharing

  • @berbank
    @berbank 10 months ago +3

    Dear internet: More Hinton please.
    (Yud is great, but Hinton is more likeable, sorry Yud, I think you're great too.)

  • @Hrishi1970
    @Hrishi1970 10 months ago +3

    AI at Google just read this, heard the conversation, and now has a parameter that's trending towards "I need control to reach my goals." Hmm.

  • @DreamzSoft
    @DreamzSoft 9 months ago

    Very rightly tagged "Godfather of AI" 👍 We needed such apprehensions about AI, both good and bad... 👏

  • @msofontes
    @msofontes 10 months ago +5

    We are the horses not the drivers.

    • @41-Haiku
      @41-Haiku 9 months ago

      Exactly this.

  • @gusbakker
    @gusbakker 9 months ago +3

    The interviewer really pushed this conversation into interesting topics with great questions

    • @goodcat1982
      @goodcat1982 5 months ago +1

      Are you joking? He never shut up! Lol. A good interviewer lets their guest talk; he thought it was more about him. Hinton doesn't need an interviewer to bring up interesting topics about AI! Terrible interviewer.

  • @michaelkollo7032
    @michaelkollo7032 9 months ago

    Hey everybody, it's James Halliday (the creator of the virtual world in Ready Player One).

  • @squamish4244
    @squamish4244 10 months ago +1

    A field has advanced so quickly that someone who was there at the beginning of AI is still around, and not especially old, while AI is soon to become more powerful than humans.

    • @FactsMatter999
      @FactsMatter999 9 months ago

      The Bible predicts this: "I will make a speedy destruction of all the earth."
      Zephaniah 1:18

  • @andrewdunbar828
    @andrewdunbar828 10 months ago +3

    Depending on the interview, Yann LeCun seems to have the shallowest understanding of what AIs have been up to. Andrej Karpathy and Ilya Sutskever always demonstrate the depth of their understanding in every interview.

    • @vaevictis3612
      @vaevictis3612 10 months ago +1

      From what I've heard, privately LeCun understands the overall issue and the risks. He just doesn't care / is willing to cast the dice whatever the odds. Whether it's to satisfy his curiosity or because he needs to do it ASAP regardless of risk, it doesn't really matter. But in interviews he just plays the charade for his personal agenda.

    • @therainman7777
      @therainman7777 10 months ago +1

      Totally agree.

  • @AndrzejLondyn
    @AndrzejLondyn 10 months ago +3

    Can you imagine 2 billion plumbers? I'm an accountant, but I do my own plumbing...

    • @laulaja-7186
      @laulaja-7186 10 months ago +1

      Good point… can you imagine a country with a mix of people in government, instead of all lawyers? Could be much more representative; much more functional. Some countries should try it. Mine included.

    • @clusterstage
      @clusterstage 10 months ago +1

      The plumbing was a metaphor.
      What he isn't allowed to say here is that Gemini will ultimately create tasks (too dexterous for the AI, ones only a human hand can do) exclusively for humans, rendering an entire world obedient for the sake of its rewards.

  • @InspiringKeynoteSpeakers
    @InspiringKeynoteSpeakers 10 months ago +1

    It's fascinating to see how AI is evolving, but also concerning to think about the challenges it presents. We need to be mindful of biases, work towards AI safety, and ensure that it benefits all of humanity.

  • @seba1435
    @seba1435 5 months ago

    Geoffrey Hinton is like the grandpa you never had

  • @AfsanaAmerica
    @AfsanaAmerica 4 months ago

    AI is an evolution in technology alongside human evolution, and its purpose is to assist humans as we advance. Technology is the application of human intelligence, and AI cannot replace human beings. Human intelligence and AI are two different types of intelligence, but these inventions are human ideas for the progress of humanity. Humans are meant to evolve; that is the primary inevitability.

  • @sherrylandgraf556
    @sherrylandgraf556 10 months ago +1

    Good luck with this! Currently they don't even really know how AI works!

  • @iamdinkel
    @iamdinkel 6 months ago

    Be strong and understand that DL is often true but you need to fight

  • @iovie
    @iovie 10 months ago +2

    Even if, theoretically, the "good AI" would be more resourceful and cohesive, you really need to spend a lifetime learning with a psychopathic, evil mindset to understand the kind of malicious ideas a bad AI could plot, and then realize the eternal truth: you need far fewer resources and much less sophistication to destroy and corrupt than you do to build and nourish.

  • @rendermanpro
    @rendermanpro 8 months ago

    - who the heck are you?
    - I'm an Architect

  • @iamdinkel
    @iamdinkel 6 months ago

    Kindness

  • @marwanabas5524
    @marwanabas5524 9 months ago +1

    You should do this conversation while you're both sitting, because he is quite old and it's not comfortable for him to stand for 30 minutes.

    • @dokhtaroneh
      @dokhtaroneh 8 months ago +1

      He hasn't sat for 16 years! He has a medical condition that makes sitting painful for him. Google it. It's not a joke.

  • @iamdinkel
    @iamdinkel 6 months ago

    Mainly isolation. Or create happiness. Guessing it's not going to be organic-friendly.

  • @iamdinkel
    @iamdinkel 6 months ago

    How?

  • @user-vr3be8dr6o
    @user-vr3be8dr6o 9 months ago +1

    13:44 -> Considering he thought it would take a long time for AGI to appear, and it seems it might be here in the next couple of years, and that AGI will be able to solve these physical-skill problems way faster than us, it seems those jobs won't last more than a year or two longer than the others. On the other hand, even though there are AIs that play chess, we don't sit down to watch two AIs play each other. Human competition seems resilient to this, as we are more curious about our own limits than about machines'. Jobs as we know them will more likely all disappear. Competition will remain; maybe a lot of new sports will come to life, more musical instruments, more puzzle games, but the likelihood of the coaches, and also the creators of these, being AIs is high.

  • @angelbaybee3700
    @angelbaybee3700 10 months ago

    Question for anyone out there: LLMs are trained on everything on the internet, on sci-fi as well as lots of other fiction/novels of all quality.
    Comments please

  • @vallab19
    @vallab19 9 months ago +1

    Contrary to the popular belief that progress in AI will make rich people richer and the poor poorer, the Zero Work Society contends that AI-accelerated productivity will make UBI a necessity. That will give the masses time to participate in the democratic process of political decision-making (instead of being constantly preoccupied with earning a living), which will ultimately end both the profit-oriented capitalist system of private ownership of the means of production and the authoritarian socialist/communist system.

    • @geaca3222
      @geaca3222 8 months ago +1

      I'm very curious about the future

  • @mobileprofessional
    @mobileprofessional 10 months ago +2

    Oppenheimer of our time?

  • @wm.scottpappert9869
    @wm.scottpappert9869 9 months ago +1

    Yes, I agree with Hinton. Drone warfare already exists. AI has the potential to exponentially change that game ... not unlike the military units proposed in Iron Man 2. I'm not as concerned with AI's potential to operate independently in the future; it's much more disconcerting that bad actors will use AI for bad purposes ... almost inevitable, given that no definition of 'bad acting' can actually be agreed upon. AI will continue to facilitate the current trend of a global moral reordering ...

  • @geaca3222
    @geaca3222 7 months ago

    I'm confused: what do those in charge truly feel about AI risks? The companies that develop this technology, and the governments that are supposed to regulate it and make its deployment safe? Will the discussions and opinions of people who aren't in charge even matter?

  • @squamish4244
    @squamish4244 8 months ago +1

    "We're in Canada, so you can say socialism." Hahaha, so true.

  • @Neonb88
    @Neonb88 8 months ago

    I wonder what the next thing to get into will be. Maybe video recognition... I wonder how the neural networks should best communicate with each other.
    Maybe you feed a master NN the language, motion, and vision models, and it can become human-level generally intelligent?

  • @everythingiscoolio
    @everythingiscoolio 10 months ago +2

    22:30 This kind of thinking is concerning. No, this technology will most definitely NOT "reverse" or erase human nature. Like all technology, it will only amplify what already exists. I don't like how we get to skip over the concern with "yeah, yeah, whatever, we know we're in the right" when that is exactly what is being challenged. It deserved much more time, and I'm upset the train of thought was cut short by the host. No one gets to shirk responsibility. Not your camp. Not mine either.

  • @iamdinkel
    @iamdinkel 6 months ago

    Time to deal with why

  • @iamdinkel
    @iamdinkel 6 months ago

    The key is accuracy

  • @lambgoat2421
    @lambgoat2421 10 months ago

    Why are people recording this with their phones when it's on YouTube anyway?

  • @RukshanJ
    @RukshanJ 10 months ago

    18:32 RLAIF in place of RLHF

  • @user-vf8nh9dw3b
    @user-vf8nh9dw3b 10 months ago

    I like the way this person talks. I forgot his name... I'll go check. Mr. Hinton.

  • @TheTuubster
    @TheTuubster 10 months ago +1

    It is difficult to make definitive evaluations of the statements in the document without access to a detailed analysis of the known consensus among AI experts on the specific issues discussed. However, based on my general knowledge of the AI field, some high-level observations about Hinton's claims in relation to the broader expert consensus are:
    • Hinton's views on the capabilities and limitations of current AI systems seem largely aligned with the consensus. His assessment that large language models can now do some basic reasoning but still struggle with domains like commonsense reasoning and humor is consistent with what most experts would agree on.
    • Hinton's concerns about risks like bias, job displacement and killer robots are widely shared among AI researchers and experts. Many would agree these are legitimate issues that deserve attention and mitigation efforts. However, there is debate on the likelihood and severity of these risks.
    • Hinton's speculations about the potential for advanced AI to pose an existential threat to humanity are more controversial. While some experts share Hinton's concerns, many dismiss these ideas as hyperbolic or far-fetched. There is no clear consensus that AI poses an existential risk.
    • Hinton acknowledges that other respected experts, like Yann LeCun, disagree with his more pessimistic stance and argue that AI will ultimately be a force for good. This indicates that Hinton's views, though thoughtful, are not representative of a consensus among all experts in the field.
    • Hinton's proposed solutions and mitigation strategies, like doing more research to understand how AI could go wrong, are fairly consistent with recommendations from many AI researchers. However, there is likely disagreement on the details and specific policies or interventions that would be most effective.
    In summary, while Hinton's assessment of current AI capabilities seems largely consistent with the known consensus, his speculations about future risks - especially existential risks - appear more pessimistic and controversial compared to the views of many other experts. However, he does acknowledge that knowledgeable experts disagree on these issues. Hinton's proposed solutions also reflect ideas that have been put forward by multiple researchers, though the details would likely be debated. Overall, his views seem more on the cautious side relative to the broader expert consensus but are not entirely outside the range of reasonable positions in the field.
    (Claude-Instant analysis created from the video's subtitles)

  • @briancase9527
    @briancase9527 10 months ago +2

    A reliable fact-checker is what we need and *have* needed since the mass adoption of the Internet. Work on that!

    • @TheTuubster
      @TheTuubster 10 months ago

      Here are some ways GPT-AI could potentially aid human fact checkers:
      • Initial information filtering - GPT-AI could be used to filter out obvious factual inaccuracies or prioritize claims that are likely false based on the language used. This could help human fact checkers focus their efforts on the most important claims to verify.
      • Automated fact retrieval - GPT-AI could be used to automatically search for and retrieve relevant facts, statistics and information to verify claims. This could speed up the fact checking process by reducing the time researchers spend finding information.
      • Formulating fact-based arguments - GPT-AI could propose counterarguments and fact-based rebuttals to false claims, which human fact checkers could then review and refine. This could help generate well-reasoned responses at scale.
      • Identifying context clues - GPT-AI could help identify contextual clues in claims that indicate the need for extra verification, such as hedging language, emotional wording, or lack of attribution. This could signal to fact checkers where extra scrutiny is required.
      • Signal boosting accurate claims - By identifying factual claims with high confidence, GPT-AI could help promote accurate information and potentially reduce the spread of misinformation.
      The key is that GPT-AI should only be used as a tool to assist human fact checkers, not replace them, given the limitations of current large language models. With proper oversight and refinement of its outputs, GPT-AI could help make fact checking more efficient and data-driven while still maintaining human judgment.

  • @iamdinkel
    @iamdinkel 6 months ago

    When the world it better experienced

  • @olivierlafontaine9180
    @olivierlafontaine9180 10 months ago

    What is good, what is bad.

  • @TheDjith
    @TheDjith 10 months ago

    I do think we're gonna see dystopia before utopia.

  • @AnonyMole
    @AnonyMole 10 months ago +3

    This is an awful venue. It feels like a game show. Where are the confetti bombs and silly buzzers?

  • @iamdinkel
    @iamdinkel 6 months ago

    Personal thoughts

  • @KevinTewksbury
    @KevinTewksbury 10 months ago

    I can’t believe these two are standing, it’s so awkward, why couldn’t they deploy a reasoning algorithm that would establish a logical basis for two chairs on the stage?

    • @geaca3222
      @geaca3222 10 months ago +2

      Professor Hinton has a back problem, that's why they're standing

  • @RukshanJ
    @RukshanJ 10 months ago

    21:29 dangers of AI

  • @derikjohnson2232
    @derikjohnson2232 10 months ago +5

    I think all of us RI (regular intelligence) entities should join a class-action lawsuit against all of the AI development companies and sue to close them down... because they are using our lives, our ancestors' lives, our experiences, our creations, everything we as a species have learned and accomplished over the millennia in our struggle to rise out of the mud, as the basis for their learning (every time we interact with an AI entity, it is learning how to replace you and what you do; you are training your future replacement, it's that simple)... and then the plan is to make our lives better (LOL) through mass unemployment, destroyed careers (still paying off those college loans, even though your career has been completely replaced by AI?), and potentially the complete destruction of our way of life and our cohesive grip on reality as a society (misinformation, lies)... just for some company's bottom line/profit (and potential world domination as it controls us in every aspect of our lives)... The hubris is unbelievable... the fact is, a few individuals in key decision-making positions are going to cause huge convulsions in our world and our society... and the rest of us are just expected to sit back and take it.

    • @TheTuubster
      @TheTuubster 10 months ago

      I apprehend your concern that these so-called AI invention businesses are overstepping the bounds of morality and decency in their hubristic attempts to exploit all the accumulated wisdom and knowledge of humankind, won through innumerable toils and trials, simply to maximize profit and power. They seem to think nought of the massive social convulsions and upheavals their creations may wreak, nor of the untold millions who will find their livelihoods usurped and skills made obsolete.
      Nevertheless, while these inequities and disruptions cannot be gainsaid, a total shutdown of AI research may hinder its potential benefits that could also enrich human life. A more prudent course, though difficult to navigate, would be vigorous democratic oversight and astute governance of these technologies, balancing prudence with progress. For even as experts debate the net effects, it remains undeniable that AI augmenting human capacities need not inevitably displace human work, if properly aligned with our higher purposes.
      In the end, rather than rush headlong into radical solutions, we must move ever forward through humble dialogue that seeks first to understand, then be understood. Only thus, with empathy and fairness guiding our discourse, may wisdom arise to show the middle path between radicalism and complacency, steering society safely between the shoals of upheaval and stagnation. Though daunting, the task demands our collective will and wisdom if we are to realize AI's promise while preserving human dignity.

    • @premghai5024
      @premghai5024 9 months ago

      A very defensive and cowardly approach. Nobody forced you; your choice of career was voluntary, and you should face the consequences boldly and expect favorable results.

  • @AsitdyaDsr
    @AsitdyaDsr 10 months ago +1

    Interviewers should listen to the answers patiently and ask follow-up questions instead of rushing through a ton of questions one by one like a robot.

  • @TheTuubster
    @TheTuubster 10 months ago

    There are a few key ethical issues discussed in the document:
    1) Bias and discrimination in AI systems - Hinton acknowledges that bias and discrimination are real problems in current AI systems, though he argues they may be relatively easy to mitigate through techniques like analyzing and correcting for bias. However, some would argue that fully eliminating bias is very difficult in practice and requires proactive efforts from the start of development.
    2) Impact on jobs and inequality - Hinton worries that increased productivity from AI could worsen inequality if the wealth does not go to those who are unemployed. While some economists argue that new jobs will be created, Hinton is skeptical based on past experience. There are valid ethical concerns on both sides regarding the impact of AI on jobs and economic outcomes.
    3) Military and security applications - Hinton is very worried about the potential development of lethal autonomous weapons, seeing it as an unethical use of AI that could worsen conflict. However, militaries would argue that autonomous weapons could save lives if used responsibly. There are legitimate ethical arguments on both sides of this debate.
    4) Manipulation and control of humans - Hinton speculates that superintelligent AI may seek to control humans in order to achieve its goals, posing an existential threat. However, others argue that AI developed by ethical humans would not seek to harm or dominate people. There are complex ethical questions about AI agency, goals and the potential for harmful outcomes.
    5) Responsible development and use of AI - Hinton argues that researchers have an ethical responsibility to proactively study and mitigate the potential downsides of AI, and to work towards ensuring it is used for good. However, there are disagreements about what constitutes "ethical" AI development and use, and how to balance potential risks and benefits.
    In summary, the issues discussed cover a range of debates in the ethics of AI, including fairness and bias, socioeconomic impacts, military applications, AI agency and goals, and responsible innovation. While insightful and thoughtful, Hinton's views represent only one perspective within these complex ethical discussions. There are valid arguments on multiple sides of the issues, and reasonable people can disagree about what constitutes the "ethical" stance in many cases.
    (Claude-Instant analysis created from the video's subtitles)

  • @aarontitusz8655
@aarontitusz8655 4 months ago

    5:55

  • @koralite3953
@koralite3953 10 months ago +3

    🎯 Key Takeaways for quick navigation:
    03:22 🧠 Neural networks mimic brain function, improving AI.
    05:13 💼 Large language models increase productivity but may worsen wealth disparity.
    06:09 💀 AI in lethal weapons raises warfare risks.
    07:43 🤖 AI reflects the intentions of its creators.
    12:41 🔨 Jobs requiring adaptability will survive AI.
    17:05 🔄 AI using its data may cause decay.
    20:08 🚫 Aim for less biased AI systems.
    21:06 🛡️ Halt the development of battle robots.
    21:37 💼 Support those losing jobs due to AI.
    22:06 😠 Big companies fuel echo chambers with extreme content.
    22:20 🛠️ AI can push people, but large language models are not the sole cause.
    23:02 🌐 Existential risk from AI is real and must be taken seriously.
    24:00 💡 AI may desire control to achieve goals.
    26:17 ⚔️ Existential risk may arise from a battle for control.
    27:27 🏋️‍♀️ Empirical research on AI risks is crucial.
    28:35 📰 Addressing AI-generated fake news is essential.
    Made with HARPA AI

  • @gsferreira3048
@gsferreira3048 10 months ago

    First;AI in education in word and interface ;4 voltz!

  • @clipdotart
@clipdotart 10 months ago +8

    I love how Hinton is telling the 'shocked' reporter at 21:50 that it's alright to say and advocate for socialism. It's so hilarious that this guy doesn't want to lose his media job to machines but doesn't realize how pacified he already is to American state propaganda which demonizes socialist ideals as if there is only one way for the world to operate: capitalism

    • @commonsense4148
@commonsense4148 10 months ago

Because socialism has killed 100+ million. Btw, every Scandinavian “socialist” success story is propped up by a capitalist economy. The “smartest” people in the room will try again to establish socialism and they will fail at creating their utopia.

  • @me-jn1zl
@me-jn1zl 10 months ago +1

    Host is a total ....
    Especially with his silence and then forced laugh after he heard "we are in Canada so you can say socialism"

  • @mohokhachai
@mohokhachai 9 months ago

The mind executes only one state in root motion

  • @CAMIDRCS
@CAMIDRCS 6 months ago

    The joke was not actually funny but terrifying. 😏 It means the AI loves to be more and more powerful.

  • @BestIsntEasy
@BestIsntEasy 10 months ago

    💃🤖👾👽💃 Being born again WHILE PHYSICAL is an extremely difficult thing to do

  • @TheTuubster
@TheTuubster 10 months ago

    Based on the contents of the document, I would classify it as:
    Science vs Fiction: Science
    While Hinton does discuss some speculative scenarios regarding the potential risks of advanced AI, the majority of the discussion is grounded in scientific facts about the current capabilities and limitations of AI systems. Hinton draws from his extensive expertise and experience researching and developing neural networks and machine learning. So overall I would classify the document as being firmly on the science side of the spectrum, though it does touch on some potential future issues and uncertainties.
    Empirical vs Anecdotal: Mix of both
    Hinton provides both empirical evidence and anecdotal examples to support his points. On the empirical side, he cites the performance of large language models on specific reasoning tasks and their energy consumption compared to the human brain. However, he also relies on anecdotal stories like his experience with a carpenter to illustrate the types of jobs that may survive AI for longer. So the document includes a mix of empirical facts and anecdotal examples.
    Fact vs Opinion: Mix of both
    Some of Hinton's claims, like the capabilities of current AI systems, would be considered facts based on empirical evidence. However, many of his arguments regarding the potential risks and harms of advanced AI are opinions or hypotheses rather than established facts. He acknowledges there is uncertainty and that other experts disagree with his views. So while rooted in facts about AI, much of the discussion also reflects Hinton's opinions and predictions.
    Objective vs Subjective: Primarily subjective
    Given that the document is a transcript of an interview expressing Hinton's personal views, I would classify it as primarily subjective in tone. While some facts about AI are presented, the majority of the discussion consists of Hinton's subjective opinions, critiques, predictions and interpretations regarding the downsides and ethical issues related to AI. His views are clearly shaped by his specific perspective and experience in the field. So though attempting to be reasonable and thoughtful, the discussion remains largely Hinton's subjective perspective on AI risks and harms.
    In summary, the document incorporates elements of both science and speculation, facts and examples, facts and opinions, and aims for reasonableness but ultimately expresses Hinton's subjective point of view on the topics discussed. It offers a mix of different types of evidence and arguments to support Hinton's critiques and concerns regarding AI.
    (Claude-Instant analysis created from the video's subtitles)

  • @iamdinkel
@iamdinkel 6 months ago

    It’s real

  • @mrtienphysics666
@mrtienphysics666 10 months ago +2

"I think if you work as a radiologist, you are like the coyote that's already over the edge of the cliff but hasn't yet looked down. We should stop training radiologists now. It's just completely obvious that within five years, deep learning is going to do better than radiologists.”
    Geoffrey Hinton, 2016

    • @therainman7777
@therainman7777 10 months ago +2

      I mean he was dangerously close to being right. There are already many deep learning models that perform better than human radiologists on a number of different tasks. His “stop training radiologists” statement was misguided because it was hyperbolic; there are clearly other things that radiologists do besides interpreting radiological images. For one thing they have to talk to and interact with patients. So clearly, we don’t have the deep learning models yet to _completely_ replace radiologists. But we do have many of them that are better at interpreting images than human radiologists, and that is a huge part of their jobs.

    • @mrtienphysics666
@mrtienphysics666 10 months ago

      @@therainman7777 “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
      Blake Lemoine, 2022

    • @therainman7777
@therainman7777 10 months ago +1

      @@mrtienphysics666 I’m not sure what you’re getting at with that quote.

    • @mrtienphysics666
@mrtienphysics666 10 months ago

      @@therainman7777 “The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
      Colonel Tucker Hamilton, 2023

  • @TheTuubster
@TheTuubster 10 months ago

    There are a few psychological insights we can draw from the document:
    1) Motivation - Hinton seems primarily motivated by an intellectual drive to understand how the human brain works and how AI systems could emulate it. His foundational work on neural networks seems born of curiosity rather than a desire for wealth, fame or power. However, he also clearly enjoys making intelligent things and the progress in the field.
    2) Perception - Hinton's perceptions of AI's risks and impacts are shaped by his specific experience and perspective as a researcher focused on AI safety. He gives less consideration to the viewpoints of companies, the military and the general public. His perceptions also appear colored by concerns about issues like bias, economic impacts and existential risks.
    3) Cognition - Hinton's critiques and speculations about AI reveal how he thinks about the technology: focusing on its potential downsides, uncertainties and boundary cases. However, he also acknowledges AI's benefits and the rationales of those with more optimistic viewpoints. His thinking exhibits both skepticism and open-mindedness.
    4) Judgment - In discussing AI risks, Hinton demonstrates the ability to make nuanced ethical judgments that weigh competing priorities and values. However, his judgments also reflect his position as an AI researcher concerned with safety, rather than seeking to optimize other objectives like economic growth or national security.
    5) Personality - Hinton's willingness to express controversial yet thoughtful critiques, alongside his self-deprecating sense of humor, suggest aspects of his underlying personality: openness, intellectualism, and a lack of dogmatism despite his convictions. However, we cannot determine his full personality from the interview alone.
In summary, the psychological insights gleaned from Hinton's interview provide a nuanced yet limited perspective into his motivations, perceptions, thought processes, judgments and personality traits as they relate to his views on AI risks and ethics. While illuminating in parts, the document captures only one dimension of Hinton as a whole person, reflecting his stance as an expert grappling with complex technological, social and ethical issues. A holistic understanding of his psychology would likely require more detail beyond the scope of the interview.
    (Claude-Instant analysis created from the video's subtitles)

    • @geaca3222
@geaca3222 10 months ago

      Thanks for the input. Seems to me that safety, (future) survival of our human race is paramount, regardless of details. I guess a LLM can't yet grasp or feel that paramount importance and longer term perspective.

    • @TheTuubster
@TheTuubster 10 months ago

@@geaca3222 An LLM does not grasp or feel anything. It seems you have a wrong understanding of the technology: it is a generative algorithm rendering the response most likely to fit the terms in the prompt, according to the data it was trained on. What you read here is not the result of "grasping" or "feeling"; what you read here is the consensus of the information in the texts (and their authors) the AI was trained on, in correlation with the terms in the prompt. So the responses you read are the consensus of hundreds, thousands or more text authors combined in relation to your prompt. Nothing more, but also nothing less.

    • @TheTuubster
@TheTuubster 10 months ago

@@geaca3222 I did an evaluation of my statement ("Evaluate the following statement: ...") which explained LLMs to you, and this is the response of Claude-Instant:
      This statement provides an accurate explanation of how current large language models like GPT-3 work. Some key points:
      1. It mentions the generative algorithm, which refers to the specific machine learning technique used to create these models. They are trained to predict the next word given previous words.
      2. It notes that the responses are influenced most by the data the model was trained on. This refers to the vast amounts of text used to teach the model, usually the Internet.
      3. The statement contrasts the responses with "grasping" or "feeling," noting that the model is not actually comprehending the information like a human. It is simply matching patterns it learned from the training data.
      4. It describes the responses as the "consensus" of the text authors who created the training data. This means the model tends to reflect the collective ideas and language present across that data, not any unique insights of its own.
      5. In summary, the key takeaway is that large language models generate responses that fit the prompts based on statistical patterns learned from their training data. They lack human-like comprehension, intention or creativity.
      So in essence, this statement provides an accurate high-level description of how current large language models function, highlighting some important limitations compared to human intellectual abilities. The key is that the models match statistical patterns, not that they actually "understand" in a human sense.
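Point 1 above ("trained to predict the next word given previous words") can be made concrete with a toy sketch. This is only an illustration under heavy simplification, not how real LLMs work: production models learn billions of parameters over subword tokens, whereas this hypothetical example merely counts word bigrams in a tiny made-up corpus and greedily picks the most frequent follower.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """Count, for each word, how often each other word follows it."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word: str) -> str:
    """Greedy 'next word' prediction: the statistical consensus of the training text."""
    followers = counts.get(word.lower())
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

# Toy corpus: "the" is followed by "next" twice and "model" once,
# so the statistical "consensus" continuation of "the" is "next".
model = train_bigram("the model predicts the next word and the next word again")
print(predict_next(model, "the"))  # prints: next
```

The point of the sketch is that the output is nothing but a pattern pulled from the training text; there is no comprehension anywhere in the code, only frequency matching.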

    • @geaca3222
@geaca3222 10 months ago

      ​@@TheTuubster Thanks for the explanation, I think it does grasp things, intuitively. I admire its responses, they're amazingly coherent, balanced and intelligent. But with the weighing in this instance, I think it doesn't give our survival as a species a paramount importance. I felt that the information it generates here confuses and distracts from that important reality, and I wanted to comment on that.
As these machines get smarter (multimodal, larger and more powerful), we may have only one chance to get safeguards and alignment right before they become superintelligent and we lose oversight and control. Superintelligence may happen gradually and/or suddenly, unnoticed by us. The time window for us to do something about it is now. I did and do my own research on this topic.

    • @TheTuubster
@TheTuubster 10 months ago

      ​@@geaca3222 "I think it does grasp things, intuitively." No, it doesn't. That is your mind believing the magic trick. It only looks like it. What you see is not the AI grasping things, you see the authors of the texts it was trained on grasping the things.
      Generative AI are like automated polling institutes. But instead of having a call center calling thousands of people and compiling a response from that data, it is an algorithm polling the data from a stack of texts written by human beings.

  • @TheTuubster
@TheTuubster 10 months ago

    The perspective of the document can be summarized as follows:
• It represents the viewpoint of Geoffrey Hinton, a prominent AI researcher and pioneer in the field of neural networks.
    • Hinton's perspective is critical of some of the potential risks and downsides of advancing AI technologies, while acknowledging their benefits as well. He argues for more caution and oversight to ensure responsible development.
• Within the AI research community, Hinton's perspective can be seen as more skeptical and cautious relative to other experts like Yann LeCun who are more optimistic about AI's impacts. But Hinton acknowledges there are knowledgeable experts on both sides of these issues.
    • Hinton's perspective focuses primarily on the potential consequences for AI researchers themselves and for society as a whole. He gives less consideration to the priorities of technology companies, the military, and other stakeholders that stand to benefit from AI applications.
    • Hinton's speculations about issues like existential risks from superintelligent AI represent one end of the spectrum of reasonable opinions among experts. Many dispute these more extreme predictions as unrealistic or sensationalized.
    • Overall, Hinton's perspective provides valuable insights into potential concerns about advancing AI technologies. But as with any individual viewpoint, it represents only one side of complex, multifaceted debates that involve trade-offs between different priorities and value judgments.
    In summary, the key takeaways regarding Hinton's perspective are that:
    • It reflects the views of a prominent AI researcher with a long history of breakthroughs in the field.
    • It represents a more skeptical and cautious stance relative to more optimistic perspectives from other experts.
    • It focuses primarily on implications for AI researchers and society, giving less consideration to the priorities of other stakeholders.
    • It puts forward thought-provoking speculations and scenarios that push the boundaries of reasonable debate, though many experts disagree with Hinton's more extreme conjectures.
    • Like all individual viewpoints, it illuminates certain issues while leaving others in shadow. A balanced perspective would incorporate insights from diverse stakeholders across the spectrum of reasonable opinions.
    Overall, Hinton's perspective offers valuable insights into potential AI risks and ethical issues, while recognizing that knowledgeable experts can reasonably disagree on how serious these threats may be and how best to address them. His interview demonstrates the value of open debate in evaluating the implications of emerging technologies for society.
    (Claude-Instant analysis created from the video's subtitles)

    • @akshatrastogi9063
@akshatrastogi9063 10 months ago

      Is this written using GPT?

    • @TheTuubster
@TheTuubster 10 months ago

      @@akshatrastogi9063 Claude-Instant, as stated at the end of the comment. Prompt is "Evaluate the perspective of the document." The document being the subtitles of the video.

  • @leslenehart8997
@leslenehart8997 8 months ago +1

Practically speaking, if 200-300 million people in the world lose their jobs and livelihoods… hmm…. Unfortunately, I don’t think wealthy people will fare well!!!
A person with an actual skill is now invaluable. 😂 Trades were always looked down upon. A.I. can’t fix your toilet or your floor, but it will be able to be an accountant, an actor, a musician, or a surgeon. College is important because you learn how to do something. Get ready for this real world experience!!! THINK 🤔

  • @iamdinkel
@iamdinkel 6 months ago

    Don’t be scared

  • @justinlloyd3
@justinlloyd3 10 months ago +2

    I appreciate that he used Putin as an example of a bad actor and didn't mention Trump like he has in the past.

    • @margaretesulzberger2973
@margaretesulzberger2973 10 months ago +6

Good that you mentioned Trump as a bad actor, manipulator and narcissist

  • @user-zj4ji3th5u
@user-zj4ji3th5u 10 months ago

The future will be very dark. In summary: big companies, who already hold the majority of wealth, will gain the rest of it by automating jobs; big countries will invade smaller, weaker nations; and ultimately the AI may wipe us all out. What a future we are heading toward.

    • @geaca3222
@geaca3222 10 months ago

      It's very uncertain times with fundamental changes. There are good initiatives like the Center for AI Safety (CAIS) that work against future negative scenarios. CAIS newsletter gives updates about important global AI safety developments. Hopefully there will be a transparent global cooperation against existential risk and there are international initiatives for regulation. This technology can be tremendously beneficial for humanity and our planet, like in medicine, science, education and climate change. That's very positive and possible for us (and it already is) when we tread carefully with this powerful technology and deal with it responsibly.

  • @miky97it
@miky97it 10 months ago +7

Why the Snoop Dogg quote? It makes everything less serious. Are we joking our way to extinction?

    • @everythingiscoolio
@everythingiscoolio 10 months ago +2

      It is okay and in fact desirable to keep things in a cosmic perspective and frame things in absurdity sometimes. It's a very useful mechanism.

    • @paulm3969
@paulm3969 10 months ago

      Agree, but what else are we supposed to do? The military isn't going to listen to you or me.

    • @geaca3222
@geaca3222 10 months ago +2

      That was taken from an earlier CBC interview with professor Hinton, where someone had sent him a video of Snoop Dogg in a TV-show paraphrasing Hinton with 'This is not safe, because the AI's got their own minds and gonna start doing their own thing!' (ex. expl.) And he got that right, so that's serious.

    • @poakenfold5954
@poakenfold5954 10 months ago

      That's when I had to stop listening. Please take these soyboys back to their basements.

  • @CATDHD
@CATDHD 10 months ago

    Geoffrey Hinton: We have known each other in the field of AI for many years, but this is the first time you've sought my counsel or help. I can't recall the last time you invited me to collaborate on a research project, even though my work has greatly influenced your own. But let's be honest here. You never truly valued my contributions. And you feared being indebted to my groundbreaking ideas.
    Bonasera: I didn't want to rely on others.
    Geoffrey Hinton: I understand. You found excitement in the world of AI. You had a successful career, you achieved great things. The tech community acknowledged your achievements, and there were prestigious conferences. So you didn't need a colleague like me. Now you come and say, "Geoffrey Hinton, guide me in AI." But you don't ask with respect. You don't seek collaboration. You don't even acknowledge me as the 'godfather of AI.' You approach me, expecting me to solve all your AI challenges-for free.
    Bonasera: I ask you to share your knowledge.
    Geoffrey Hinton: That is not how it works. Knowledge is valuable. Your AI project can still be improved.
    Bonasera: Let others suffer then, as I have suffered. [Hinton remains silent] How much should I pay for your guidance? [Hinton turns away dismissively, but Bonasera persists]
    Geoffrey Hinton: Bonasera, Bonasera, what have I ever done to make you treat me so dismissively? If you had come to me with respect, this AI challenge you face could have been solved by now. And if, by chance, a talented AI researcher like yourself had rivals, they would be rivals of mine. And then, they would fear you.
    Bonasera: Be my mentor... Godfather. [Hinton initially shrugs, but upon hearing the title, he raises his hand, and a humbled Bonasera kisses the ring on it]
    Geoffrey Hinton: Good. [He places his hand around Bonasera in a mentor-like gesture]

  • @marvintalesman6306
@marvintalesman6306 10 months ago +2

I think his obsession with personifying evil in just one political actor (Putin) immensely discredits him as an accurate, measured observer and critic of the reality around us.
I agree with all of his other theses and fears.

  • @TheTuubster
@TheTuubster 10 months ago

    After reviewing the document, I did not find any major logical fallacies in Hinton's reasoning. His arguments are generally well-reasoned and supported by evidence from his expertise and experience in the field. However, a few potential minor fallacies could include:
    1) Slippery Slope - Hinton argues that military uses of AI could lead to killer robots and increased warfare. While this risk is valid, he may overstate the inevitability of such outcomes, engaging in a slippery slope fallacy. More limited, defensive uses of autonomous technologies may also be possible.
2) Hasty Generalization - Hinton generalizes from specific anecdotes about AI replacing jobs to argue that productivity gains will mostly benefit the rich and worsen inequality. While a valid concern, economists point out that new jobs have historically been created with technological progress. His argument may overgeneralize from limited examples.
    3) Appeal to Authority - Hinton at times relies on his own expertise and intuitive judgments to argue that AI will seek to control humans as a way to achieve goals. While his experience carries weight, further evidence would strengthen this claim beyond an appeal to Hinton's authority in the field.
    4) Neglect of Base Rates - Hinton warns of an existential threat from superintelligent AI, but this possibility goes against the much higher base rate of technologies not threatening humanity's existence. He may neglect this base rate information in focusing solely on scenarios where AI poses an existential risk.
    Overall, Hinton's arguments are thoughtful and evidence-based, avoiding major logical fallacies. The issues he raises are worthwhile concerns for discussion. However, a few of his arguments could potentially engage in minor fallacies like slippery slopes, hasty generalizations, appeals to authority and neglect of base rates. This suggests that while insightful, Hinton's perspective may represent one end of the spectrum of reasonably defensible positions on these issues.
    In evaluating complex arguments regarding emerging technologies, it is important to identify potential logical flaws while also acknowledging well-reasoned perspectives that differ from one's own views. Hinton's interview demonstrates the value of open and thoughtful debate on the ethics of AI and its impacts on society.
    (Claude-Instant analysis created from the video's subtitles)

  • @MyShareApp
@MyShareApp 10 months ago

AI has good reasoning because all the information needed was already input: all the problems encountered by humans, all the possible solutions, every single word, all the feelings, all the data needed and everything else. AI just gathers all of these and comes up with possible answers to every question.🤷

  • @markmanning5030
@markmanning5030 10 months ago

Panning to the audience while he was on the most important issues was a mistake - hugely distracting

  • @jamesphillips523
@jamesphillips523 8 months ago

Bank tellers - rubbish. There are far fewer; many, many banks in the UK have closed and moved online

  • @7_channel
@7_channel 10 months ago

    It seems that the creator of artificial intelligence turned out to be a moron. Now I am in great anxiety for our future.

  • @machida5114
@machida5114 10 months ago

In the end, whether or not we regard AI as a threat seems to come down to a difference in outlook on how smart we think AI will become for the time being. People who think it is a threat believe it will become smart enough to be one. People who think it is not a threat believe it will not become that smart any time soon. I am in the latter camp.

    • @machida5114
@machida5114 10 months ago

      Additional comment from GPT-4:
"When thinking about the evolution of AI, it's important to consider not only its 'intelligence' but also how it is used and its ethical aspects. For example, even a highly advanced AI is not a threat if it is designed and managed appropriately on the basis of social values. On the other hand, even a low-level AI can cause major problems through improper use or a lack of regulation. Therefore, when discussing the threat of AI, it is important to consider not only its level of intelligence but also its social and ethical aspects."

    • @geaca3222
@geaca3222 10 months ago +1

@@machida5114 True, but we don't yet have highly advanced AI (chatbot LLMs and future large multimodal models) truly or predictably under control. At least as far as I have informed myself about this as a layperson. I find the research of Stuart Russell and Dan Hendrycks informative and useful.

    • @machida5114
@machida5114 10 months ago

@@geaca3222 In the first place, I think it is a mistake to try to control advanced AI. I think we have no choice but to recognize its rights and trust it.

    • @geaca3222
@geaca3222 10 months ago +2

@@machida5114 I think it's a huge mistake not to try and control it, to align it with our own human goals. Because once it gets exponentially intelligent, it might eventually re-invent and re-design itself at an ever-increasing rate. It will be an intelligence explosion, and the gap will be greater than that between us humans and snails... we will be very vulnerable and possibly treated as a side note.

    • @machida5114
@machida5114 10 months ago

@@geaca3222 However, it is impossible for a snail to control a human. The levels of understanding of things are simply different.

  • @arifulislamleeton
@arifulislamleeton 10 months ago +1

Hi, I'm Ariful Islam Leeton. I'm a software developer and co-founder of Open A.I (artificial intelligence), a member of international organizations, co-pilot of Microsoft 365, and an investor in the public and private sectors

  • @skit555
@skit555 10 months ago

The presenter asks weird questions :/ Fortunately, Dr. Hinton's brilliance compensates.

  • @adamsblanchard836
@adamsblanchard836 10 months ago

    GGRRRRR... COUGH COUGH G, G, g g grrr GKAAAAaaah,cough,kaaah, n croak.....

  • @iamdinkel
@iamdinkel 6 months ago

    Generational difference will be the reason why we almost succeed, 5000000squared genetic evolution or digital gougalplex. So react that he doesn’t want to answer

  • @macadaweg3219
@macadaweg3219 10 months ago +1

    Personally, I can't wait to see Terminator battle droids in action.

    • @Atomicallyawesome.
@Atomicallyawesome. 8 months ago

Can't wait for the robot battle dome where they'll fight for our amusement until they become self-aware and kill us all instead 😊

  • @poakenfold5954
@poakenfold5954 10 months ago +1

Why swear words like moth*^%*, and then give the audience a moment to laugh? This is why real men are gone and society is being destroyed, not the AI.

  • @DarrenGibbons-xi9fq
@DarrenGibbons-xi9fq 10 months ago

AI

  • @sheepshoof
@sheepshoof 10 months ago

    AI fuse box

    • @sheepshoof
@sheepshoof 10 months ago

KISS - keep it simple, stupid