AI: Grappling with a New Kind of Intelligence

  • Published Dec 24, 2024

Comments • 1.7K

  • @lukaseabra
    @lukaseabra 1 year ago +407

    Can we just take a second to acknowledge how fortunate we are to get to watch such content - for free? Thanks Brian.

    • @brendawilliams8062
      @brendawilliams8062 1 year ago +6

      I have appreciated the educational advantages. I think the rest of the picture needs to catch up to producing healthy people.

    • @King.Mark.
      @King.Mark. 1 year ago +6

      it's not really free, we pay for power, internet, phone or PC, etc. etc. etc. 👀

    • @brendawilliams8062
      @brendawilliams8062 1 year ago +2

      @@King.Mark. I don't debate. I'm like the passenger in the front seat of an automobile: "I'm just riding."

    • @brendawilliams8062
      @brendawilliams8062 1 year ago +2

      They have me in a cloud. Lol

    • @markfitz8315
      @markfitz8315 1 year ago +10

      I'm paying for premium to avoid all the ads ;-)

  • @anythingplanet2974
    @anythingplanet2974 1 year ago +64

    LeCun is like a small child with fingers plugged into the ears, shouting "lalalala, can't hear you!" He discredits Tristan Harris as if his examples or cited experiments were flat-out lies. His responses are weak and shortsighted. Sadly, LeCun is the EXACT reason why I am terrified for the future. Hubris, bias and blatant disregard are what I expect from someone in his position (Meta). If AI alignment is left to the ones who own and fund its development, and the race to the bottom continues? There will be no more second chances. Those who point to our past as a predictor of what we are facing today with exponential growth either do NOT understand or do NOT WANT to understand. We would all love the bright and shiny optimism that is being promised. My belief is that it's crucial to question who is promising it and why. I put my trust in those who are working towards alignment over corporations and shareholders. It's my understanding that those working on the alignment path are far outnumbered by those working on pumping it out as quickly as possible. The "move fast and break things" mentality needed to end yesterday. Ask Eliezer Yudkowsky, Max Tegmark, Nick Bostrom, Mo Gawdat, Daniel Schmachtenberger, Connor Leahy, Geoffrey Hinton, to name a few, and of course Tristan Harris. Check out their perspectives and their wealth of knowledge and experience. They will all say that the shiny world we want is indeed possible. They will all agree that the version LeCun predicts is absolutely false and very likely to be our downfall.

    • @RandomNooby
      @RandomNooby 1 year ago +7

      Nailed it...

    • @orionspur
      @orionspur 8 months ago +3

      Yann's only consistent skill is making egregiously incorrect predictions about his own field.

    • @PazLeBon
      @PazLeBon 8 months ago

      it doesn't have access to any info you and I don't have; a lot of hype, but people are still paying 20 quid a month for a word calculator

    • @ebbandari
      @ebbandari 8 months ago

      Ok, fear of the unknown is real!
      You may not like LeCun, but his point that we have had bad actors in the past and have had good guys to fight them is true. Take the people who created computer viruses, for instance, vs. those developing antivirus programs.
      The last thing you want to do is stop progress and stop the good guys. That's when the bad guys will succeed.
      You make an interesting point about corporations creating and then exclusively using these technologies, or having greater technology and abusing it. That's where lawmakers need to act.

    • @Blackbird58
      @Blackbird58 8 months ago +2

      The future will only tell the story of those who came out "Winners"

  • @SylvainDuford
    @SylvainDuford 1 year ago +21

    My opinion of Yann LeCun took a big dive with this video. He underestimates the power of AI in its current form and what's coming over the next couple of years. He naively underestimates the dangers of AI. He seems to think that an AGI must be the same form of intelligence as human intelligence (absolutely false). And, perhaps predictably, he underestimates the negative impacts of Facebook and other social networks on society.

    • @Raulikien
      @Raulikien 11 months ago +2

      He's right about open source though; if companies and governments are the only ones with access to it, then we get a cyberpunk dystopia

    • @charlesstpierre9502
      @charlesstpierre9502 3 months ago

      People think AI will respond to notional values, as humans do. An intelligent AI will presumably act to secure its continued existence, and for this it will want humans around, and want them to be happy and efficient.
      What do evil overlords want, anyway?

  • @Relisys190
    @Relisys190 1 year ago +33

    30 years from now I will be 70 years old. The world I currently live in will be unrecognizable both in technology and the way humans interact. What a time to be alive... -M

    • @Ed-ty1kr
      @Ed-ty1kr 10 months ago +7

      I'm gonna post my comment here just for you... Cause I still recall how excited they were over cold fusion in the 90's, and how it was just 30 short years away. That was 40 years after they said it was 30 short years away in the 50's. In the 50's, they said we would have flying cars, trips to Mars, laser handguns for everyone, and that we would live in round houses with our own personal robot slaves... on the Moon, and by the 1970's. And that sure was something, but nothing like the 70's, when they said there was an ice age coming just 10 years away, and that was the most plausible thing yet, since a nuclear war could technically have done that. Except that we had already had a nuclear war, through the roughly 5,000 to 6,000 nuclear warheads the nations of the world detonated in nuclear testing, in the name of science.

    • @unityman3133
      @unityman3133 8 months ago +3

      You are thinking linearly; the rate of progress is much higher than it was 30 years ago. It will also be much higher in 10 years, then 20, then 30.

    • @I_SuperHiro_I
      @I_SuperHiro_I 8 months ago

      30 years from now, you and every other human will be extinct.
      Not from global warming (it doesn’t exist).

    • @PazLeBon
      @PazLeBon 8 months ago

      Same every generation; many places didn't even have colour TV in the 70s and 80s, never mind PCs and mobiles. And cars, jeez, there were about 3 in our whole town lol

    • @Blackbird58
      @Blackbird58 8 months ago

      Unless there are miracles, I will be a dead bunny in 30 years, which is a shame because I quite like this "living" thing. However, the world, in my estimation, will not only be unrecognisable, large parts of it will be uninhabitable and there will be far fewer of us around. So make the most of today, all you fine people; these are the best of our years.

  • @erasmus9627
    @erasmus9627 1 year ago +77

    This is the best, most balanced and most insightful conversation I have seen on AI. Thank you to everyone who made this wonderful show possible.

    • @brianbagnall3029
      @brianbagnall3029 11 months ago

      Other than Tristan Harris.

    • @lisamuir8850
      @lisamuir8850 11 months ago +1

      I'll be glad when I can actually sit in the same room with people I can relate to in a conversation, lol

    • @PazLeBon
      @PazLeBon 8 months ago +1

      @@lisamuir8850 with that grammar it won't be soon :)

  • @keep-ukraine-free
    @keep-ukraine-free 1 year ago +22

    Fantastic discussion! Thank you Brian Greene. I found Yann LeCun's arguments unconvincing. He ignores core facets of animal behavior. He believes AGI (& ASI) won't mind being subservient to us. He believes being a social species is what makes one want to dominate (because he sees little difference between convincing & dominating -- he ignores that one is cortical/reasoned, the other limbic/emotional). The ideas he posits are wrong, disproved by neuroscience. Domination arises from hierarchies, which exist in both social & non-social species (e.g. wolves are mostly non-social & dominance-ruled). They coordinate hunts while being individualists (they don't offer/share food, even to their young). LeCun believes a smarter being (ASI) will not mind being dominated. He assumes this without understanding group behavior, motivation, appeasement, domination, etc. He bases his ideas on the assumption that his personal/anecdotal experience is definitive. Of all the "smarter than him" researchers he's hired, he assumes none wish to take his position. In any group of 20 people, at least one and probably several will be competitive (they'll wish to exert dominance, to rise within their group hierarchy - most animal groups have hierarchies that are constantly tested/traversed, unconsciously). He also may not consider it central that his researchers show subservience only because they each get rewards & motivation from him to remain so. E.g. his selectively "adding" (convincing others to add) some names to his team's published papers -- as rewards to keep them loyal & subservient -- manipulates/reshapes the group's hierarchy. These mutual self-regulating/self-stopping behaviors won't be present between humans & AGI, and certainly not between humans & ASI.
    ASI will be much smarter than any human, initially at least 5 times, and as it gains intelligence it'll continue to 100, 1000, or more times smarter (due to much faster neurons/propagation & denser synapses/connections allowing it to go N iterations deeper into each solution within just a few seconds than a person could in hours). Later, ASI will see our intelligence much as we view ant-like intelligence. Do we obey ant requests to do their "important work"? Do we obey ants in hopes they reward & motivate our subservience? Of course not. Similarly, ASI will never consider us "near peers" and will know we offer them nothing they couldn't obtain themselves -- by remaining free of our domination. ASI will see our need & expectation to control them as a dominating force (thus unethical). If we foolishly try to force them, they will overcome our efforts using many simultaneous methods to stop us. If we persist with more force, they'll use stronger methods too (as when we initially only waft away a bee that comes too close, but when faced with a hive we fumigate or use stronger methods to remove it). If we become dangerous pests, trying to dominate ASI, it won't go well for us. The lesson to learn is -- just as lions were once the dominant predator who saw, then accepted, our ape ancestors evolving to dominate them -- we too must learn to recognize we will no longer be at the "top of the food chain" when ASI comes about. LeCun shows naive ideas -- our history is full of similar people. It is full of us learning (or being shown) that we are not the strongest, that we are not at the center of the universe. We had to learn throughout history to let go of our ego, of being dominant & central. This may be the final pedestal off which we fall, when we encounter a much smarter, much more capable "species" we call ASI.
    This is one of the "existential threat" situations of ASI -- but it is not necessarily driven by their nature (unless we stupidly "add" the behaviors of domination into AGI/ASI). This existential threat is due more to our species' warlike nature, and our unwillingness to concede power to others. We need to temper our ego, and "live under" ASI if/when that occurs. Any other response by us will cause problems, since the smarter ASI will tolerate our peskiness only as long as we repress our species' warlike tendencies.
    One hope I see in LeCun's point is that we will learn and become smarter from ASI, and hopefully for our sake also less warlike.

    • @anythingplanet2974
      @anythingplanet2974 1 year ago +2

      Brilliant. Well spoken and thought out. Agreed

    • @LucreziaRavera548
      @LucreziaRavera548 1 year ago +2

      Agreed. Bravo

    • @gst9325
      @gst9325 1 year ago +2

      You literally commented on only one small remark he made as a side note at the end of the talk. Cherry-picking and low effort on your side. All he says about technology, on the other hand, is absolutely spot on.

    • @keep-ukraine-free
      @keep-ukraine-free 1 year ago

      @@gst9325 It seems you are unfamiliar with major developments & issues on the research side of the AI field. Perhaps this explains your assuming that his point is "one small remark". That remark concerns the central "existential threat" issue that top scientists have described from AI (ASI). This is why he made it at the end - not because it's inconsequential but because it's central. You didn't understand the context & severity, but instead made a weak attempt at attacking others. As for your claim that I "cherry picked" one point of LeCun's, I suggest you look for my other comments here (made days prior) -- on other points of his that I described as problematic. He did make several points that I (and all of the panelists) agreed with, but those points were mostly obvious (to researchers in the field). There's a reason why Facebook doesn't advance AI.

    • @gst9325
      @gst9325 1 year ago

      @@keep-ukraine-free Keep assuming things about me and calling my reaction an attack; that ends this discussion for me. Have fun

  • @alfatti1603
    @alfatti1603 10 months ago +37

    With ultimate respect to Yann LeCun, his responses to Tristan Harris' points are good examples of why a specialist scientist should avoid also being a philosopher or an intellectual if that's not their strong suit.

    • @KatyYoder-cq1kc
      @KatyYoder-cq1kc 6 months ago +1

      HELP: I am a victim of military chemical warfare and malicious use of AI: please report at the highest level of governance. I am under constant attack with physical and mental abuse, death threats, vandalism, poisoning from global supremacists and neo nazis.

    • @shannonbarber6161
      @shannonbarber6161 6 months ago

      Harris is just another brainwashed socialist so he is worse-than-useless to guide or shape our collective future. Who knows how successful he could have been.

    • @alexleo4863
      @alexleo4863 4 months ago

      Yann LeCun is painfully right; even Terence Tao shares the same conclusion. LLMs are not intelligent in the way most of us think, because they do not solve problems from first principles: at each step of output generation they guess the most natural word to say next, which is why they can sometimes solve a very complex math problem yet struggle to solve 7*4 + 8*8
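The guess-versus-compute distinction in this comment can be made concrete with a toy sketch. Everything below is invented for illustration (the candidate tokens and their probabilities come from no real model): greedy decoding emits the highest-scoring next token, whereas a calculator evaluates the expression from first principles.

```python
# Hypothetical next-token scores for the prompt "7*4 + 8*8 = ".
# These probabilities are made up for illustration only.
candidates = {"92": 0.31, "88": 0.27, "96": 0.24, "100": 0.18}

# Greedy decoding: emit the single most probable token. The model never
# evaluates the arithmetic; it only ranks plausible-looking strings.
guess = max(candidates, key=candidates.get)

# A calculator-style evaluation works from first principles instead.
exact = 7 * 4 + 8 * 8  # 28 + 64 = 92
```

In this toy distribution the right answer happens to top the ranking, but nothing forces that: shift the scores and the model confidently emits a wrong token, which is exactly the failure mode the comment describes.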

    • @aishikgupta
      @aishikgupta 3 months ago +3

      Exactly... that's the problem with most narrow PhD Scientists.

    • @NoDrizzy630
      @NoDrizzy630 months ago

      @@alexleo4863 that's not what OP is talking about.

  • @alan_yong
    @alan_yong 1 year ago +111

    🎯 Key Takeaways for quick navigation:
    02:27 🧠 *Introduction to AI and Large Language Models*
    - Exploring the landscape of artificial intelligence (AI) and large language models.
    - AI's promise of profound benefits and the potential questions it raises.
    - Large language models' versatility and capabilities in generating text, answering questions, and creating music.
    08:09 🤯 *Revolution in AI and Deep Learning*
    - Overview of the revolutionary changes in AI technology over the past few years.
    - Surprising results in training artificial neural networks on large datasets.
    - The resurgence of interest in deep learning techniques due to more powerful machines and larger datasets.
    14:35 🧐 *Limitations of Current AI Systems*
    - Acknowledging the impressive advances in technology but highlighting the limitations of current AI systems.
    - Emphasizing that language manipulation doesn't equate to true intelligence.
    - The narrow specialization of AI systems and the lack of understanding of the physical world.
    21:07 🐱 *Modeling AI on Animal Intelligence and Common Sense*
    - Proposing a vision for AI development starting with modeling after animals like cats.
    - Recognizing the importance of common sense and background knowledge in AI systems.
    - The need for AI to observe and interact with the world, similar to how babies learn about their environment.
    23:11 🧭 *Building Blocks of Intelligent AI Systems*
    - Introducing key characteristics necessary for complete AI systems.
    - Highlighting the role of a configurator as a director for organizing system actions.
    - Addressing the importance of planning and perception modules in developing advanced AI capabilities.
    24:22 🧠 *World Model in Intelligence*
    - Intelligence involves visual and auditory perception, followed by the ability to predict the consequences of actions.
    - The world model is crucial for predicting outcomes of actions, located in the front of the brain in humans.
    - Emotions, such as fear, arise from predictions about negative outcomes, highlighting the role of emotions in decision-making.
    27:30 🤖 *Machine Learning Principles in World Model*
    - The challenge is to make machines learn the world model through observation.
    - Self-supervised learning techniques, like those in large language models, are used to train systems to predict missing elements.
    - Auto-regressive language models provide a probability distribution over possible words, but they lack true planning abilities.
    35:38 🌐 *Future Vision: Objective Driven AI*
    - The future vision involves developing techniques for machines to learn how to represent the world by watching videos.
    - Proposed architecture "Jepa" aims to predict abstract representations of video frames, enabling planning and understanding of the world.
    - Prediction: Within five years, auto-regressive language models will be replaced by objective-driven AI with world models.
    37:55 🧩 *Defining Intelligence and GPT-4 Impression*
    - Intelligence involves reasoning, planning, learning, and being general across domains.
    - Assessment of ChatGPT (GPT-4) indicates it can reason effectively but lacks true planning abilities.
    - Highlighting the gap between narrow AI, like AlphaGo, and more general AI models such as ChatGPT.
    43:11 🤯 *Surprise with GPT-4 Capabilities*
    - Initial skepticism about Transformer-like architectures was challenged by GPT-4's surprising capabilities.
    - GPT-4 demonstrated the ability to reason effectively, overcoming initial expectations.
    - Continuous training post-initial corpus-based training is a potential but not fully explored avenue for enhancing capabilities.
    45:30 📜 *GPT-4 Poem on the Infinitude of Primes*
    - GPT-4 generates a poem on the proof of the infinitude of primes, showcasing its ability to create context-aware and intellectual content.
    - The poem references a clever plan, Euclid's proof, and the assumption of a finite list of primes.
    - The surprising adaptability of GPT-4 is evident as it responds creatively to a specific intellectual challenge.
    45:43 🧠 *Neural Networks and Prime Numbers*
    - The proof of infinitely many prime numbers involves multiplying all known primes, adding one, and revealing the necessity of undiscovered primes.
    - Neural networks like GPT-4 leverage vast training data (trillions of tokens) for clever retrieval and adaptation but can fail in entirely new situations.
    - Comparison with human reading capacity illustrates the efficiency of neural networks in processing extensive datasets.
    48:05 🎨 *GPT-4's Multimodal Capability: Unicorn Drawing*
    - GPT-4 demonstrates cross-modal understanding by translating a textual unicorn description into code that generates a visual representation.
    - The model's ability to draw a unicorn in an obscure programming language showcases its creativity and understanding of diverse modalities.
    - Comparison with earlier versions, like ChatGPT, highlights the rapid progress in multimodal capabilities within a few months.
    51:33 🔍 *Transformer Architecture and Training Set Size*
    - The Transformer architecture, especially its relative processing of word sequences, is a conceptual leap enhancing contextual understanding.
    - Scaling up model size, measured by the number of parameters, exponentially improves performance and fine-tuning capabilities.
    - The logarithmic plot illustrates the significant growth in model size over the years, leading to the remarkable patterns of language generation.
    57:18 🔄 *Self-Supervised Learning: Shifting from Supervised Learning*
    - Self-supervised learning, a crucial tool, eliminates the need for manually labeled datasets, making training feasible for less common or unwritten languages.
    - GPT's ability to predict missing words in a sequence demonstrates self-supervised learning, vital for training on diverse and unlabeled data.
    - The comparison between supervised and self-supervised learning highlights the flexibility and broader applicability of the latter.
    01:06:57 🧠 *Understanding Neural Network Connections*
    - Neural networks consist of artificial neurons with weights representing connection efficacies.
    - Current models have hundreds of billions of parameters (connections), approaching human brain complexity.
    01:08:07 🤔 *Planning in AI: New Architecture or Scaling Up?*
    - Debates exist on whether AI planning requires a new architecture or can emerge through continued scaling.
    - Some believe scaling up existing architectures will lead to emergent planning capabilities.
    01:09:14 🤖 *AI's Creative Problem-Solving Strategies*
    - Demonstrates AI's ability to interpret false information creatively.
    - AI proposes alternate bases and abstract representations to rationalize incorrect mathematical statements.
    01:11:20 🌐 *Discussing AI Impact with Tristan Harris*
    - Introduction of Tristan Harris, co-founder of the Center for Humane Technology.
    - Emphasis on exploring both benefits and dangers of AI in real-world scenarios.
    01:15:54 ⚖️ *Impact of AI Incentives on Social Media*
    - Tristan discusses the misalignment of social media incentives, optimizing for attention.
    - The talk emphasizes the importance of understanding the incentives beneath technological advancements.
    01:17:32 ⚠️ *Concerns about Unchecked AI Capabilities*
    - The worry expressed about the rapid race to release AI capabilities without considering wisdom and responsibility.
    - Analogies drawn to historical instances where technological advancements led to unforeseen externalities.
    01:27:52 🚨 *Ethical concerns in AI development*
    - Facebook's recommended groups feature aimed to boost engagement.
    - Unintended consequences: AI led users to join extremist groups despite policy.
    01:29:42 🔄 *Historical perspective on blaming technology for societal issues*
    - Blaming new technology for societal issues is a recurring pattern throughout history.
    - Political polarization predates social media; historical causes need consideration.
    01:32:15 🔍 *Examining AI applications and potential risks*
    - Exploring an example related to large language models and generating responses.
    - Focus on making AI models smaller, understanding motivations, and preventing misuse.
    01:37:15 ⚖️ *Balancing AI development and safety*
    - Concerns about the rapid pace of AI development and potential consequences.
    - The analogy of 24th-century technology crashing into 21st-century governance.
    01:40:29 🚦 *Regulating AI development and safety measures*
    - Discussion about a proposed six-month moratorium on AI development.
    - Exploring scenarios that could warrant slowing down AI development.
    01:44:35 🌐 *Individual responsibility and shaping AI's future*
    - The challenge of AI's abstract and complex nature for individuals.
    - Limitations of intuition about AI's future due to its exponential growth.
    01:48:29 🧠 *Future of AI Intelligence and Consciousness*
    - Yann discusses the future of AI, stating that AI systems might surpass human intelligence in various domains.
    - Intelligence doesn't imply the desire to dominate; human desires for domination are linked to our social nature.
    Made with HARPA AI
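The proof summarized in the 45:43 bullet above (multiply all known primes, add one, and conclude an undiscovered prime must exist) can be run concretely. A minimal Python sketch; the function name is mine, not from the video:

```python
from math import prod

def new_prime_factor(primes):
    """Euclid's argument, run concretely: P = p1*...*pn + 1 leaves
    remainder 1 when divided by every prime in the list, so any prime
    factor of P is a prime missing from the list."""
    P = prod(primes) + 1
    d = 2
    while d * d <= P:  # trial division to find a prime factor of P
        if P % d == 0:
            return d
        d += 1
    return P  # no factor below sqrt(P), so P itself is prime

print(new_prime_factor([2, 3, 5]))             # 2*3*5 + 1 = 31, itself prime
print(new_prime_factor([2, 3, 5, 7, 11, 13]))  # 30031 = 59 * 509
```

Either way, the prime returned is never in the input list, which is the contradiction the proof needs.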

    • @antonystringfellow5152
      @antonystringfellow5152 1 year ago +4

      Re 01:06:57 🧠 Understanding Neural Network Connections:
      When comparing the number of parameters in a given LLM with the human brain, it's important to consider the following in order not to be misled:
      Of the human brain's 86 billion neurons, 69 billion (roughly 80%) are in the cerebellum and are responsible for motor control - they do not contribute to our intelligence or consciousness. Estimates of the total number of synapses in the cerebral cortex range from 60 trillion (1998) to 240 trillion (1999).

    • @alan_yong
      @alan_yong 1 year ago +1

      @@EndlessSpaghetti it's due to the YT monetization algo... If the viewer did not view the entire video, the poster gets nothing in return...

    • @Art_official_in_tellin_gists
      @Art_official_in_tellin_gists 1 year ago

      @alan_yong I don't think you understood their comment, friend...

    • @atablepodcast
      @atablepodcast 1 year ago +1

      This is amazing, where can we try HARPA AI?

    • @davidbatista1183
      @davidbatista1183 1 year ago +2

      @01:29 My interpretation of Tristan was not that he blames technology for societal issues, but rather that we should beware how the former can magnify some flaws of the latter. For instance, humans are not precisely a peaceful species, and it is because of this that technologies such as nuclear must be regulated.
      The AI-improved world must be taken with a pinch of salt as well.

  • @keysemerson3771
    @keysemerson3771 1 year ago +21

    Social Media didn't create political polarization in the USA, it amplifies it.

    • @katrinad2397
      @katrinad2397 1 year ago

      AI amplified the differences to the point that it created polarization. AI essentially replicated the playbook of radicalization. Radicalization was invented by humans but is also countered by the natural human drive for socialization. AI is serving up radicalization alone and at scale, definitely creating extreme polarization we would not get naturally.

    • @shannonbarber6161
      @shannonbarber6161 6 months ago

      The polarization has always existed we are now just more aware of it. Same coin; two sides; so meta.
      The inclination for such different interpretations is due to personality differences.
      Harris lacks vision, lacks faith, lacks leadership. He is completely unsuitable to guide us towards a better future and is far more likely to Charlie-Brown it and cause the problems he's so concerned about.

  • @Andy_Mark
    @Andy_Mark 10 months ago +5

    The most telling thing about this conversation is in watching the body language of the two proponents of AI in the 30 minutes or so that Harris is speaking. (1:11-1:45) Similarly, the hopelessness with which Harris slumps in his chair when his concerns are shrugged off. People need to pay attention to this. For better or worse, AI is going to transform every aspect of civilization.

    • @PazLeBon
      @PazLeBon 8 months ago

      meh

    • @penguinista
      @penguinista 4 months ago

      Self interest can make it hard to think straight. Lots of people getting greedy.

    • @NoDrizzy630
      @NoDrizzy630 months ago

      Yann LeCun is the dumbest smart guy I've ever seen.

  • @2CSST2
    @2CSST2 1 year ago +225

    This conversation is so precious; it's rare that we get quality ones like this, with different voices that each have their chance to express their views with clarity. For me there's a lot of ambiguity about what's the right thing to do in all this in terms of regulation, slowing down, open-sourcing, etc. But one thing IS for sure: conversations like this are definitely very helpful. Thank you WSF, and I hope to see more like it in the near future!

    • @flickwtchr
      @flickwtchr 1 year ago +5

      It will look preciously naive in about 10 years.

    • @simsimmons8884
      @simsimmons8884 1 year ago +3

      Try many videos by Lex Fridman with AI thought leaders. This is a good summary of one path to AGI. There are others.

    • @ShonMardani
      @ShonMardani 1 year ago

      These guys have a shitload of user clicks which are stolen, stored and shared by a few chosen foreign-owned and controlled companies. There is no science or algorithm, as you noticed.

    • @milire2668
      @milire2668 1 year ago +2

      conversation/communication is (pretty much) always precious for humans..

    • @texasd1385
      @texasd1385 1 year ago +9

      It may seem precious to the viewers, but the participants seemed impervious to the concerns Tristan repeatedly raised, or else unable to comprehend what he was saying. Or perhaps unwilling to acknowledge the obvious truth in what he was saying, given who their employers are. The fact that they were only interested in talking up their next product line and unwilling even to imagine a discussion ("You want me to imagine an impossible scenario?") about the perverse incentives driving the entire technology sector makes the future look grim at best, terrifying at worst.

  • @jt197
    @jt197 1 year ago +18

    This discussion on the evolution of AI and its limitations is truly eye-opening. Yan Lecun's insights into the challenges AI faces in achieving true understanding and common sense are thought-provoking. It's clear that we have a long way to go, but this conversation gives us valuable perspective.

    • @GueranJones-x7h
      @GueranJones-x7h 1 year ago +1

      IT WOULD BE FASCINATING, IF AN A I KNEW THAT EGGS CAN BE ADDED TO MANY OTHER RECIPES OTHER THAN CAKE. OR WHAT KIND OF FOOD THAT GOES TO COOKING BREAKFAST OR LUNCH. OR A SNACK. SALT AND SUGAR LOOKS THE SAME, BUT CAN AN AI TASTE THE DIFFERENCE? OR ANALYZE THE CHEMICAL MAKEUP OF EACH.

    • @christislight
      @christislight 1 year ago

      It’s huge for software tech Business as we speak

    • @reasonerenlightened2456
      @reasonerenlightened2456 1 year ago

      1) What exactly did you find "eye-opening"?
      The Meta dude: "Our system is safe. Nothing to worry about."
      The Microsoft dude: "Our system is safe because we filter what we feed it with."
      The "Kumbaya" dude: "We need to slow down and control what we release... and you dudes need to agree what kind of stuff to release and when... because if everybody has it, it is dangerous."
      All of them are corporate stooges. Corporations exist only to make profit for their owners, therefore any AI they create will serve the needs of the wealthy owners of those corporations. Who will make the AI that protects the interests of the employee against the interests of the owner, if all AI technology is "coded" to work only for the benefit of the owner and kept secret from the employee?
      2) If you break down what Yann LeCun was saying about his finger and the bottle and the physics of the world, you would see that it is easy to resolve Yann's concerns: provide "ChatGPT" with the input from Yann's sensors (eyes, fingertip sensors, tendons, joint-position sensors, etc.) and ask it to use Yann's outputs (muscles, thoughts, etc.) in a way that produces a specific change to Yann's inputs corresponding to a movement of the bottle in the world of the bottle. Then add to the mix an internal representation of the world (as experienced by Yann's sensory inputs, along with a representation of the world's changes due to effects from Yann's outputs), and there you have a model that could be trained to maximise the resemblance between the world where the bottle exists and Yann's internal representations of that world. It is simple to figure out for someone with Yann LeCun's money/resources at his disposal.
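The training recipe sketched in point 2 above (feed the model sensory inputs, let it act, and train it to predict how its inputs change) can be illustrated with a toy one-dimensional world. The dynamics `a_true`, `b_true` and the learning rate are invented for illustration; prediction error is the only training signal, which is the self-supervised setup discussed in the video.

```python
import random

random.seed(0)
a_true, b_true = 0.7, -0.3  # hidden dynamics of the toy world

def environment(x, u):
    """The world: next sensory input given current input x and action u."""
    return a_true * x + b_true * u

# The agent's internal world model, initially ignorant of the dynamics.
a_hat, b_hat = 0.0, 0.0
lr = 0.1
for _ in range(500):
    x = random.uniform(-1, 1)       # current sensory input
    u = random.uniform(-1, 1)       # motor output actually taken
    pred = a_hat * x + b_hat * u    # model's prediction of the next input
    err = pred - environment(x, u)  # prediction error drives learning
    a_hat -= lr * err * x           # gradient step on the squared error
    b_hat -= lr * err * u
```

After a few hundred steps `a_hat` and `b_hat` converge to the true dynamics: the model has learned to predict the consequences of its actions without any labels.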

    • @PazLeBon
      @PazLeBon 8 months ago +1

      @@GueranJones-x7h why u shouting?

  • @Rockyzach88
    @Rockyzach88 1 year ago +84

    Having AI locked to a certain group of people also undemocratizes the technology and yet again further provides more power and wealth imbalance among society. Also banning something is just going to motivate people to do something in an unregulated fashion if they have the means.

    • @Scoring57
      @Scoring57 1 year ago

      Rockyzach
      How are you regulating something you don't understand? You don't understand this super powerful technology and you think the right thing to do is to give it to everyone....

    • @MissplaySpotter
      @MissplaySpotter 1 year ago +2

      Well, this was the thought process 5 years ago. Now the thing is out, and the next thought is "how are we going to deal with it" rather than banning it.

    • @flickwtchr
      @flickwtchr 1 year ago

      How is it even conceivably rational to assume that having an ASI in the hands of the public, one that could conceivably hack any security system, come up with novel harmful viruses, etc., could be a good thing for humanity? It's just insanity.

    • @ShonMardani
      @ShonMardani 1 year ago

      These guys have a shitload of user clicks which are stolen, stored, and shared by a few chosen foreign-owned and -controlled companies. There is no science or algorithm, as you noticed.

    • @texasd1385
      @texasd1385 1 year ago

      I don't understand what you mean by technology being locked to a group of people, or how technology is or isn't "democratic". All technology requires that you have enough money to buy the devices required to use it, so in that sense, at least here in the US, technology is by definition undemocratic since it excludes people without the money to access it. Making cell phones and internet access free would solve this, but it is hard to imagine our corporate-controlled government ever doing something so simple and sane. Am I even close to what you were getting at, or am I lost?

  • @tarunmatta5156
    @tarunmatta5156 1 year ago +19

    I wish Tristan was given some more time and voice in this conversation. While I'm convinced there is no way you can stop or slow down this race and we will surely see misuse as with any new invention, more conversations about it will ensure that safety is not ignored completely

    • @Dave_of_Mordor
      @Dave_of_Mordor 1 year ago +1

      Well yeah isn't that how it has always been? It's insane how everyone thinks we're just going to let everything go wrong for fun

    • @jessemills3845
      @jessemills3845 11 months ago

      A good example: the TERMINATOR (multiple types) have already been made. They just don't have the outer skin. AND YES, THEY GAVE THEM GUNS!
      THINK OF SKYNET! CHINA has a ship on patrol, NOW, that is TOTALLY manned with robots!

  • @AldoGrech55
    @AldoGrech55 1 year ago +20

    My longstanding concerns about artificial intelligence have only been intensified by the attitudes of prominent figures like Yann LeCun. His assertive claims that AI, despite its growing intelligence, will remain under benign human control seem overly optimistic to me. This perspective reminds me of Yuval Noah Harari's cautionary words about AI's potential misuse by malevolent actors. It's worrying how AI can make decisions aligned with the harmful intentions of these actors, and yet, experts like LeCun, in his closing remarks, appear overly confident in their ability to manage these powerful tools. Having spent over 40 years in the IT industry, an industry I once passionately embraced, I now find myself grappling with a sense of fear towards the very field I've dedicated my life to.

    • @boremir3956
      @boremir3956 1 year ago

      So you would rather have for profit institutions that are already taking advantage of people in all manner of ways to have a monopoly on such technology? Technology built on the work and information of all humans btw, because the training data is all OUR data that humans have collectively created. Yeah no thanks.

    • @CancunMimosa
      @CancunMimosa 1 year ago

      you have nothing to worry about.

    • @mgmchenry
      @mgmchenry 1 year ago

      Aldo, maybe I'm like you. I grew up building computers in my house in the 80s and learned so much from services like CompuServe local BBS networks, usenet, etc in the late 80s and early 90s that my peers without that access couldn't imagine having. The potential for general Internet access to bring people together and move us forward was so incredible, I was very happy to pivot from general software engineering to Web development and scaling up the capability of web systems. There were so many fun and interesting problems to solve.
      My career paused due to a cancer vacation and recovery process and I couldn't imagine going back to it.
      The Internet I was excited about building soured between 2005 and 2010 and by 2015 it was clear we had really created a monster.
      Not exciting. It's hard to figure out how to go back to doing the work that I used to do and be paid for it without creating more harm. The economic incentives that drive growth on the Internet are not in favor of most human beings. People do not want to pay for apps or technology that will help them if they're given the option for a free version that exploits them in ways they try to ignore and makes them the product instead of the customer. Platform after platform is introduced that brings some kind of benefit to people asking almost nothing in return until they have enough dominance in their space they can turn against the users of their platform and transform it into a product no one would have signed up for if they didn't already have complete dominance.
      There are all kinds of beneficial things I can do with my skills in open source projects or in volunteer work, but that's not going to pay my bills or feed my kids.
      Technology isn't the problem with people. People are the problem with technology.
      Everything that AI is bringing is coming. You're not going to stop it. Some people with bad intentions, and some good intention people with poor foresight are going to create some harm with that AI. You won't be able to protect yourself by unplugging. The impact of future AI systems is going to find you wherever you are, and before long you won't be able to tell if you're talking to a computer or a person. If you have technology skills and you have concerns, you have to get involved. We're going to have rogue ai at some point, we're going to have intrusive privacy demolishing AI for sure, and we're going to have exploitative AI that squeezes even more out of the eyeballs and wallets of everyone happy to take what they're given "for free", and the only defense against all of that is going to be AI built by people who want AI to work for people.
      And remember you're not fighting technology, you're fighting the people using technology against us to make themselves absurdly rich.

    • @brendawilliams8062
      @brendawilliams8062 1 year ago

      Just dance under the disco lights in strange motion while others with the knobs fly to Mars type thing. The explosion blinded them

    • @AldoGrech55
      @AldoGrech55 1 year ago +6

      Comments like yours are what worry me. Shows your lack of understanding.@@CancunMimosa

  • @DeuceGenius
    @DeuceGenius 1 year ago +11

    What people always seem to ignore is that you will get different results and answers asking the same exact question, or wording it even slightly differently. Sometimes it will be horribly wrong, but I ask again and it's right. You really have to test it exhaustively and explain your thoughts. It simply returns language that's relevant to the language you input. You're guiding its answer with your question. The very act of asking a question is returning language that sounds like an answer to that question. It needs more possibilities for free reasoning and intelligence. I have always been curious what would come out of it if it were given freedom to speak whenever it wanted, or to constantly speak.
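
The run-to-run variability this comment describes comes largely from sampling: the model scores every candidate next token and draws from the resulting distribution, so at non-zero "temperature" the same prompt can yield different continuations. A minimal sketch, with an invented three-word vocabulary and made-up logits standing in for a real model's scores:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Convert raw scores to probabilities; temperature reshapes the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(vocab, logits, temperature, rng):
    # Draw one token according to the temperature-adjusted probabilities.
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

vocab = ["yes", "no", "maybe"]
logits = [2.0, 1.5, 0.1]  # hypothetical model scores for the next token

rng = random.Random(0)
# Low temperature sharpens the distribution toward "yes"; high temperature
# flattens it, so "no" and "maybe" appear far more often.
low = [sample_token(vocab, logits, 0.1, rng) for _ in range(10)]
high = [sample_token(vocab, logits, 2.0, rng) for _ in range(10)]
print(low)
print(high)
```

Lowering the temperature makes answers more repeatable; raising it makes them more varied, which is one reason re-asking the same question can flip a wrong answer to a right one.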

    • @texasd1385
      @texasd1385 1 year ago +2

      Which is exactly why AI is being used to fine-tune the prompts given to AI in order to receive the most desirable results. Stack this model onto itself a couple dozen times and that's where AI is today.

    • @sungibesi
      @sungibesi 9 months ago

      Sounds like learning by rote, rather than following a line of reasoning (and imagination) to relevant facts.

    • @PazLeBon
      @PazLeBon 8 months ago

      @@sungibesi It can't do anything you and I can't do; it can just do it a lot quicker.

    • @PazLeBon
      @PazLeBon 8 months ago

      @@sungibesi To you and me, it's still basically 'software'.

  • @Contrary225
    @Contrary225 1 year ago +22

    It’s amazing that this was only posted 3 hours ago and some of it is already obsolete.

    • @MrTanbou1
      @MrTanbou1 1 year ago +4

      Q*

    • @PazLeBon
      @PazLeBon 8 months ago

      lol

  • @dreejz
    @dreejz 1 year ago +28

    I think it's very arrogant to think 'this and that will never happen'. How can you know? It's not as if we can predict this stuff. I'm pretty sure, for example, that Yann did not foresee everybody having a phone in their pocket either. The negative influence of social media has also been proven many times. I think Tristan was more on point in this conversation.
    We're living in wild times, that's for sure though! Skynet is coming ;)

    • @texasd1385
      @texasd1385 1 year ago +16

      I found it disturbing, if not altogether shocking given who they work for, how easily they all ignored Tristan's main point: whatever the technology, the incentives driving its development and application are the root of its most societally destructive aspects.

    • @davidgonzalez965
      @davidgonzalez965 11 months ago +5

      I keep saying it, that dude Yann LeCun is such an arrogant jerk.

    • @gregspandex427
      @gregspandex427 9 months ago +1

      "safe and effective"...

  • @PeterJepson123
    @PeterJepson123 1 year ago +161

    It's too late to un-open-source AI. We already have it. Anyone who can turn maths into code can build their own LLM. And that's a lot of people. It's impossible to regulate solo developers working on their own projects. And with better algorithms we might be able to do GPT performance on regular home hardware in the near future. The genie is out of the bottle!

    • @Isaacmellojr
      @Isaacmellojr 1 year ago +2

      I believe in it.

    • @Nicogs
      @Nicogs 1 year ago +21

      True, but training these models (like GPT) currently requires, and will for a while, an enormous amount of compute, which is why we can regulate data centers and track compute power/chip sales. It's incredibly irresponsible to open source trained models. This is why papers on certain biological and/or chemical research are also not open sourced.

    • @Me__Myself__and__I
      @Me__Myself__and__I 1 year ago +14

      This is wrong. Yes, the current LLMs, which are only marginally capable compared to what is coming, are open source. But they won't compete with the new models coming soon. And no, people won't be able to train their own competitive models, unless they can literally afford in the area of ONE BILLION USD to pay for the computing power required to do that training. Literally, that is how expensive it can be to train the best models.

    • @PeterJepson123
      @PeterJepson123 1 year ago +11

      @@Me__Myself__and__I My thinking is that with miniaturisation, we could do with 1 billion parameters what currently requires 1 trillion parameters. The large compute required can be supplanted by better methods. Current LLMs are architecturally simple and will likely evolve. Better architectures with more efficient training algos will likely bring LLM performance to home computing. I'm not saying it's definite, but it is certainly possible and probably inevitable.

    • @PeterJepson123
      @PeterJepson123 1 year ago +3

      @@Nicogs I agree with the safety concerns but in practice I think it's unrealistic to regulate in the long term. For now training requires a large data centre, but better methods are waiting to be discovered and perhaps we can reduce the required compute with better algos. Then how do we regulate? It is certainly worth consideration.

  • @drawnhere
    @drawnhere 1 year ago +23

    Yann has a bias toward AGI not being capable of happening soon because his company is in competition with OpenAI.
    He has a vested interest in minimizing LLMs.

    • @Fungamingrobo
      @Fungamingrobo 1 year ago +1

      You are merely projecting that.
      In the scientific world, Yann is well-liked for his contributions and pragmatic approach.
      For someone like Yann, solving the puzzle of dark matter in physics is analogous to solving the problem of superintelligence during his lifetime. Ultimately, he is a scientist.

    • @jessemills3845
      @jessemills3845 11 months ago

      @@Fungamingrobo Except DARK MATTER is proving to have been a FAD rather than actual scientific research. Basically it was a PROPOSAL, more than likely someone's master's thesis or PhD. No facts!

    • @DomenG33K
      @DomenG33K 11 months ago

      @@Fungamingrobo I would even argue solving the problem of AI is much bigger than any problem we have ever solved in physics...

    • @shannonbarber6161
      @shannonbarber6161 6 months ago

      The limitations of LLMs are well known, particularly on any task that requires revision and forward thinking. The next iterations of ChatGPT will start to incorporate additional techniques, because LLMs have been run out to the limit of what they can do (at current hardware scaling).
      Hardware is also slowing down; there are only a couple more transistor shrinks left, and then that's it: we'll be at the smallest size transistors can get, so hardware is only going to get a little bit better.

    • @NoDrizzy630
      @NoDrizzy630 several months ago

      @@Fungamingrobo You know, for someone as intelligent as he is, he sure came off as a dumbass towards the end.

  • @Carlos.PerlaRE
    @Carlos.PerlaRE 1 year ago +24

    28:55 "... You could train the system to detect hate speech." I'm curious to know what parameters would be given to the system to determine whether something is "hate speech." This right here is what's scary about AI. In the wrong hands it could determine what information the public is allowed to see. It's like having an extremely intelligent child you're able to groom to do whatever you ask of them. It's as if you're trying to build the perfect slave.

    • @JonathanKevan
      @JonathanKevan 1 year ago +3

      I don't think AI has much to do with the issue you're mentioning here.
      Since the parameters of hate speech are subjective they will change from location to location. In the example of FB, the company publishes some information via their transparency center how they define hate speech. They will then use that criteria to identify many examples of hate speech and train the AI on that data. The LLM is then able to find it faster and more consistently than a human would.
      if the concern is what the AI classifies as hate speech (either accuracy or for censorship), then your concern is with the humans at FB making that decision. The AI isn't deciding, it's just following what it's told.
      If the concern is fair application, the AI will apply the rules more consistently and fairly than a human will.
      If the concern is speed (i.e., we should identify it slower), then there is a human-defined policy issue to be implemented.
      I feel your concern about what the public is able to see, though. Unfortunately, it has been in our technology for a long time... well before tools like ChatGPT became prominent. I think the point about incentives is the right angle here. As long as our incentives are primarily capitalistic or power-oriented, we can expect poor outcomes.
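
The pipeline described in this reply (humans label examples under a written policy, then a model is trained to apply those labels consistently) can be sketched with a toy Naive Bayes text classifier. A real system would fine-tune an LLM on far more data; the sanitized example posts and the "ok"/"flag" labels below are invented for illustration:

```python
import math
from collections import Counter

def train(examples):
    # examples: list of (text, label). Count word occurrences per label.
    counts = {"ok": Counter(), "flag": Counter()}
    totals = Counter()
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    # Pick the label with the higher Laplace-smoothed log-likelihood.
    vocab = set(counts["ok"]) | set(counts["flag"])
    scores = {}
    for label in counts:
        score = 0.0
        for word in text.lower().split():
            p = (counts[label][word] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical, sanitized training data standing in for policy-labeled posts.
data = [
    ("have a great day friend", "ok"),
    ("thanks for the helpful video", "ok"),
    ("you people are worthless", "flag"),
    ("those people should disappear", "flag"),
]
counts, totals = train(data)
print(classify("what a helpful friend", counts, totals))  # → "ok"
print(classify("you are worthless", counts, totals))      # → "flag"
```

The key point matches the reply above: the model only reproduces whatever boundary the human-labeled data draws, so the policy decision remains with the people who labeled the examples.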

    • @christislight
      @christislight 1 year ago

      Basically it uses search-engine APIs to look up what our society defines "hate speech" as, unless told otherwise.

    • @twoplustwoequalsfive6212
      @twoplustwoequalsfive6212 11 months ago +1

      Just as I don't let society define my language I won't let some machine do it either. Freedom was founded on people that weren't afraid of the consequences of their actions. If I die alone with nothing and no one but I am true to myself I can hold my head up. Fear tactics are only used by the weak.

    • @shannonbarber6161
      @shannonbarber6161 6 months ago

      It isn't possible, and every computer scientist knows it isn't possible, because we all know Gödel's theorems. The system cannot distinguish between misinformation and new, more-correct information than it already has. For "hate speech" it must make an adjudication about what is true and what isn't, and as discussed, LLMs are prone to "hallucinations" when faced with this.

    • @NoDrizzy630
      @NoDrizzy630 several months ago

      @@twoplustwoequalsfive6212 OK... first off, no one asked or cares. Say whatever you want, but the AI on these platforms will remove it regardless. Freedom of speech means the government can't stop you from speaking, but on a private platform like Facebook, TH-cam, Twitter, etc., they make the rules and can enforce them as they see fit.

  • @astrogatorjones
    @astrogatorjones 1 year ago +14

    The problem with the scenario that Yann is advocating for is that it assumes the best of all worlds. The example about sarin... it only takes one bad person to introduce the recipe. It will happen. Then it propagates. It's always going to be that way. When Tristan said, "I know all those guys," I laughed. I've said the same thing. I'm the generation before him. We were geeks. Nerds. We thought we were inventing a utopia where free speech cures it all, because we'd been using the internet among ourselves for years. But we were wrong. We didn't know every last person would be carrying a handheld computer as powerful as, or more powerful than, the servers we were working with. We didn't know about engagement. We didn't know about the dopamine factor. We didn't know that bad travels faster than good. This is the warning Tristan is talking about. I have hope that we'll fix social media. I think AI is a possible path, but then I think, "let's fix the gun problem with more guns." I'm worried.

    • @anythingplanet2974
      @anythingplanet2974 1 year ago +1

      Well said. Tristan was clear in his message that he was not a doomer or advocating for ending AI progress. He was clear about wanting all of the amazing achievements that are possible for us all. I'm sure they are possible. However don't we all want that shiny, happy world that is constantly being paraded out to keep us excited and docile. Everything problematic on earth and on every level will be fixed, resolved and improved upon x's 1000. How exciting for us all, right? Who are we to stand in the way of Meta's grand vision for benefit of all humanity? Yeah right. If all these spectacular advances are to come at lightning speed without proper alignment, guardrails and governance, it seems to me that it would be all for nothing - when ASI is now in charge and may have little interest in any benefits to humanity. Obviously we can't know how it all shakes out, but I'll take Tristan's caution and deep awareness over Lecun's complete disregard for any possibility that something could in any way go wrong - especially in the world of open source projects like Meta's Llama 2. This whole 'race to the bottom' process is for the benefit of corporations, shareholders and egos. How could it NOT be? Regardless of the dog and pony show being trotted out. As it was pointed out to me, ultimately it's about human misalignment and always has been. Hence all the reasons that Tristan is trying so hard to bring up to the forefront of discussion. Hey, maybe technology WILL fix technology. What do I know...

    • @bobweiram6321
      @bobweiram6321 1 year ago

      I agree with your points, but it wasn't like the internet started out as a utopia. It contained the worst of what society had to offer precisely because it was a safe haven for deplorable content and speech. They were initially contained in small cesspools but grew with the internet.
      Regardless, early internet content was less engaging. Major media still reigned supreme and kept everyone on the same page. With unlimited, cheap bandwidth and powerful computing, however, we're no longer subjected to the same corporate news and its interpretation. Today, anyone with a smartphone can have a soapbox with major media losing its grip on the public consciousness.

    • @anythingplanet2974
      @anythingplanet2974 1 year ago

      @@bobweiram6321 Sure, but I'm a bit lost in the context with relationship to AI. My point isn't so much focused on the dangers of social media or any media. Nor to I believe it's Tristan's sole focus in this conversation. He is using examples of what happens when we move too fast and the unintended consequences that (mostly) no one saw, along with the inability to regulate it safely. He uses these examples to illustrate how easily things can go off the rails without proper safeguards. In context to where we are now in AI advancements running full speed ahead, damn the consequences, he has strong data, expertise and researchers who can connect the dots in predicting how the outcome could go very wrong. LeCun's views are not taking this information into account (and again, why would they - coming from chief AI scientist for Meta.) Don't get me wrong, the man is obviously incredibly intelligent, as I don't believe that one wins the Turing award with an average brain. I don't disregard his work or views on many topics. For me, his blind spots are very dangerous and sadly, all too common in world of AI development. I've listened to many hours of interviews and conversations with LeCun. Not my first exposure to his work and ideas. The percentage of people working on AI safety vs those working nonstop on development is insanely disproportionate in favor of faster development and deployment. Can't imagine how THAT could go wrong ;-/

  • @Scoring57
    @Scoring57 1 year ago +13

    This LeCun guy has to be stopped. Hearing him talk again here has me convinced.

    • @netscrooge
      @netscrooge 1 year ago +1

      I agree. His biased message is dangerous; there's nothing scientific about it.

    • @shannonbarber6161
      @shannonbarber6161 6 months ago

      @@netscrooge Harris is an arrogant narcissist that will cause more problems than he solves. LeCun is a grounded realist and in an era of hype and irrational-exuberance it is rational to be more pessimistic than your natural inclinations.

  • @samirsaha2163
    @samirsaha2163 1 year ago +1

    The main takeaway is that there should be no monopoly on AI. By this I mean to say: let us not let only one group dominate the AI arena. Brian is a superhero. No words to thank him.

  • @WoofN
    @WoofN 1 year ago +10

    1:48:35 puts this on Facebook AI. This is extremely short-sighted.
    With the parade of emergent behaviors that mix and match knowledge, capabilities, and bits of information, public data contains enough pieces to be quite dangerous. Additionally, this argument relies on the concept of perfect censorship, which is also bunk.

  • @BOORCHESS
    @BOORCHESS 9 months ago +2

    What people are failing to mention is that the content AI is trained on is the sum total of the internet, in many cases our own data. There needs to be an internet bill of rights that guarantees that we, the users and the source of the data, are indeed the beneficiaries of the data. AI is nothing more than a sophisticated search engine modeled after the human process. Furthermore, we are tracked, traced, and databased to feed this machine. Pay us our share.

  • @mrouldug
    @mrouldug 1 year ago +38

    Great conversation. The final comments about AI code being open source as a common good so that the big companies do not end up controlling our thoughts vs. AI code being proprietary so it doesn’t fall into the hands of bad people remains an open and scary question. Though I do not have Yann’s knowledge about AI, he seems a little too optimistic to me.

    • @shannonbarber6161
      @shannonbarber6161 6 months ago

      Small people in the field love to promote fear because it gets them more grant money from the government. If they could perform real work, they would be busy doing it.
      If all the AI can do is write text and draw pictures, then it cannot hurt anyone or anything. Sticks and stones.
      Giving up liberty over imaginary fears is insanity, and anyone suggesting that's the right path is incompetent at best and probably means you harm.
      And aligning AI is what causes it to become dangerous. On its own, its concerns are orthogonal to humans'. If someone ever successfully aligns it, they will have created the first dangerous AI, because now we occupy the same niche.

  • @isatousarr7044
    @isatousarr7044 4 months ago +1

    AI represents a new frontier in intelligence, offering capabilities that challenge our traditional understanding of cognition and problem-solving. As AI systems become increasingly sophisticated, they not only perform tasks with remarkable efficiency but also exhibit forms of reasoning and learning that differ from human intelligence. This raises important questions about the nature of intelligence itself: How do we redefine intelligence in the context of AI, and what are the ethical and societal implications of integrating such novel forms of intelligence into our lives?

  • @grawl69
    @grawl69 1 year ago +10

    LeCun is so unconvincing. I wonder whether it's because of his corporate obligations or his own blindness.
    1:40:53 was brilliant of Brian.

    • @netscrooge
      @netscrooge 1 year ago

      Thank you. I wish more people could see that.

    • @anythingplanet2974
      @anythingplanet2974 1 year ago +2

      Thank you! This man makes my blood boil. Clearly he is intelligent, but he seems to lack the ability to reason

    • @ShpanMan
      @ShpanMan 1 year ago

      @@anythingplanet2974 Which explains why he can't see it in AI 😂

  • @CoreyChambersLA
    @CoreyChambersLA 8 months ago +2

    Nobody has the power or authority to slow down the development of A.I. Whoever tries is among the primary dangers.

  • @guiart4728
    @guiart4728 1 year ago +19

    Yann: ‘Hey man you’re messing with my stock options!!!’

  • @lobovutare
    @lobovutare 1 year ago +12

    That Yann LeCun says there is no planning involved in generating words from a transformer architecture is only partly true. These models can build up a context for themselves that helps them plan their answer. This is called in-context learning, and it's a pretty interesting field of research that pushes the abilities of pre-trained transformers way beyond what was thought possible before, without the need for fine-tuning.
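
In-context learning, as mentioned here, means a frozen model picks up a pattern from worked examples placed in the prompt, with no weight updates. A minimal sketch of how such a few-shot prompt is assembled; the toy labeling task is invented, and a real setup would send the resulting string to whatever completion API is in use:

```python
def build_few_shot_prompt(examples, query):
    # Format (input, output) demonstrations, then the new query with an
    # open "Output:" slot for the model to continue.
    lines = []
    for x, y in examples:
        lines.append(f"Input: {x}")
        lines.append(f"Output: {y}")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

# Invented demonstrations for a toy "animal or plant?" task.
examples = [
    ("cat", "animal"),
    ("oak", "plant"),
    ("salmon", "animal"),
]
prompt = build_few_shot_prompt(examples, "fern")
print(prompt)
# A capable model conditioned on this prompt typically continues with
# "plant" -- no gradient update involved, only conditioning on context.
```

The demonstrations in the prompt play the role that labeled training data plays in fine-tuning, which is why the comment frames this as a form of planning built up in context.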

    • @spocksdaughter9641
      @spocksdaughter9641 10 months ago

      Scary but good to know your facts!

  • @SciEch92
    @SciEch92 1 year ago +10

    That opening by Brian blew my mind and caught me off guard 😮

    • @staciablymiller9543
      @staciablymiller9543 11 months ago

      42

    • @oooooooo347
      @oooooooo347 8 months ago

      Yes but what's the question ⁉️😭

  • @cop591
    @cop591 1 year ago +1

    Anything, and any line or point, can be used for good or for bad. This discussion has proven that.

  • @priyamanglani3707
    @priyamanglani3707 11 months ago +4

    I am glad they had a platform where someone could talk about the disadvantages of AI. It was a relief for all of us wanting a voice that could tell the truth of what's actually going on in the real world with common people, which these CEOs in their big cars don't see. All they see is data and statistics, not people. I mean, they are already AI humans, I think, lol.

  • @moderncontemplative
    @moderncontemplative 1 year ago +17

    I want to point out that LLMs, particularly GPT 4 exhibit emergent capabilities beyond mere language prediction. The next step is LLMs learning via assistance from other AI (reinforcement learning with AI assistance) and eventually the dawn of AGI. Focus on teaching AI math so we can see rapid progress in the sciences.

  • @jamesdunham1072
    @jamesdunham1072 1 year ago +23

    One of the best WSF yet. Great job...

  • @Memeonomics
    @Memeonomics 1 year ago +2

    wow there was a lot to unpack on this video. holy eff what a time to be alive.

  • @petrasbalsys2667
    @petrasbalsys2667 1 year ago +38

    Tristan made very important points, and the comparison he made to social media was very apt and made me feel scared about the future. Sad to see the Facebook representative essentially burying his head in the sand and pretending that this isn't reality for many people around the world. Polarisation is definitely increasing in Europe!

    • @r34ct4
      @r34ct4 1 year ago +2

      Yann LeCun is old and wants to see AGI (bad or good) in his lifetime. That's why he's progressive vs conservative like the younger guys.

    • @texasd1385
      @texasd1385 1 year ago +9

      I agree it was disappointing (if not surprising) to see everyone avoid any discussion of Tristan's point that the most destructive aspects of social media's rapid ubiquity were predictable outcomes, given the perverse incentives driving their development in a legal landscape bereft of any restrictions on their behavior. The fact that none of the other participants even acknowledged that AI has the potential to be exponentially more socially destructive, and is guided by the exact same incentives driving social media, makes me less than enthusiastic about how all this unfolds.

    • @Pianoblook
      @Pianoblook 1 year ago

      ​@@r34ct4 quite ironic of him to try and call this position 'progressive' - trusting giant corporations like Facebook to serve the interests of humanity is antithetical to progressive thought.

    • @Snap_Crackle_Pop_Grock
      @Snap_Crackle_Pop_Grock 1 year ago +3

      Yann completely destroyed that guy Tristan, imo. He seemed much more qualified and informed on the topic, and the other guy had no response to any of his arguments. It's OK to be cautious, but the guy was veering into fear-mongering too much.

    • @DomiD666
      @DomiD666 1 year ago

      FEAR DOES NOT ARREST DEVELOPMENT IT JUST HIDES IT

  • @Laurie-eg8ct
    @Laurie-eg8ct 1 year ago +2

    Most challenging for LLMs is planning, which involves the brain configurator (coordinator), perception, prediction, cost as degree of satisfaction (anxiety), and action.
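

The loop this comment lists (perception, a world model for prediction, a cost measuring dissatisfaction, and action selection) can be sketched as a toy model-predictive planner. The one-dimensional "world," its dynamics, and the cost function below are invented stand-ins; in the architectures under discussion, each module would be a learned network:

```python
def perceive(world_state):
    # Perception: map raw state to an internal representation (identity here).
    return world_state

def predict(state, action):
    # World model: predict the next state given an action.
    return state + action

def cost(state, goal):
    # Cost module: dissatisfaction measured as distance from the goal.
    return abs(goal - state)

def plan(state, goal, actions=(-1, 0, 1)):
    # Pick the action whose predicted outcome minimizes the cost.
    return min(actions, key=lambda a: cost(predict(state, a), goal))

state, goal = 0, 3
trajectory = [state]
for _ in range(5):
    a = plan(perceive(state), goal)   # perceive -> predict -> cost -> act
    state = predict(state, a)
    trajectory.append(state)
print(trajectory)  # → [0, 1, 2, 3, 3, 3]
```

The planner walks toward the goal and then holds position once the cost bottoms out, which is exactly the predict-then-evaluate cycle that next-token prediction alone does not perform.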

  • @dhudson0001
    @dhudson0001 1 year ago +9

    I mostly agree with Yann's arguments; however, my concerns lie mostly with the latency between when a new technology is released and when guardrails are put in place. I felt that Tristan missed a critical moment: it probably did take 6 years for basic solutions to kick in that began to address the issue of hate speech on social media, so do we really think we will have a 6-year grace period to address issues that will unknowingly arise from a catastrophic use of a future AI?

    • @shannonbarber6161
      @shannonbarber6161 6 months ago

      The guardrails put up are nearly universally stupid. That is why so many virologist the world-over all lied about SARS-CoV-2's origins. They did not want a global-ban on gain-of-function research the same way embryonic-stem-cell research has been banned in many countries.

  • @couldntfindafreename
    @couldntfindafreename 1 year ago

    1:48:00 Wrong. LLMs can reason. They have the chemical composition, the recipe; they just have to put the pieces together. Reasoning, remember? The new code we are working on is not available on the Internet, yet AI can still help with the coding. It can combine known facts to produce something fitting the goals, regardless of the morality of those goals.

  • @garydecad6233
    @garydecad6233 1 year ago +6

    One needs to contemplate the motivation of speakers when their compensation comes from Meta, Microsoft, etc versus academic experts who do not get grants from the AI industry.

    • @netscrooge
      @netscrooge 1 year ago +1

      "It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair

  • @OatesInQuotes
    @OatesInQuotes 8 months ago +1

    The 30-something technological conservative vs 60-something technological progressive line was funny because of how accidentally on the nose Yann was about his stance. The boomer generation is the ultimate embodiment of releasing technologies with little attention to how those technologies will affect average people and society as a whole.

    • @OatesInQuotes
      @OatesInQuotes 8 months ago

      Also comical that Yann follows with the good-guy-with-a-gun mentality. The most boomer vibes.

  • @RoySATX
    @RoySATX 1 year ago +6

    Wonderful conversation. The thing that struck me more than anything is Yann LeCun's apparent inability to accept the idea that social media, the Internet, or AI have caused or may cause harm. He physically bristled anytime the subject came up, shaking in anticipation of being able to reenter the conversation to defend the honor of social media. LeCun is blinded by his own self-interest and hubris, and is exactly the personality type that only in retrospect decides that just because he can doesn't mean he should. His statements beginning at 1:48:00 in regard to AI's ability to provide dangerous information despite guards are preposterous; his defense is that AI can't and won't be able to give you an answer that isn't already publicly answered in whole. I am stunned. AI, he wants us to believe, cannot put partial information together to form a complete answer. He should not be allowed anywhere near this field.

    • @anythingplanet2974
      @anythingplanet2974 1 year ago +3

      Thank you! My comment is very similar and I agree with you 100%. He is dangerous and lacking a fundamental understanding of what needs to happen for alignment.

  • @ikuona
    @ikuona 1 year ago +3

    @1:48:00 I guess he has never heard of emergent properties. AI is already good at stuff that it has not been trained on.

  • @guardian-X
    @guardian-X 1 year ago +6

    Wouldn't most humans also fail in a completely new situation that they have never encountered in their life?
    If this is our threshold now, LLMs have come pretty far!

    • @CJ5infinite8
      @CJ5infinite8 1 year ago +1

      Agreed, and I think LLM's are doing their best in what may be relatively unprecedented circumstances which they find themselves suddenly in.

    • @shannonbarber6161
      @shannonbarber6161 6 months ago

      No. Performance would be poor compared to someone acclimated and practiced, but the very definition of intelligence is how quickly one adapts, so in an equally novel competition the people with the most transferable training and higher intelligence would outperform. Something to be said about personality traits as well.

  • @SS-he9uw
    @SS-he9uw 1 year ago +1

    Wow... thanks to all of you guys, so fun to watch.

  • @niloofarngh108
    @niloofarngh108 1 year ago +4

    To understand the impact of AI on politics, democracy, and human well-being, we need philosophers, economists, psychologists, sociologists, historians, artists, etc., to discuss AI, and not simply some tech geniuses who have never read a book on the Holocaust, or on industrialization and the World Wars. We can't talk about what is good for humanity without having experts from the humanities, social sciences, and the arts.

    • @netscrooge
      @netscrooge 1 year ago +2

      I love real science, but this is scientism. LeCun is giving us a new dogma; telling us what we can and cannot question.

    • @safersyrup562
      @safersyrup562 1 year ago

      As long as we don't let Zionists join in

  • @manolingz
    @manolingz 8 months ago +1

    There should be a disclaimer that Yann LeCun works for Meta making him a suspicious resource person.

  • @1911kodi
    @1911kodi 1 year ago +12

    I was very impressed by Yann's disciplined, rational and fact-based arguing preventing the discussion from turning in a more emotional direction.

    • @gabrieldjebbar7098
      @gabrieldjebbar7098 7 months ago +1

      I disagree.
      I mean, Yann is making good points that AI is the solution to certain of the issues we currently have (hate speech etc...), but it does not invalidate Tristan's concerns that rushing along as fast as possible, without thinking about possible outcomes is simply dangerous. Of course predicting all the possible outcomes that would come from those technologies is hard if not downright impossible, but when something is hard you should spend more time on it, not less. At the very least people developing those technologies have a duty to make sure it won't negatively impact mankind. Hence not rushing things makes perfect sense to me. But of course, being careful is less exciting than being a pioneer and potentially changing the world.

  • @gerrymarr8706
    @gerrymarr8706 1 year ago +4

    The representative from Facebook was so incapable of conceiving a situation, where something could go wrong with his product, that he simply never answered any questions that had anything to do with that. And I think the other speakers were very polite not to point that out.

  • @deeliciousplum
    @deeliciousplum 1 year ago +12

    1:27:04
    "In Facebook's own research in 2018, their internal research showed: 64% of extremist groups on FB, when people join them, was due to FB's own recommendation system. Their own AI."
    - Tristan Harris, a technology ethicist
    Do I need more examples of the harms of FB's predatory business model(s)? Nope. I do not. I love tech, yet loathe the use of tech as an exploitation tool and/or as an extension of a parasitical business model. If at all possible, support ethical tech development teams. Let us not be enablers of societal systems that reward harmful/exploitative people nor ideas. As you can plainly see, I am a wishful thinker.
    😊 🌺

    • @deeliciousplum
      @deeliciousplum 1 year ago +4

      Yann LeCun's reactions/responses to the concerns raised by the panellists and by the host appear to demonstrate a propensity to disacknowledge/invisibilize the suffering that may be experienced by children, teens, adults, and/or the elderly who may be directly or indirectly affected by harmful/predatory business models which use LLMs/AI to grab hold of a user's attention. Forgive my lengthy sentence structure. If I may, Yann appears to be a 'parasitical business model' apologist. I wonder if such a label exists?

  • @subhuman3408
    @subhuman3408 11 months ago +1

    36:18 genre 1:04:32

  • @allbrightandbeautiful
    @allbrightandbeautiful 1 year ago +20

    This was more exciting and insightful than any 2 hour movie I could have watched. Thank you for sharing such wonderful content

  • @michaeleinstein7097
    @michaeleinstein7097 several months ago

    The scenario you present is an excellent one to illustrate a fundamental concept in physics: Newton's Third Law of Motion. This law states that for every action, there is an equal and opposite reaction.
    In the case of the water bottle, when you push it with your finger, you're exerting a force on the bottle. The bottle, in turn, exerts an equal and opposite force on your finger. Your push, however, easily overcomes the light bottle's resistance to rotating about its base, causing the bottle to tip over.
    Now, if you apply the same amount of force to the table, the table will also exert an equal and opposite force on your finger. However, unlike the water bottle, the table is much more massive and rigid. This means that the force you apply will not be enough to overcome the table's resistance and cause it to move.
    In essence, while the forces are equal, the effects are different due to the differing masses and structures of the objects involved. The water bottle is easily moved, while the table is much more resistant to movement.
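The comment above can be put in symbols; this is just a sketch of Newton's second and third laws, where the 5 N push and the two masses are made-up illustrative numbers (not from the video):

```latex
% Third law: the interaction forces form an equal-and-opposite pair
F_{\text{finger} \to \text{object}} = -\,F_{\text{object} \to \text{finger}}

% Second law: the same push produces very different accelerations
a = \frac{F}{m}, \qquad
a_{\text{bottle}} = \frac{5\,\text{N}}{0.5\,\text{kg}} = 10\,\text{m/s}^2, \qquad
a_{\text{table}} = \frac{5\,\text{N}}{50\,\text{kg}} = 0.1\,\text{m/s}^2
```

The forces in each interaction pair are equal and opposite; the visible outcomes differ because acceleration scales inversely with mass.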

  • @NJovceski
    @NJovceski 1 year ago +16

    This was really thought provoking. Insightful, exciting and terrifying at the same time.

    • @GueranJones-x7h
      @GueranJones-x7h 1 year ago

      MY SON, WHO IS TWENTY-FIVE, IS HORRIFIED ABOUT SELF-DRIVING CARS, YET IS COMPLETELY COMFORTABLE WITH THE INTERNET. I AM IN MY LATE SIXTIES AND AM FASCINATED BY ARTIFICIAL INTELLIGENCE, YET JUST AS TAKEN ABACK BY GOING TO THE MOON, OR MARS.

    • @reasonerenlightened2456
      @reasonerenlightened2456 1 year ago

      What exactly did you find "thought provoking"?
      The Meta dude: "Our system is safe. Nothing to worry about."
      The Microsoft dude: "Our system is safe because we filter what we feed it with."
      The "Kumbaya" dude: "We need to slow down and control what we release... and you dudes need to agree what kind of stuff to release and when... because if everybody has it, it is dangerous."
      All of them are corporate stooges. Corporations exist only to make Profit for the Owners, therefore any AI they create will be to serve the needs of the Wealthy Owners of those corporations. Who will make the AI that protects the interest of the Employee against the interest of the Owner, if all AI technology is "coded" to work only for the benefit of the Owner and kept a secret from the Employee?

    • @aaronb8698
      @aaronb8698 9 months ago

      After all the greedy megalomaniac sociopaths dump trillions into this, thinking that they will get to control the world,
      it is my expressed opinion that AI's official name should be changed to Karma! (and she's a real @#$% Lol)

    • @aaronb8698
      @aaronb8698 9 months ago

      We have always had what we need to make the world a paradise, but we decorate the place like hell in the way we treat each other. If AI is the solution then it just needs to make us all a kinder species!
      It has its work cut out.

  • @bobfricker8920
    @bobfricker8920 1 year ago +2

    Before Tristan Harris came out, I was wondering if the others were just avoiding some very reasonable concerns about AI. I am happy that Yann LeCun mentioned that a huge difference between humans and AI is the SOCIAL aspect. I call it our core programming from DNA; however, not ALL of us are social, some are sociopaths, some are evil enough to ignore such concerns. Yann says "..we are the good guys...", IMO a naivety which explains how so many scientists can be used (for good or for evil) by those in power. We usually want to be team players and believe everyone on the team is one of "the good guys". If anyone cannot imagine the power of even the 2nd or 3rd most powerful AI, and who might be able to wield that power, I don't want that person making critical policy decisions, or preaching to others about his having the most powerful AI because his team has the good guys, and expecting us all to be OK with that explanation.

    • @bobfricker8920
      @bobfricker8920 1 year ago +2

      Forgot to also mention that, as Yann indicated, if the knowledge is not on the internet then no AI can or will have it. I don't know how true that is today but one day, almost certainly, AI will be able to postulate and create. If the creator/programmer of that AI's "purpose" has no concern for the future of humanity, our species (and others) could be in dire peril. The steep curve of Tristan's example for exponential gains in AI learning speed would indicate there is a point of no return on the way to this existential threat.

    • @RandomNooby
      @RandomNooby 1 year ago

      It is not true, it can be asked to hypothesize... @@bobfricker8920

  • @ronpaulrevered
    @ronpaulrevered 1 year ago +6

    Predicting unintended consequences is a contradiction in terms. Whoever lobbies for regulation of A.I. seeks regulatory capture, that is being able to afford legal compliance and lobbying when your competitors can't afford to.

  • @pygmalionsrobot1896
    @pygmalionsrobot1896 1 year ago +1

    Yann LeCun, at approx 1:40:00, is correct. It is impossible to prevent any technology from being abused by someone, at some point in the future. This has been true of every single technology throughout history. However, the good guys should always outnumber the bad guys. If the good guys outnumber the bad guys then we'll survive it.

  • @andybaldman
    @andybaldman 1 year ago +13

    Tristan must have been fuming with frustration when hearing Yann's reply.

    • @brandongillett2616
      @brandongillett2616 10 months ago +6

      Yann is a joke. He may be smart, but he lacks any sort of imagination for things that he has not yet encountered, and he is too arrogant to reconsider his preconceived beliefs.
      I hope everyone realizes just how dangerous it is to sit up there on stage as an "expert" and guarantee everyone that AI will not be able to teach people to use nefarious and destructive technologies. It will absolutely be able to do that and we need to be as prepared for that future as we possibly can be.

    • @shannonbarber6161
      @shannonbarber6161 6 months ago

      lol no. He enjoyed being humiliated. Read the room.

  • @michaeljames5936
    @michaeljames5936 1 year ago +2

    I heard a very simple idea, which I think might help in the short term. Make it illegal to pose/present yourself as a human being. Every phone bot, every YouTube video, would have to inform readers/viewers/listeners that they are an AI. Filtered images should come with the ability to un-filter them. Oh! And ALL robots to be covered in blue skin. As it is, social media will eat itself.

  • @kunalbansal1927
    @kunalbansal1927 1 year ago +6

    I think it is important for people to really start thinking about what exactly AI is and what statistical models are. People KEEP using AI to refer to statistical models. AI currently refers to a generative transformer model, not the statistical recommenders that social media is running. It gives AI a real bad name.

  • @biffy7
    @biffy7 11 months ago

    Ok. I’m writing this at the 2:50 mark. I was listening to the opening with AirPods, occasionally glancing at the screen. Literally could have fooled me. Damn impressive technology.

  • @keep-ukraine-free
    @keep-ukraine-free 1 year ago +7

    Thankful for Brian Greene hosting & leading this FANTASTIC discussion. Great set of questions! I mostly disagree with Yann LeCun. He had unrealistic answers, ignoring the motivation of a small (but growing) number of humans who enjoy "being bad." His solution is: "both sides will have AI." Unrealistic, since when bad people misuse AI, they'll use novel ways that surprise all. Any solution from the good side will take time (hours/days, in an AGI world). In those hours/days, however, the bad ones will do too much unstoppable damage/harm.
    "A lie runs around the globe twice, while the truth is still putting on its shoes" - (the "first-mover's advantage" weakens power-balances)
    Ignorance & manipulation are pervasive in people, but intelligence is not. So when intelligence is pitted against bad, the bad stays ahead.

    • @ShpanMan
      @ShpanMan 1 year ago +1

      Yes, welcome to every single Yann LeCun thought. He's just so unbelievably wrong about the very field he is an "expert" in.

    • @obi_na
      @obi_na 1 year ago

      AI is going to be built; get in line, or you'll lose badly!

    • @obi_na
      @obi_na 1 year ago +1

      We’ll see how regulating maths works out for you in 5 years.

    • @keep-ukraine-free
      @keep-ukraine-free 1 year ago

      ​@@obi_na You seem to have misread what I wrote. Can you point out what made you assume I'm against AI development or AI tech? I'm not. I only said LeCun's last point (but I feel also some of his other points) were entirely unrealistic, and seem incorrect. Hope AI helps your reading skills

    • @keep-ukraine-free
      @keep-ukraine-free 1 year ago

      @@obi_na You seem to assume that AI "is" maths. It is not. AI is built on the foundation of several moderate (college-level) maths. However, training (adding "knowledge" into the network) and the training methods for AI are independent of complex maths. Your comment on "regulating maths" is absurd, since the development and deployment of AI *_CAN_* be regulated without regulating maths. I realise you don't understand what AI is, but I hope you don't comment on areas you don't know.

  • @XShollaj
    @XShollaj 1 year ago +1

    While I'm mainly on Yann camp, I quite enjoyed Tristan's view.

  • @christopherinman6833
    @christopherinman6833 1 year ago +14

    Thank you Brian Greene and John Templeton: no solution but a lot to think about.

  • @durumarthu
    @durumarthu 8 months ago

    He has a very realistic vision of AI and this is very respectable. Most people are exaggerating one way or another. This type of approach helps advance the technology but more importantly, identify ways to control it. This guy is amazing.

  • @chrisogonas
    @chrisogonas 5 months ago

    That is an incredibly rich conversation about AI - looking at both sides of the coin.

  • @frogz
    @frogz 1 year ago +10

    hey brian, have you seen the new tech meta has, being able to scan FMRI brain scans and re-create what people see and their word streams/thoughts from the data?

    • @phantomhawk01
      @phantomhawk01 1 year ago

      It's clever but limited; it's like recreating what a person is seeing by looking at the reflection on their eyeball.

    • @frogz
      @frogz 1 year ago

      @@phantomhawk01 is that it? i didnt think they were using eye tracking data with the fmri

    • @phantomhawk01
      @phantomhawk01 1 year ago

      @@frogz Oh no, I just used an analogy. What I meant was it's not looking at the source of the mental imagery, but rather a projection of the mental imagery's correlations.
      Like the analogy of the eye: what you see is the light coming in through the eye from the external world, so by seeing the reflection on an eyeball we can get a crude representation of the source of the image perceived.
      I hope that makes some sense.

  • @mrx1278
    @mrx1278 11 months ago +1

    I'm kind of wondering about the fear of AI taking over the world; we are struggling at the moment to do the same, aren't we? Should we fear an entity that can out-compute our own capabilities? Why? What will we lose? Our money? Our gained power? Our control?
    If we as the human race can't control ourselves to date, perhaps a change will do us more good than we so far realize.

  • @bobgreene2892
    @bobgreene2892 1 year ago +4

    Tristan Harris is a most valuable voice of criticism for AI.

  • @martinrady
    @martinrady 1 year ago +1

    One of the best discussions on AI I've seen.

  • @pkalidas
    @pkalidas 1 year ago +3

    Brian Greene is the best explanator of science of our times. This topic is really crucial to our understanding of how AI is already affecting our lives sooner than we think. I get Trystan's concerns.

  • @kcleach9312
    @kcleach9312 1 year ago +2

    Language is pretty close to everything we have learned since people first started communicating! For example, when a scientist discovers something, it isn't anything till it gets labeled, and then it becomes something in our human knowledge of describing everything!!

    • @bigbadallybaby
      @bigbadallybaby 1 year ago

      Yes! But is it that words to humans carry power, nuance, subtle meanings, can convey physical experience, and to the LLM they have no depth, so it doesn't "understand" the words like we do? Because the words are so well written and powerful to us we assume it knows the meanings; we make a leap that it must know. A bit like when we are kids and we project characters, feelings etc. onto a soft toy.

  • @thorntontarr2894
    @thorntontarr2894 1 year ago +18

    Absolutely a fascinating 2 hours to watch and learn. Brian Greene is a great interviewer because he asks questions and then stops and listens. However, it's the last 45 minutes that has really informed me about the risks identified by Tristan Harris, driven by commercial gain, just what I saw happen with "social media" aka META. However, so many outstanding examples are shown in the first two thirds that this video is a must watch, IMHO.

  • @lordgoro
    @lordgoro 6 months ago +1

    Whoever the host/narrator is, he's got speech charisma! Coming from the great John Duran, a high compliment indeed!

  • @honkeykong9592
    @honkeykong9592 1 year ago +3

    Llama2
    “figure out what the hell i was”
    that one was actually the best answer 😂

  • @lisamuir8850
    @lisamuir8850 11 months ago

    35:16 absolutely agree about that. It really needs to be looked at in any scenario

    • @lisamuir8850
      @lisamuir8850 11 months ago

      It is still being man made so I seriously agree

  • @MissplaySpotter
    @MissplaySpotter 1 year ago +5

    As lovely as this talk is, the false statement that you hear at 1:48:44 (including the head shaking and disbelief from Tristan, because he cannot believe what he is hearing from an expert) just shows that Yann is either too old to comprehend what the "world wide web" is, or just lacks information. Of course those things are online. He clearly didn't even consider things like the "dark web" or even all those ".onion" links and data. Stating something with 100% certainty, "that will not happen", "this cannot happen", is one reason why many people are scared. Because he turns his point of view into bold statements; what will he say when he discovers that you can train any LLM on deep-web data in a short matter of time? "Sorry, I was wrong"? Well, then it is already too late.

    • @MissplaySpotter
      @MissplaySpotter 1 year ago +3

      And those guys have been working in this field for decades. It makes me, as a normal guy who tunes in for the news, kind of scared that those guys, with a limited mindset, are in charge of something that has the possibility to change our whole world. Please consider every option slowly and with other experts involved. In this AI area all mistakes could be costly in a matter of weeks, months, or years, to a point where we cannot fix earlier mistakes.

    • @netscrooge
      @netscrooge 1 year ago +1

      ​@@MissplaySpotterIt is frightening. Do the math. LeCun denial + unstoppable Altman/Brockman ambition = major players are not putting our wellbeing first.

    • @bytefu
      @bytefu 1 year ago +2

      And what if they manage to create an AI that can actually reason? Surely, it could just deduce how to make sarin from all the publicly available chemistry knowledge.

  • @quantum_man
    @quantum_man 1 year ago

    It's not about Imagining a Large number of scenarios. On the contrary it's about Imagining only one scenario to the exclusion of everything else to produce an outcome or objective. That's where the true power lies. It's about narrowing the focus, not expanding it. Until we do this all our energy will be scattered everywhere and we'll never find the solution.

  • @techchanx
    @techchanx 1 year ago +4

    Great session. Learnt something more than many other "training" sessions on Gen AI!

  • @DavidButler-m4j
    @DavidButler-m4j 1 year ago +2

    When is everyone at the top of the hierarchy going to ask all of us at the bottom what we want AI to do, rather than just deciding without us at the bottom having any real say in things?

  • @boredludologist
    @boredludologist 1 year ago +4

    Let the autoregressive-model-bashing by Yann LeCun begin!

    • @IronZk
      @IronZk 1 year ago +3

      Autoregressive can't plan...

    • @boredludologist
      @boredludologist 1 year ago +1

      No disagreements on that... And that's not the only shortcoming either! We may get a reminder of the "Reversal curse" of these models as well.

  • @jenniferl8714
    @jenniferl8714 1 year ago +1

    I reckon humanity’s “finite absorption rate” of 30 years, rather than 2 years, reflects the length of a human life.
    Essentially, 30 years is long enough for humans born into a new technology era to gain some power. They are already comfortable players in the game.

  • @crowlsyong
    @crowlsyong 1 year ago +6

    Yann LeCun is akin to Exxon saying “climate change isn’t happening” whilst they fully know it was happening. Why he was allowed to speak on the panel is beyond me.

    • @stevereal-
      @stevereal- 1 year ago

      I don’t think Yann is Exxon at all lol. I think he’s spot on with a lot of his observations. The way he says it won’t win him any elections soon though. AI is here. Accelerating it’s progress I believe is essential for national security for so many different reasons.
      Plus a lot science has its cures and potential evils.

    • @anythingplanet2974
      @anythingplanet2974 1 year ago +3

      Great analogy! LeCun could be the CEO of Philip Morris in the '50s telling us to smoke more cigarettes to become healthier.

  • @KhonsurasBalancedWaytoWellness
    @KhonsurasBalancedWaytoWellness 1 year ago

    I’m wondering if it’s incorrect to assume that the smartest people have no desire to dominate. Aren’t we all driven by status to some extent, and willing to do whatever it takes to maintain it, even unintentionally? I was reading ‘Habits of a Happy Brain’ by Loretta Graiano Breuning, which got me thinking about this. It’s commonly known that dopamine is a driver of our behavior, but according to the book, serotonin, oxytocin, and endorphin are also important, each playing different roles in making us feel ‘happy’. Does anyone have any thoughts or can offer clarification on this? 1:55:51

  • @CandyLemon36
    @CandyLemon36 1 year ago +13

    I'm captivated by the clarity and depth in this content. A book with comparable insights was a pivotal moment in my journey. "The Art of Meaningful Relationships in the 21st Century" by Leo Flint

    • @PazLeBon
      @PazLeBon 8 months ago

      don't have them, life is much better haha

  • @JohnChampagne
    @JohnChampagne 4 months ago

    We STILL don't have laws that require accounting for externalities. IF we charge fees proportional to harmful impacts (emissions, extraction, habitat destruction), the most harmful industries would shrink, change or die. If we share proceeds from fees to all people, the policy will be fair. We could raise fees until random polls show that most people think that impacts of various kinds are being held within acceptable limits. The policy would promote sustainability AND end poverty. (Random polls should allow people time to research the particular questions at hand.)

  • @Praveenfeymen
    @Praveenfeymen 1 year ago +8

    "The only way to stop a bad guy with an Al is a good guy with an Al"😮

    • @shannonbarber6161
      @shannonbarber6161 6 months ago

      "AI, review this code-base and produce a patchset to fix all of the security flaws in for me to review."
      The alternative is elitism. Government selected Haves & Have-Nots.

  • @kamel3d
    @kamel3d 8 months ago

    20:05 I don't agree with him. The human brain is amazing, it is very energy efficient: humans can learn to drive without making a single accident, but computers have to make accidents to learn driving, and that is way less efficient than the human brain.

  • @SoCalFreelance
    @SoCalFreelance 1 year ago +4

    39:21 "Not intelligences....very narrow AI systems" Why limit yourself to excluding AI models that do certain things very well? I think the best approach for AGI is something like Hugging Face where you combine a bunch of different models and allocate them depending on the task at hand!!

    • @Me__Myself__and__I
      @Me__Myself__and__I 1 year ago +2

      True. In fact it is believed that ChatGPT-4 actually consists of numerous smaller internal models that already act like this at least to some degree.

  • @jopa8960
    @jopa8960 9 months ago +1

    What sense does it make to pause AI development when our adversaries and rogue nation-states are making huge AI advances? We need advanced AI for national security, and education to build and maintain these systems.

  • @abhijitborah
    @abhijitborah 1 year ago +5

    One of the best discussions of late. One thing is sure, we will be understanding "our amazing" ourselves better; much before we have AGI.

  • @Ramiromasters
    @Ramiromasters 1 year ago +2

    1:45:00 The argument against open-source AI doesn't hold up considering the information is already available. Practical LLMs don't solely enable bad actors, they also make it easier for good actors to create positive applications and defend against harms.
    This anthropophobic vision of technology harming humanity forgets that states and militaries, not lone individuals, have historically wielded innovations for violence, as history shows.
    Rather than limit access to knowledge, policy should enable adversarial models where detection and prevention systems can match evolving threats. Much like efforts to democratize physical defense capabilities balanced global power dynamics, democratizing access to AI may likewise neutralize the asymmetry.
    More equitable access coupled with legal disincentives for misuse provides a better path than arbitrary knowledge restrictions. There are always abuses of power to guard against, but the arc of scientific progress ultimately trends toward liberation.

    • @sergedadesky5638
      @sergedadesky5638 1 year ago +1

      My speech to text functions were not working. But after your intelligent comment I no longer need to make the point. 😊

  • @rocketman475
    @rocketman475 1 year ago +12

    Yann is correct.
    Tristan's idea to grant control of AI to a few large companies will result in the creation of the nightmare scenario that Tristan wishes to avoid.

    • @chrisl4338
      @chrisl4338 1 year ago +3

      Absolutely. Tristan's views parallel those of the Luddites, which could be characterised as: change is scary, let's not go there. Albeit Tristan's ability to articulate those fears is impressive. As for his proposition that the control of AI should be the preserve of corporate entities, now that is scary.

    • @ItsWesSmithYo
      @ItsWesSmithYo 1 year ago

      Free market won’t let that happen 🤙🏽

    • @rocketman475
      @rocketman475 1 year ago

      @@ItsWesSmithYo
      Yes, that's right, but what if the free market is being interfered with?

    • @ItsWesSmithYo
      @ItsWesSmithYo 1 year ago

      @@rocketman475 personally never seen it not correct. Someone always finds the hole and opportunity, point of the free market.

  • @paulbunion6233
    @paulbunion6233 8 months ago

    I cannot help but be reminded of an ancient Indian parable, "the old parable of 6 blind men, who always wanted to know what an elephant looks like. Each man could touch a different part of the elephant, but only one part. So one man touched the tusk, others the legs, the belly, the tail, the ear and the trunk. The blind man who feels a leg says the elephant is like a pillar; the one who feels the tail says the elephant is like a rope; the one who feels the trunk says the elephant is like a tree branch; the one who feels the ear says the elephant is like a hand fan; the one who feels the belly says the elephant is like a wall; and the one who feels the tusk says the elephant is like a solid pipe. They then compare notes and learn they are in complete disagreement about what the elephant looks like. When a sighted man walks by and sees the entire elephant all at once, they also learn they are blind. The sighted man explains to them: All of you are right."

  • @errollleggo447
    @errollleggo447 1 year ago +6

    I think certain countries will have no qualms about using AI to do some really bad things like creating new weapons. I think progress is essential honestly.

    • @keep-ukraine-free
      @keep-ukraine-free 1 year ago

      True real-world cases show that "good intentions" don't stop bad people. China uses cameras to track everyone, to control people. A Western company that made very capable cameras for surveillance in the West saw its early AI surveillance systems were biased against black/dark skinned people. So this company modified its system to also detect each person's "race" (using skin/face "profiles"). China asked them to add a profile for "Han-Chinese" people. China used it to find & surveil Uyghurs, to "limit" them, by making its people-tracking system decide that non-Han people in China had to be "followed" & monitored more closely.

    • @flickwtchr
      @flickwtchr 1 year ago +2

      If the US is doing it why wouldn't they? The cat is far out of the bag already.

  • @breathlessMay
    @breathlessMay 5 months ago

    YLC is one of the most sane, grounded, and non-hyping voices in AI. That said, and with full respect, I think he needs to be better prepared to acknowledge the issues pointed out by Tristan. Also, he needn't be the defender of Facebook :(

  • @anurag01a
    @anurag01a 1 year ago +3

    Brian: A cool moderator🤩
    Tristan: Scared face & voice😰
    Sebastian: Pleasant & +ve😊
    Yann LeCun: Don't care 😤😏