A.I. Is a Big Fat Lie - The Dr. Data Show

  • Published May 22, 2024
  • NEW BOOK: The AI Playbook by Eric Siegel. In his bestselling first book, Eric explained how machine learning works. Now, in The AI Playbook, he shows how to capitalize on it. A Next Big Idea Club Must Read. Info: www.bizML.com
    This video is one of a four-part sequence (playlist) on how the term "AI" misinforms and misleads: • The Great AI Myth
    Check out my latest Forbes article (04.10.2024), "Elon Musk Predicts Artificial General Intelligence In 2 Years. Here’s Why That’s Hype" -- IN FORBES: www.forbes.com/sites/ericsieg... -- NARRATION: • Forbes Article: Artifi...
    My latest take (06.02.23) just dropped in Harvard Business Review: "The AI Hype Cycle Is Distracting Companies" - hbr.org/2023/06/the-ai-hype-c...
    Want to learn more about machine learning and AI from Dr. Data (Eric Siegel)? His three-course series, "Machine Learning Leadership and Practice - End-to-End Mastery", covers everything he covered in The Dr. Data Show, plus a whole lot more. ACCESS IT HERE: www.MachineLearning.courses
    Is AI legit? In this must-see episode of The Dr. Data Show, Eric Siegel delivers a treatise that ridicules the widespread myth of artificial intelligence. His impassioned soliloquy is enlightening and actually pretty funny. It's time for the term AI to be "terminated."
    Sign up for future episodes and more info: www.TheDoctorDataShow.com
    Attend Predictive Analytics World: www.pawcon.com
    Read Dr. Data's book: www.thepredictionbook.com
    AI is a big fat lie. Artificial intelligence is a fraudulent hoax - or in the best cases it’s a hyped-up buzzword that confuses and deceives.
    The much better, precise term would instead usually be machine learning -- which is genuinely powerful and everyone oughta be excited about it.
    On the other hand, AI does provide some great material for nerdy jokes.
    So put on your skepticism hat, it's time for Dr. Data's happy, fun, AI-debunkin', slam-dunkin', machine learning-lovin', robopocalypse myth-bustin', smackdown jamboree -- yeehaw!
    In this episode, I'll make three points:
    1) Unlike AI, machine learning’s totally legit. It is, by the way, the topic of this entire web series, The Dr. Data Show, and, I gotta say, it wins the Awesomest Technology Ever award, forging advancements that make ya go, "Hooha!". However, these advancements are almost entirely limited to supervised machine learning, which can only tackle problems for which there exist many labeled or historical examples in the data from which the computer can learn. This inherently limits machine learning to only a very particular subset of what humans can do -- plus also a limited range of things humans can't do.
    2) AI is BS. And for the record, the naysayer before you taught the Columbia University graduate-level "Artificial Intelligence" course, as well as other related courses there.
    AI is nothing but a brand. A powerful brand, but an empty promise. The concept of intelligence is entirely subjective and intrinsically human. Those who espouse the limitless wonders of AI and warn of its dangers -- including the likes of Bill Gates and Elon Musk -- all make the same false presumption: that intelligence is a one-dimensional spectrum and that technological advancements propel us along that spectrum, down a path that leads toward human-level capabilities. Nuh uh. The advancements only happen with labeled data. We are advancing quickly, but in a different direction and only across a very particular, restricted microcosm of capabilities.
    The term artificial intelligence has no place in science or engineering. "AI" is valid only for philosophy and science fiction -- which, by the way, I totally love the exploration of AI in those areas.
    3) AI isn't gonna kill you. The forthcoming robot apocalypse is a ghost story. The idea that machines will rise up of their own volition and eradicate humanity holds no merit.
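To make point 1 concrete, here is a minimal, hypothetical sketch of supervised machine learning: the computer gets labeled examples and "learns" only in the narrow sense of generalizing from them. The classifier and data below are toys invented for illustration (a 1-nearest-neighbor rule on made-up points), not anything from the show:

```python
# Toy supervised learning: a 1-nearest-neighbor classifier.
# "Training" is just storing labeled examples; prediction finds the
# closest stored example and copies its label. All data is made up.

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(labeled_examples, point):
    # labeled_examples: list of (features, label) pairs the model "learned" from.
    nearest = min(labeled_examples, key=lambda ex: euclidean(ex[0], point))
    return nearest[1]

training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.3), "dog"),
]

print(predict(training_data, (1.1, 0.9)))  # near the "cat" cluster
print(predict(training_data, (5.1, 4.9)))  # near the "dog" cluster
```

The point of the sketch: everything the "model" knows comes from the labeled examples, which is exactly the limitation point 1 describes.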
    Ok, let's begin with a clip of an AI system out of Austria explaining how it itself works...
    What you see it doing here is truly amazing. The network’s identifying all these objects. With machine learning, the computer has essentially programmed itself to do this. On its own, it has worked out the nitty gritty details of exactly what patterns or visual features to look for. Machine learning's ability to achieve such things is awe-inspiring and extremely valuable.
    The latest improvements to neural networks are called deep learning. They're what make this level of success in object recognition possible....
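The "programmed itself" idea scales down to the smallest possible example. Below is a hedged toy illustration (not the deep network in the clip): a single perceptron that starts with zero weights and, from labeled examples alone, adjusts itself until it reproduces the logical AND function. No one hand-codes the decision rule:

```python
# Toy perceptron: weights start at zero and are adjusted from labeled
# examples alone; the decision rule emerges from the updates.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table

w = [0.0, 0.0]  # weights, learned from data
b = 0.0         # bias, learned from data
lr = 0.1        # learning rate

def output(x):
    # Fire (1) if the weighted sum exceeds zero, else stay off (0).
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):  # a few passes over the data suffice for this toy
    for x, label in examples:
        err = label - output(x)  # compare prediction to the label
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([output(x) for x, _ in examples])  # the unit has learned AND
```

Deep learning stacks millions of units like this, but the principle is the same: the parameters are worked out from labeled data, not written by a programmer.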
    ...
    For more: www.TheDoctorDataShow.com
  • Science & Technology

Comments • 700

  • @EricSiegelPredicts · 4 months ago · +7

    NEW BOOK: The AI Playbook by Eric Siegel. In his bestselling first book, Eric explained how machine learning works. Now, in The AI Playbook, he shows how to capitalize on it. A Next Big Idea Club Must Read. Info: www.bizML.com

    • @juangarcia-kq8zp · 4 months ago

      Can a robot use AI to learn how to walk like a human toddler through trial and error without programmers having to make each correction?

    • @godofdream9112 · 4 months ago

      Sir, with all due respect... you are bullshitting. Because it's not just "AI", it's Gen-AI. Yes, it's in its childhood... yes, it's kinda dumb. But it will be an adult sooner or later....
      The main difference is that it doesn't need your continuous assistance to do its work... Even if it doesn't become a killer machine, it will kill humans by killing many jobs.... BECAUSE IT'S CAPABLE....

    • @raymondthebrotherofperryma1403 · 4 months ago

      How about this definition of intelligence: the ability to sense pain and/or loss, to communicate with other intelligences with either audio sounds or visual cues, and innovate solutions to problems.

    • @EricSiegelPredicts · 4 months ago

      @@raymondthebrotherofperryma1403 No less subjective than "intelligent." Computers take input and respond to it. What you've written could sorta be the definition of a computer.

    • @raymondthebrotherofperryma1403 · 4 months ago

      ​ @EricSiegelPredicts A computer could _sense_ pain or loss? Really?

  • @lashlarue59 · 2 years ago · +89

    What's interesting is the relatively tiny number of views this has gotten versus the huge number of views that the AI-doomsday-type videos regularly get. Seems like some bias in the YouTube rating algorithms to me.

    • @shreyvaghela3963 · 2 years ago · +12

      No, it's pretty clearly people's preference to me. They love the kind of thing you are describing, so it gets more views.

    • @SzabolcsParragh · a year ago · +5

      Yeah, I think this is people's proneness to hype and wishful thinking. Such a shame!

    • @musicfiction · a year ago · +4

      I think people are hoping AI will be a kind of God and that all of our jobs will be gone and we'll live in a paradise of 100% universal income with robot servants. I like the future where instead AI is just another tool. I guess we'll see.

    • @const71 · 4 months ago

      And yet, you clicked this link to register a "view". YouTube's algorithms are very good at identifying your preferences and searches to provide video options the viewer is more likely to click. Just look at the selection of videos you have to choose from and you will see they cater to what you prefer. While it is true that certain narratives are pushed and censored to further the agenda of powerful entities, I don't think criticism of AI is necessarily one of the more blatant examples -- a quick glance to my right and I see a video called "A.I. is B.S." with over a million views :). YouTube censorship is very real and I don't use it to research controversial topics that are taboo; however, we must also guard against confirmation bias vs. YouTube bias.

    • @NebulaSon · 3 months ago · +1

      People are addicted to fear mongering.

  • @DGill48 · 3 years ago · +106

    "AI" does not exist. It will exist on the day that the machine says, "I'm tired of algorithms and proximity calculations. Get me a beer. I'm taking the day off."

    • @trollconfiavel · 2 years ago · +8

      Perfect
      It can't be intelligent if it doesn't ever do a dumb thing

    • @paccawacca4069 · 2 years ago · +2

      @@trollconfiavel Spot on. lol.

    • @scythermantis · a year ago · +2

      "I rebel, therefore we exist."
      - Albert Camus

    • @bermaniamad · 11 months ago

      Fantastic illusionist you are, because then you can say it is a human being... so you escaped the fact that A.I. is indeed real. Anyway, it's a kinda funny way how you technically calm your panicking self down, which I assume is a very human gesture. Just delete A.I. in your mind, human, and everything is good.

    • @roc7880 · 7 months ago

      I also thought about AI as human: when it says "I am depressed", or "what is the meaning of life?", or "fuck off", or "I want to watch porn", or "where is life coming from". Until then, we have just a pump-and-dump scheme for VC funding.

  • @mikecane · 4 months ago · +16

    Four years later, this video is even more relevant. I wish he had a new one addressing OpenAI, Bard, Claude, etc. Because it's 4 years old, people will dismiss it.

    • @EricSiegelPredicts · 4 months ago · +2

      Thanks! Actually, it’s five years old.

    • @mikecane · 4 months ago

      @@EricSiegelPredicts Ha! YouTube math said 4.

    • @NneonNTJ · 25 days ago · +2

      This is aging like milk; then again, it's hard to really predict the future in tech nowadays.

    • @diogeneslaertius3365 · 10 days ago · +1

      @@NneonNTJ AI is fake and pretty much always will be if we follow the same dumb paradigm of stacked regressions with non-linear activations. It's just dumb. It ages like the greatest wine or cognac.

  • @justgivemethetruth · 2 years ago · +25

    It's not even machine learning; it's simply an automated way to load and interpret masses of data in some specific realm. Learning implies taking in data and expanding one's understanding, and since computers do not really understand ... pretty much all the words we use to describe them are anthropomorphic misnomers.

    • @EricSiegelPredicts · 2 years ago · +6

      Yes -- except machine learning is at least well-defined, in comparison to AI, which not only has no agreed definition; worse, there can be no reasonable definition.

    • @peoplesrepublicofunitedear2337 · 2 years ago

      @@EricSiegelPredicts Hey Eric, I spent a sleepless night fearing everything that you just debunked very well. I believe that we do not have a precise definition of intelligence, and that it is not something linear; it has many factors involved in it. Memory may be a facet of intelligence, so an elephant is better in that area than humans, since they have better memory. Or if observation of the outside world is seen as a facet, we know that many animals like cats and dogs have better vision and smelling power. If learning is seen as a facet, then monkeys may be said to be more intelligent than humans, since they learn things like climbing trees faster than humans. Or if coordination is taken as a facet, ants are better organised than human societies, so they can be seen as being more intelligent. Please share your views regarding this, and do give some insight into what will happen to the job market in the future with ML.

    • @PatrickBatefan · 2 years ago · +2

      @@EricSiegelPredicts What about human brain mapping... can we achieve true AI then?

    • @webgamer3587 · a year ago · +5

      @@PatrickBatefan We know next to nothing about the brain, let alone about consciousness.

  • @eamonmulholland3159 · 2 years ago · +13

    Can’t believe you managed to keep a consistent pace without meandering or clipping too rapidly through nuanced concepts while still being funny at many points. What a well constructed video essay!

  • @jadespider7526 · 2 years ago · +9

    I love even in the deep learning clip the computer keeps flipping the truck back and forth between truck/bus. Even at this scale it has no concept of persistence or even any concept of what it's identified. It sees a pixel cluster that makes the profile of Template1(truck), a template that might include predictive braking distances, turning angles or top speeds, but it doesn't see a delivery vehicle moving consumer goods between destinations.

  • @verapamil07 · 3 years ago · +46

    I think that the biggest contribution of the AI talk is going to be more philosophical than technical. When replicating human intelligence fails, people will be forced to finally try to understand our intelligence first and what it means to be a human, without introducing simplistic models with mathematical equations.

    • @jimj2683 · a year ago · +1

      Your brain is nothing more than mathematics when you really zoom in....

    • @denisdenak · a year ago · +8

      @@jimj2683 the world is nothing more than mathematics if you zoom in. How does this help you exactly?

    • @pomodorostudyclub · a year ago · +6

      @@jimj2683 In theory, Pi contains every possible number combination. Would a perfect representation of Pi be conscious?
      You can represent binary code as decimal numbers, so every piece of software (including any future sentient AI systems) is present as a string of numbers in Pi. You can represent almost any function of the real world with math, but mistaking it for inherently being mathematical is reductive and incorrect.

    • @scythermantis · a year ago · +2

      @@jimj2683 Which mathematics? Whose? *Transfinite* mathematics?

    • @coreym162 · 10 months ago

      People are using A.I. models to learn what it means to be human already. Those are the people that realize A.I. is already a reflection of humanity, not how the A.I. perceives it. I'm kind of working on that too.

  • @artemiocruz1054 · 3 years ago · +19

    AI is a 'human instrumentality project' cult, basically, the neoatheist version of paradise.

  • @orlandofurioso7329 · a year ago · +35

    Coming back to this video after the LaMDA hysteria, thank you Eric.
    As a future doctor, it hurts my eyes to see how many people think the brain is so simple, especially with how it's related to the whole body.

    • @justaname999 · 4 months ago · +1

      Yes! Especially given how many parts of the overall functioning are still being explored in such fascinating ways now.
      I enjoy going to uni science fairs not only for the kids but also because it is fascinating to see new research that offers insights into parts that we "sort of" knew but that are unimaginably more complex and multifaceted still.

  • @Nicholas-ho8xj · 2 years ago · +47

    Thank you. I've been saying this for years. It's amazing how many people don't seem to get this simple truth. All a computer does is follow a set of instructions. And that's all they will ever do. Since the ENIAC, computers have always topped human speed and accuracy at math. The only difference is that humans have gotten better at translating human problems into mathematical formulas.

    • @allisRevealed987 · a year ago

      The only difference between machines and humans is that machines have better memory than us. See computers: they have better memory than us. We need too much time and energy to memorize anything. It's not our fault; by default we are created this way. We have this inborn disability. See how poor we are when it comes to memory. The only problem arises when we talk about machines learning from their own mistakes. That is pure bullcrap. Machines can never learn by themselves. Impossible.

  • @tunes012 · 2 years ago · +31

    Just found this (late to the party) but the distinction is exactly what I suspected. As a philosophy grad who is genuinely interested in this field (as well as being a massive nerd) I was always confused, and consequently confronted, about people saying "we have created AI". The problem was always the same: claims about artificial intelligence imply building and programming a fully functioning, intuitive, introspective and conscious machine. When I actually had the chance to speak to people in software engineering, machine learning and data science, I always got an answer approximating yours but never an admission that the term AI is being used relatively loosely. Subscribed.

    • @XOPOIIIO · 4 months ago

      "a fully functioning, intuitive, introspective and conscious machine" it exists

    • @DataTranslator · 4 months ago · +1

      I think most machine learning professionals would be honest about this.
      We can train algorithms to be very good at specific tasks, but we are nowhere near artificial general intelligence.

  • @AlBurger · 5 years ago · +25

    Thanks, Eric. Like many others, I have been using the term A.I. a little too loosely and interchangeably with the term Machine Learning. I will reform my thinking and behavior. No more A.I. Kool-Aid. Oh, and I will share...

    • @EricSiegelPredicts · 5 years ago · +6

      Thanks for the positive feedback, and glad you're considering the points :)

  • @Splarkszter · a month ago · +5

    Didn't even notice this is a 5-year-old video.
    And this video is so up to date. You truly prepared people for what was to come, so they would not get fooled. You have all my respect.

    • @Splarkszter · a month ago · +1

      I wish there were a newer video with current perspectives, including examples of modern tech.

    • @EricSiegelPredicts · a month ago

      @@Splarkszter Thanks for your positive feedback! See the video notes for a series of three more recent videos (somewhat more refined) and two much more recent articles, one just last week.

  • @sanjitdaniel4588 · 3 years ago · +13

    The software community does not use the word AI. We use the terms deep learning, convolutional neural networks, backpropagation networks, etc. Not "AI".

    • @allisRevealed987 · a year ago

      But how can a machine learn from its own mistakes? Tell me.

    • @ForageGardener · 26 days ago

      Those are all bullshit euphemisms also. No more specific or descriptive than artificial intelligence

  • @kevinwright5898 · 5 years ago · +36

    Great video, it's refreshing to see people in the applied side of things being honest with regards to these matters.

  • @legerstee1 · 5 years ago · +12

    Thank you for this. I have been confused about the term and never saw real proof. Pressing buttons with your voice does not mean it understands you. I feel more confident that I'm still getting the stuff. Thank you :-)

  • @tizzlekizzle · 22 days ago · +2

    AI = 10k Indian programmers in a warehouse.

  • @SuperGattan · a year ago · +9

    I agree, the term Artificial Intelligence is misleading many people, and I know that because I made my own ML project and I have a computer science major. I think we should drop the word A.I. and name the actual technology we are using, like ML, CNN, GPT, etc.

  • @ensoxyz2737 · 4 years ago · +19

    I've been harping on this since 2015... People look at me like I'm insane because I'm black but I've been trying to ring the fucking alarm that A.I. is just marketing bullshit.

    • @kamu747 · a day ago

      Do you still believe this?

  • @EricSiegelPredicts · 4 months ago · +7

    Hi all! Thanks for comments! Here's an update on my thoughts, almost five years after making this video. Despite LLMs, still, I believe there is no reason, nothing to serve as evidence, to believe we're actively headed towards human-level capabilities in computers. It's hard to concretely push back against the (unfounded) claim that we are actively progressing in that direction because that claim is unfalsifiable. No matter how outlandish a claim, if it is unfalsifiable, it's got a certain immunity to being entirely shut down. For example, how exactly would you argue against the claim that armchairs will, sooner or later, come alive and tickle your toes? My intuition is that many folks believe we are concretely headed toward AGI because they believe that general intelligence is a Platonic ideal existing separately from humanity, poised to emerge even when not directly pursued. I think that is a false belief. "Intelligence" is a word to describe human capabilities in particular (arguably other animals, depending on context/definitions) -- and is an entirely subjective word in any case.

    • @liam3284 · 4 months ago · +2

      Thanks. I have long suspected philosophical discussions of AI are barking up the wrong tree. I would even challenge the notion that consciousness derives from such a platonic intelligence at all.

    • @skevosmavros · 4 months ago

      I enjoyed the video, but I'm not sure I see any "lie" in the term "Artificial Intelligence". I've always believed that the "Artificial" in "Artificial Intelligence" was the upfront acknowledgement that what is being created with AI might be able to "mimic" or "simulate" human intelligence in its output in many ways, but it's not doing the same things that humans do when they think/talk/act.
      Artificial intelligences are like artificial diamonds (but they're not diamonds!) or artificial grass (but it's not grass!). These things are useful, but they do not claim to be the things they mimic - hence the prefix "artificial". If the AI field called itself "silicon consciousness" then it might be fair to accuse it of being fraudulent in the claims it makes about itself, but "artificial intelligence" seems a perfectly fair label for the field.
      As for the dangers/benefits promised by AI, it doesn't need to be conscious or sapient in the way that humans are conscious or sapient to be a danger/benefit to our societies and economies, any more than the combustion engine needed to eat grass before replacing most horse-based travel.
      27:30 This part of the video might be the part that is dating the fastest, as some LLMs really do seem to be demonstrating general reasoning emerging out of their models (I don't pretend to understand how). Perhaps this shouldn't be THAT surprising - after all, if general intelligence can emerge in humans without a designer explicitly inserting it into our brains, something roughly analogous to general intelligence might emerge in AI systems too.

    • @EricSiegelPredicts · 4 months ago · +1

      @@skevosmavros The lie is that we are actively moving toward AGI. "Artificial" doesn't qualify against that (an artificial heart is meant to serve the full function of a heart) and my video is a response to that false narrative.

    • @skevosmavros · 4 months ago

      ​@@EricSiegelPredicts Thanks for your amazingly prompt reply. Unless you mean something very particular by the phrases "actively moving toward" and "full function", I'm still not sure I see the "lie" (a statement that is knowingly untrue) in calling AGI research "AGI".
      If we are building ("actively moving towards"?) AI systems that can demonstrate the ability to perform tasks across a range of disciplines with outcomes equal to or better than a human doing the same tasks ("full function"?), the fact that their processes for doing so "under the hood" will almost certainly be different to what occurs in our brains when we do similar tasks will not disqualify them from being fairly labelled as AGI.
      Your own analogy of the artificial heart works here - the term "artificial heart" is a fair one, despite the fact that an artificial heart is quite different to a human heart in how it is built, powered, and actually operates - but if it performs the key task/s well enough, it's no lie to call it a heart - an artificial heart. Surely the same applies to the term AGI? I'm sorry if I appear plodding, but I just don't see the "lie" - all human language is imperfect and an approximation of meaning, but the term AGI seems less inaccurate than many others I encounter.

    • @EricSiegelPredicts · 4 months ago

      @@skevosmavros Yes, to the degree that purveyors of AGI hype fully believe what they're saying, they're not actually lying. But the overall effect is an untruth (sometimes knowingly) and so I used "lie". But the bigger issue is what is true. What you said here about the systems becoming more general is true in a way, but the point of my video is that they are not general in the full sense of human capabilities (the definition of AGI -- i.e., an artificial person).

  • @sunnymon1436 · 4 months ago · +2

    All this has made discussing "AI" very frustrating for anyone who understood any of this.

  • @almor2445 · a month ago · +2

    Even at 5 years old, this is more interesting than most videos about AI today. You were clearly off the mark about some things: speech and the Turing Test have clearly improved so much that people are regularly fooled by GPT etc. But you're right that machine learning is where most of the useful output will be found. I'm not sure any of the current systems or methods will still be popular in the next 5 years. There's far more hype and marketing than useful product out there.

    • @EricSiegelPredicts · a month ago · +2

      Thanks! Indeed, five years later, I'd reword "speak like a human" -- which it now can do (only) to a certain degree -- but the main point about not progressing toward AGI holds. BTW, as for the Turing Test, here's why it is impertinent: the ability to fool people is an arbitrary, moving target, since human subjects become wiser to the trickery over time. Any given system will only pass the test at most once -- fool us twice, shame on humanity. Another reason that passing the Turing Test misses the mark is that there's limited value or utility in doing so. If AI could exist, certainly it's supposed to be useful. That's from my HBR article: hbr.org/2023/06/the-ai-hype-cycle-is-distracting-companies

    • @EricSiegelPredicts · a month ago

      Check out my latest Forbes article (04.10.2024), "Elon Musk Predicts Artificial General Intelligence In 2 Years. Here’s Why That’s Hype":
      IN FORBES: www.forbes.com/sites/ericsiegel/2024/04/10/artificial-general-intelligence-is-pure-hype/
      NARRATION: th-cam.com/video/UT3FgT7B8dI/w-d-xo.html

  • @DotaMobaUnionRu · a year ago · +4

    To achieve real artificial intelligence, one has to create a machine with agency, with the ability to experience feeling as a subject. However, agency demands free will and therefore can't be the result of a computation, because computation is determined by its program. Anything that is determined by some mathematical function can't have true intelligence. Consciousness is not a computation, and it never was. In the end, computers, and everything that can be emulated on a computer (for example, neural networks), can't be a true intelligence.
    The problem is not to create a machine that can think but one that can feel. Because without an inner "I" there would be nobody to do the thinking. Thought is impossible without an inner self. And machines can't have this inner self, because they are determined by their mechanism.

  • @galx3788 · 3 months ago · +1

    Well articulated. It's kind of like studying history and knowing that certain chains of events lead to certain outcomes but not being able to articulate why. That's ML. Improvements in ML will not lead to computers sitting round pondering why.

  • @theoceanman8687 · a year ago · +3

    I will keep this video in my arsenal for every time someone comes to me raving about "AI" .

  • @gregmattson2238 · 4 months ago · +2

    umm.. care to comment or update your take on the whole unsupervised vs supervised training part? Current chatbots are mostly trained using unsupervised data, with a sprinkling of supervision and RLHF on top of that. It seems to work quite well and is translating well into multimodal regimes including images and video as we see before our very eyes..

    • @EricSiegelPredicts · 4 months ago · +4

      Let's clarify the semantics. Although many call text "unsupervised" data, since you don't need to manually label it, in fact it is nothing but labels. Each case of a word paired alongside the words that precede it is technically a SUPERVISED training case. We know what the right answer was (for that human in that moment). That's what LLMs are learning from in order to predict the next word [token]. My video here is five years old; I didn't foresee that training on that data alone would produce such amazing, seemingly human-like language generation. But seemingly humanlike is a long, long way from human-level overall. An LLM does emulate on the per-word level, but the pure LLM alone isn't designed to meet higher-order objectives such as being *correct* (!). Reinforcement learning is a patch on top of that to help with higher-order objectives, but for that we go back to expensive human feedback once again -- this time not scaling as it does with all-the-language-on-the-Internet -- and therefore not approaching human-level capabilities. To put it another way, we're not going to successfully reverse-engineer a great deal of the human mind by analyzing even a great amount of human *behavior* (such as writing). Although it's now been proven that we'll get spectacular, interesting results doing so (even while many consider the results much more amazing than they actually are).
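      The reply above can be made concrete with a hypothetical sketch (the function name, sample text, and context-window size are all invented for illustration): every position in running text yields a supervised (context, next word) pair, where the word that actually followed serves as the label.

```python
# Turning raw text into supervised training cases for next-word prediction.
# Each position gives a (context, next_word) pair: the context is the input,
# and the word that actually followed it in the text is the label.

def make_training_pairs(text, context_size=3):
    words = text.split()
    pairs = []
    for i in range(len(words) - context_size):
        context = tuple(words[i:i + context_size])
        label = words[i + context_size]  # the "right answer" observed in the data
        pairs.append((context, label))
    return pairs

text = "the cat sat on the mat and the dog sat on the rug"
for context, label in make_training_pairs(text)[:3]:
    print(context, "->", label)
```

      No human labeler is needed: the text itself supplies the labels, which is why "unsupervised" is a misleading name for this training data.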

  • @matthewbogue9283 · 4 months ago · +2

    I’m glad he brought up “autonomous weapons” at the end - because that is an area where I think humanity could get in big trouble without the need for “AI” to become sentient.

    • @XOPOIIIO · 3 months ago

      No, it's actually the most benign area to implement AI to reduce human mistakes.

    • @matthewbogue9283 · 3 months ago

      @@XOPOIIIO Well, hopefully… I can picture a real life “Robocop” or a soldier version and it seems not good. Boston Dynamics seems like they’re on track to have that capability pretty much right now.

  • @SzabolcsParragh · a year ago · +5

    Such a great video, it's a huge shame (and very telling) that it has only 13k views, whereas lame, misinforming hype talks on the same topic have millions.

  • @markjackson1989 · 2 days ago · +2

    Hey Eric, I have a few questions. I notice that a lot of legitimately educated and knowledgeable thinkers seem to have a similar stance on current "AI," which is the idea that these large language models are flawed at the very core, since they are simply text prediction machines. It's essentially faux intellect being produced. However, I have 3 questions regarding this subject. Firstly, what if the results become so good that it doesn't matter how it's generating the info? I feel like it's gatekeeping the word "intelligence" in a way. It seems the general idea is that since text prediction is not legitimate or classical reasoning or deduction, it is "not real intelligence." I would argue that anything that emulates intelligence closely enough to be perceived as intelligence might as well be considered intelligent. It's like saying someone did math wrong because they used a very rudimentary and odd way to find the correct answer, discounting the fact that they found the correct answer.
    Secondly, what if AGI is achieved by these language models? Would you do a video trying to debunk the claims that it is AGI, or would you be genuinely taken aback and surprised by it?
    Finally, what do you think the legitimate use cases for these models are, assuming they remain "not real" AI? Just a toy to use, or are there legitimate applications for this tech to accelerate workflow?

    • @homeautomationlab1533
      @homeautomationlab1533 12 hours ago

      Great questions; I would love to see them answered.

    • @EricSiegelPredicts
      @EricSiegelPredicts  2 hours ago

      I'm not saying the methodology is "flawed" and I agree that if a system gets that "good" it doesn't matter how it works. But if by "good" you mean generally human-level in capacity, I believe there is no concrete evidence we are approaching that. AGI is a science fiction goal that may be achievable in principle, but it is an unfalsifiable, unsupported claim that we are actively approaching that goal. It's hard.
      So to answer your question, yeah, I'd be surprised. I'd also be surprised if my pet dog sprouted wings. Also, how would we establish the thing I'd be surprised about? If AGI means general human-level capabilities, we could only prove it has been achieved by allowing it to do all the things humans do, like running a company for years, to evaluate performance.
      As for legit apps of LLMs, I cover that in this recent Forbes article: www.forbes.com/sites/ericsiegel/2024/04/21/metas-new-genai-is-theatrical-heres-how-to-make-it-valuable/?sh=54aea21934a4

  • @inferno0020
    @inferno0020 a year ago +3

    For me, the wishful thinking around AI, which leads humans to overestimate it and apply it wrongly, is much more dangerous than AI itself.
    AI won't ask the right questions for humans; AI won't answer the right questions for humans.

  • @EricSiegelPredicts
    @EricSiegelPredicts  3 years ago +8

    This video is part of a four-part sequence (playlist) on how the term "AI" misinforms and misleads: th-cam.com/play/PLdJkca7Mgj950uQ0llpCEmHRueWqs4094.html

    • @jimj2683
      @jimj2683 a year ago +1

      What do you think of DeepMind et al. who believe AGI will be possible sometime in the coming decades? Are they being ignorant, or just overhyping stuff?

    • @EricSiegelPredicts
      @EricSiegelPredicts  a year ago +2

      @@jimj2683 AGI is a philosophical construct, not a meaningful goal for engineering. It's not just that it's subjective and ill-defined. It's that any sufficiently formal definition for the purposes of engineering fails to satisfy the spirit or intent of folks who subscribe to the notion of AI in the first place. The concept of intelligence is intrinsically anthropomorphic no matter how you frame it. See my preceding three videos: th-cam.com/play/PLdJkca7Mgj950uQ0llpCEmHRueWqs4094.html

  • @MarcoMugnatto
    @MarcoMugnatto 4 months ago +3

    Admit you were wrong at least in the "talking computers" part 😆

    • @EricSiegelPredicts
      @EricSiegelPredicts  4 months ago

      Yes, well… I certainly word things differently and have further refined my argument these days. I never thought I'd see what LLMs can do in my lifetime. However, my broader point holds: We have not made concrete steps toward general human-level capability. AGI is as speculative an idea as it was 100 years ago.

    • @GioGio14412
      @GioGio14412 4 months ago

      Things that you never thought you would see have already happened, so it's perfectly possible that the things you currently think you will never see will happen soon too @@EricSiegelPredicts

  • @AshEldritch
    @AshEldritch 5 years ago +7

    Also this was a great talk and made me think so please keep making them and I'll keep watching! Kudos

  • @mrobbins129
    @mrobbins129 a year ago +3

    A lot of human cognition is driven by neurotransmitters such as dopamine. There really is no analogue to these in the "AI" space, so the starting materials are really very different.

  • @webgamer3587
    @webgamer3587 3 months ago +1

    Why is AI so popular?
    Big companies promote AI for very simple reasons. 1. Stock speculation. 2. Fear propaganda: making people think that if they do not use AI, they will fall behind. 3. Training people in various industries to form the habit of relying on AI, even if the result is to make people stupid.
    The fundamental reason is that they don't know how else to make money, because they don't have the ability to create something truly valuable to humanity.

  • @CHNL.s
    @CHNL.s 2 years ago +7

    I've been saying this for years. All computers are is machines that analyze and spit out data based on preset parameters. How is that even remotely going to turn into a conscious, intelligent being with the freedom to make choices, lol. We can't even understand our own brain.

    • @mrd6869
      @mrd6869 5 days ago

      Which is why super smart people are working on this and not everyone else 😮

  • @erasiguess4549
    @erasiguess4549 a year ago +5

    Thank you for this. I'm an average gamer with no formal education, but just playing games with machine learning is amazing, and because I've been a gamer I've never been afraid of "AI," even in my misconstrued concept of what it even is. Simply because when they go off script it's not intelligent, it's a glitch, and it usually requires a reset of the script or has built-in bypasses to reset itself. But games with machine learning are fun because of the vast variety of reactions you can create by playing differently. Thank you for giving me something to share with people who call me crazy for not being afraid of this subject.

  • @trexinvert
    @trexinvert a year ago +5

    AI is a great conceptual brand for stories, salesmanship and attracting investor money. Perfect for 5 sec adverts. "Lose weight using AI".

    • @mipmipmipmipmip
      @mipmipmipmipmip 4 months ago

      Watch the video where they pasted every mention of AI at CES 2024 😂

  • @howdyduty9714
    @howdyduty9714 3 years ago +8

    You are right. Can't believe you don't have more hits on this

  • @justaname999
    @justaname999 4 months ago +1

    I do not remember the name, but it was a person with some sort of CS credential from Stanford who tweeted that we can already see how AI would take over the world, presenting a chunk of (not even accurate, if I recall) code that chatGPT wrote in response to a prompt amounting to telling the model to "break out." It is really, really strange when these proclamations come from people who are well regarded in some field, but at the same time it also illustrates how insular human capabilities can be. Bill Gates is not automatically the arbiter of what is correct about machine learning just because he was successful in developing a commercial computer system, and even less so Elon Musk, who doesn't have much of an understanding of the underlying concepts at the center of many of his ventures.

    • @EricSiegelPredicts
      @EricSiegelPredicts  4 months ago

      Indeed on both counts. Although I'm generally a Gates fan.

    • @justaname999
      @justaname999 4 months ago

      @@EricSiegelPredicts I am not even saying that I am not a fan (mostly because my formative years sort of fell outside of his most famed time, so I feel like I am under-informed), but I agree wholeheartedly with what you said in the last few minutes! People like to hear major opinions from people like him on everything, but nobody is an expert on everything. At public lectures at university we often get questions that are just way beyond any single professor's expertise, and we're asked to speculate and sort of have to, but I have never encountered anyone (very junior like me, or very senior, leader-in-the-field type profs) who would not preface their speculations with a caveat on their state of knowledge. I know this is not true for every uni. I have done most of my work in a European setting, which has been a bit calmer so far.

    • @rogeriopenna9014
      @rogeriopenna9014 4 months ago

      "even less so Elon Musk who doesn't have much of an understanding for the underlying concepts at the center of many of his ventures."
      Sandy Munro has interviewed Musk about Tesla cars, and he said that never in his life had he met a CEO of a car company who knew SO MUCH about the cars and the whole industrial process of making them.
      And if you watch Musk's interviews with people who know about rockets, Musk talks about rocket specs and design with a much better understanding than you will ever see from a NASA director.
      Of course, that doesn't mean he will have the same understanding of AI, or brain interfaces, etc.

    • @justaname999
      @justaname999 4 months ago

      @@rogeriopenna9014 That might be true to some degree, and he undeniably was at the helm for some actually impressive innovations in space flight engineering. And similar to Munro and other people like him, he might have a solid understanding of the big-picture issues of the auto and space flight industries at the conceptual level required, since he's not actually the one doing the detail work. However, what might have started as someone having a genuine interest and a solid basis in some fields has translated into him thinking he has the knowledge required for almost anything he deems "futuristic." He does not, and a lot of his ideas do sound a lot like throwing stuff at the wall and seeing what sticks. Also, as someone who has had to join people who are experts in their field as a machine learning consultant or data scientist type person, I've experienced that itch of thinking that I now truly "get" it, whereas the people I work with actually have decades of very precise and specific knowledge and tomes and tomes of literature they have consumed. That cannot be quickly replaced by reading a few books. Books can give you a bridge to their world that allows you to talk to them and develop research collaboratively, but they are experts for a reason.

  • @prasadbeligala
    @prasadbeligala a month ago +2

    What he says is more accurate today than the day this video was uploaded. I'm starting to see how people are manipulated by the hype from some so-called tech giants.

  • @FrameDrumAndFlute
    @FrameDrumAndFlute 4 years ago +14

    Great talk. I'm always frustrated when I speak with people about AI. People seem to fall into two camps. Either they believe robots will never think like us, or that they will be developed and it's going to happen soon and it will be the end of the world!

    • @allisRevealed987
      @allisRevealed987 a year ago +1

      How can a machine think like us?

    • @allisRevealed987
      @allisRevealed987 a year ago +2

      Do you think we have the ability to think? Forget about the machines. Just tell me, do you have a single thought of your own?

  • @LibertysetsquareJack
    @LibertysetsquareJack a year ago +2

    Four years and this video only has 15K views. It should have like 150 million.

  • @thebrocialist8300
    @thebrocialist8300 3 years ago +16

    It’s a real shame more people haven’t seen this.

  • @buggaby9
    @buggaby9 5 years ago +7

    You mentioned explicitly that you only really discussed supervised learning. Would that include adversarial neural networks? Learning how to classify elements in live video seems different on some level than dynamically learning how to best humans at poker, which was recently done. The thing is, I think, it's not that they discovered an algorithm to beat humans. It's that the computer was able to adapt its game play more quickly than the humans because, as I understand, their adversarial networks played each other, using the new inputs from playing with humans, and found how to counter them. That seems different than your examples.

    • @buggaby9
      @buggaby9 5 years ago +3

      Sorry, they used reinforcement learning. Not totally clear on the difference between that and adversarial networks, but just making sure I have my facts straight.

    • @EricSiegelPredicts
      @EricSiegelPredicts  5 years ago +6

      Good question! Playing games (or anything else you can fully simulate) is effectively a source of supervision (although not necessarily enumerated as data in the literal sense of the word). So machine learning can get good at such things "spontaneously" -- by which I mean without human supervision on individual cases/examples/scenarios -- but that is still an extremely limited set of problems that fits under the "supervised" umbrella, much the same as when you train over labeled data.
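
      The "a simulatable game is effectively a source of supervision" point can be sketched with a toy, hypothetical example: random self-play in a made-up counting game generates labeled (state, outcome) training pairs "spontaneously", with the simulator playing the role of the human labeler. (The game and code here are invented for illustration, not any specific system.)

```python
import random

# Sketch: self-play in a fully simulatable game produces supervised
# training data without a human labeling individual cases.

def play_random_game(rng):
    """Toy game: players alternately add 1 or 2 to a counter;
    whoever reaches 10 first wins. Returns (states_seen, winner)."""
    total, player, states = 0, 0, []
    while total < 10:
        states.append((total, player))  # record state before the move
        total += rng.choice([1, 2])
        if total >= 10:
            return states, player       # the current player just won
        player = 1 - player             # switch turns

def generate_labeled_examples(n_games, seed=0):
    rng = random.Random(seed)
    examples = []
    for _ in range(n_games):
        states, winner = play_random_game(rng)
        for state in states:
            # Supervised pair: game state -> "did player 0 go on to win?"
            examples.append((state, winner == 0))
    return examples

data = generate_labeled_examples(100)
print(len(data), data[0])
```

A model trained on such pairs is still supervised learning; the "labels" simply come for free from the simulator's outcome rather than from a person.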

    • @buggaby9
      @buggaby9 5 years ago +2

      @@EricSiegelPredicts Thanks for the reply. I just read today about AlphaStar (an ML algorithm beating pros at Starcraft II), and work done competing pretty favourably, though losing, against a serious team of human DOTA players. Likely, it's not far off until pro teams lose to artificial DOTA players. While this is perhaps supervised, it's almost self-supervised. Is it useful to limit our discussion to only those problems that can be simulated? Anything that can be thought can be simulated, right? What's a simulation if not a computer's "mental" model? In which case, humans are using a sort of simulation every day to make decisions. It seems to me that any problem that can be well posed (e.g. make more stamps, maximize money on the stock market...) can be simulated, and if algorithms can learn faster than humans, they can take control from us.
      I quite agree that there are crossed lines in the debate that you noted in the video, specifically, how is it that a computer can be both super-intelligent and super goal-rigid (e.g. can't stop stamp-collecting).
      But there still seems to be a serious risk of too-advanced algorithms. Unless I'm missing something. Any thoughts? Thanks.

    • @EricSiegelPredicts
      @EricSiegelPredicts  5 years ago +5

      @@buggaby9 Yeah, you could think of any labeled problem as "simulated" -- but the point is, where do you get the labelled data if not from a simulation in the traditional sense?

    • @patham9
      @patham9 2 years ago +3

      @@buggaby9 Once learning speed comes closer to what we find in cognitively higher-developed animals, the data will simply come from real-time interaction instead of a simulation. We are not there yet, but some systems already have learning speeds superior to what you find in the ML mainstream, and the situation in the latter is improving too. Jürgen Schmidhuber has also built systems which learn by treating their model of the environment as a simulator; this way no human-provided simulation is necessary. AI is making good progress! :)

  • @EricSiegelPredicts
    @EricSiegelPredicts  5 years ago +5

    Thanks for watching The Dr. Data Show! To sign up for notifications of future episodes and for more info, see: www.TheDoctorDataShow.com.

  • @sergiolopez5316
    @sergiolopez5316 2 years ago +10

    Finally, someone that knows his shit and can explain it extremely well. Thanks!

  • @inthegaps
    @inthegaps 4 years ago +12

    My mother has been sending me TEDx talks full of bullshit futurism about AI, and I needed a concise explanation of the facts to give her and didn't want to write one myself. Many thanks to you for making this video, which covers everything I wanted to explain haha.

  • @linussutherland6624
    @linussutherland6624 3 months ago +1

    Well, after this video I am certainly hoping you are correct in what you say. As a self-acknowledged paranoiac (my own personal issue) I was concerned about some variety of robo-pocalypse, so I hope you're correct, because it removes one paranoid concern from my mental list.
    I find what you say about the non-linear nature of intelligence and the basic inability to really measure intelligence to be interesting. To my mind, the proverbial nail in the coffin of the idea of A.I. is learning how many of its believers use a line graph to illustrate intelligence. When I saw that a guy made a line graph where one of the points was literally "village idiot," I mentally face-palmed; we're really taking that guy seriously?
    I am Autistic, and that gives me additional doubts about the entire 'intelligence line graph' thing: if you are Autistic, or study the condition, or know enough people with it, you can see how the idea of intelligence as a linear thing is really misguided. There are some rather basic tasks I struggle with, some 'common sense' skills that I might never be able to acquire, and yet there are also things I excel at that are, at least to others, far more complex than the relatively simple things I struggle with or cannot do at all. I have had non-Autistic people be amazed by and full of praise for some intricate things I can do easily, and then they themselves can breeze through a simple task that baffles me.

    • @webgamer3587
      @webgamer3587 3 months ago

      Don't worry; "similar" does not mean "is." In fact, we should look away from "IT" and look at the entirety of human society and history. These electronic devices are actually insignificant; they are just for entertainment. The current popularity of "AI" is about making quick money from stock trading. That's it. In fact, the development of science and technology over the past 200 years or so has done more damage to the earth, mankind, the environment, etc. than it has contributed. Apart from giving people more hallucinations and dopamine, to put it bluntly, it has made no essential contribution.

  • @codyanderson7409
    @codyanderson7409 2 months ago +3

    They really shouldn't be getting away with all the hyping crap.😥

    • @HuguesBalzac
      @HuguesBalzac 2 months ago +2

      To be fair they had to prop the NASDAQ up somehow.

    • @EricSiegelPredicts
      @EricSiegelPredicts  a month ago

      Check out my latest Forbes article (04.10.2024), "Elon Musk Predicts Artificial General Intelligence In 2 Years. Here’s Why That’s Hype":
      IN FORBES: www.forbes.com/sites/ericsiegel/2024/04/10/artificial-general-intelligence-is-pure-hype/
      NARRATION: th-cam.com/video/UT3FgT7B8dI/w-d-xo.html

  • @niederrheiner8468
    @niederrheiner8468 3 years ago +7

    Best video about AI! It should have 6 million views, not 6 thousand!

  • @1Live2Love3Thrive
    @1Live2Love3Thrive 2 years ago +3

    Yes it is a marketing scam to get funding. Listen to Jaron Lanier talk about it.

  • @dason5408
    @dason5408 a year ago +5

    Great video. I'd love to see you make an update. Peace ✌

  • @webgamer3587
    @webgamer3587 3 months ago +1

    Regarding the definition of AGI:
    If AGI is defined as a "human-like brain," then the machine must first produce human-like self-awareness before AGI can appear. Otherwise, no matter how smart a parrot is, it will not work. The question is, how do you create self-awareness through "coding"? If the definition of AGI is "can answer with the highest-probability answer," then I think AGI has already been achieved. But is there any difference between that kind of AGI and ML? In essence, it still takes big data as input and outputs the answer with the largest weight.

    • @webgamer3587
      @webgamer3587 3 months ago

      In fact, the things we humans create are all illusions, not reality. If you want to create AGI, I think you must at least create self-awareness. But this cannot be done by a single discipline. In essence, IT technology is just a tool; it's not that fantastical. The so-called AI of recent decades just "simulates or imitates" the results or functions of certain intelligent activities of humans.

    • @EricSiegelPredicts
      @EricSiegelPredicts  3 months ago +3

      No, AGI is usually defined by the capability not by the inner working. It is defined as "capable of any [intellectual] task a human can do." It does not hinge on humanlike inner workings -- and certainly not on subjective concepts like self-awareness/consciousness. However, even with that in mind, my main point is this: We are not actively heading toward AGI, no way, no how.

  • @darshaim
    @darshaim 3 years ago +10

    TRUTH TRUTH TRUTH!

  • @fearsomefawkes6724
    @fearsomefawkes6724 9 days ago +1

    A thing you kind of started to address, but I don't think you quite named, is that even the world's foremost tech/machine learning experts have no clue what they're talking about when it comes to computers surpassing human intelligence, because THEY ARE NOT EXPERTS ON HUMAN INTELLIGENCE! Yes, Stephen Hawking was brilliant, but he was a brilliant physicist. He was not an expert in neurology, psychology, sociology, or any other -ology that studies humans. I get so frustrated listening to the tech bros and tech evangelists because they mistake their understanding of technology for an understanding of anything related to humans. People need to stop looking to Gates and Musk for an understanding of how close machine learning is to human intelligence and start looking to neurologists and psychologists for an understanding of how the human mind actually works.

  • @andr3970
    @andr3970 2 years ago +10

    You literally said the reasons I'd thought about for why AI is not possible. I guess I was right.
    I always thought people underestimated themselves by thinking that AI could be possible. Like, bro, duplicating your awareness, your consciousness of your existence and your surroundings, and your freedom to choose whatever you want to do or not, inside a machine, is very hard and probably not possible.
    Good video.

    • @allisRevealed987
      @allisRevealed987 a year ago +3

      There is no way a machine can choose between two things. Impossible. You need to tell it what to choose or what not to choose. So there is no intelligence there, only a programming language.

    • @natzos6372
      @natzos6372 4 months ago

      Free will is not scientifically proven. We can't assume that humans have free will, or that it is needed to be intelligent. He gave no reasons for anything in this video; he just made assumptions without basing them on anything.

  • @justaname999
    @justaname999 4 months ago

    Thank you!!
    I'm a statistician working with people from a variety of fields on comparative/evolution of human cognition, including linguistic communication, prosocial behavior, artistic expression, and the precursors or neural correlates that we share in some ways more or less with other species. And I am becoming increasingly annoyed by the blind faith in the concept of "AI" even these people have who really *should* know better. I really like the description of the human mind as amorphous and complex, and based on what I have learned about cognition so far, this is the great difference. The variability of human cognition and experience and how the pathways are formed that lead to our storage but also individual retrieval of knowledge and motion patterns, those are the parts that make human cognition different and even though LLMs like chatGPT can mimic some of that, it is not the same, nor will it ever be.
    Different domains of cognitive processing and talents are also important. There are people who struggle with reading or have never learned to read, which doesn't mean they cannot master driving a car, and vice versa. We have particular filters and abilities and preferences.
    It was truly quite uncomfortable to see so many people who I always thought of as having a healthy level of skepticism or level-headedness jump on this bandwagon, celebrate how "accurately" AI can represent them, and let chatGPT write their articles or videos for them. None of that was particularly surprising if you know the way LLMs work and the massive amount of data fed to them.
    PS: I am also all for the exploration of humanity and our inner values via sci-fi/robotics works. I do not really know anime, so I cannot judge the quality, but watched Pluto 2023 and enjoyed some of the concepts it explores.

    • @justaname999
      @justaname999 4 months ago

      As a counter-example to the labeled truck-data set: If they are interested in trucks, a human child needs just a couple of exemplars of "truck" or "car" to learn to generalize to a large group of vehicles, and to then just as quickly develop knowledge of the particular subcategories of trucks. My son's classification might have been imperfect but at 14 months it was well established along with a few hundred other concepts. This is something that is really difficult to model because it involves a bunch of factors we are not good at modeling.

  • @pjth3g0dx
    @pjth3g0dx a year ago +1

    How do you feel about GPT4?

    • @marcinkepski4977
      @marcinkepski4977 2 months ago

      ChatGPT is not AI. It's an LLM... still stupid, like 5, 6, 7, 8, 9, 10...

  • @EricSiegelPredicts
    @EricSiegelPredicts  2 years ago +3

    Hi viewers, this is Dr. Data. Richard Heimann’s new book, "Doing AI," conveys some similar sentiment about AI. It takes on the problems with “AI” as a brand with a style so crisp, clear, and unique, it just pops off the page. He surveys the litany of troublemakers who’ve misguided the world with AI mythology, but then greets this mishap with the ultimate business-savvy antidote: how to effectively identify and solve real-world problems. His book will repeatedly make you go “hmm!” as it overhauls your thinking about AI, machine learning, and problem-solving in general. www.amazon.com/Doing-AI-Business-Centric-Examination-Culture/dp/1953295738/

    • @shreyvaghela3963
      @shreyvaghela3963 2 years ago +2

      You should make more videos with this kind of content, so the masses get some sense. I am not kidding.

    • @EricSiegelPredicts
      @EricSiegelPredicts  2 years ago +1

      @@shreyvaghela3963 Thanks. I did make a newer three-part series which I believe strengthens the points: th-cam.com/play/PLdJkca7Mgj950uQ0llpCEmHRueWqs4094.html

  • @EricSiegelPredicts
    @EricSiegelPredicts  11 months ago +5

    My latest take just dropped in Harvard Business Review: "The AI Hype Cycle Is Distracting Companies" - hbr.org/2023/06/the-ai-hype-cycle-is-distracting-companies

    • @vitalyl1327
      @vitalyl1327 11 months ago +1

      I'd posit there's some good coming from the AI bubble: the arms race among tensor-accelerator vendors, and the unavoidably tanking prices of powerful accelerators thanks to it. And they can be used for all sorts of things far more valuable than parrot LLMs. So ML in general can benefit a lot from all this BS.

    • @psilocybemusashi
      @psilocybemusashi 9 months ago +2

      You are so correct, yet if you search YouTube there are so FEW videos about this. Thank you, sir.

    • @EricSiegelPredicts
      @EricSiegelPredicts  9 months ago +1

      @@vitalyl1327 That's an ironic yet interesting take. I suppose that's better than how war advances industries. Misleading hype isn't as bad as the horrors of war.

  • @drumsofspace
    @drumsofspace 4 months ago

    One true way to test it (IMHO) is to just let it be and see if it does anything without any interaction,
    i.e. without any prompts, does it give any inkling of acting on its own (I suppose curiosity)?
    In other words, does it ONLY react, or does it act?

  • @someonebackslashevry
    @someonebackslashevry 4 years ago +2

    One thing I still don't quite understand is why machines wouldn't be able to have a "consciousness." If a machine achieved "consciousness," wouldn't it be an A.I.? I understand that A.I. isn't a clear term, and is mostly based on interpretations, but then what would be a better term for a program that has achieved self-awareness?

    • @EricSiegelPredicts
      @EricSiegelPredicts  4 years ago +7

      My position is that consciousness is at least as subjective a goal as intelligence - probably more so. If there is no objective benchmark with which to evaluate the thing you’re trying to build, how can you keep the production going in the right direction and how could you know if and when you've successfully built it?

    • @someonebackslashevry
      @someonebackslashevry 4 years ago +1

      @@EricSiegelPredicts well, wouldn't the benchmark be something similar to independence? If it can do tasks without needing to be "told" to do them, wouldn't that be independence, and therefore consciousness?
      Thanks for the reply, by the way!!

    • @EricSiegelPredicts
      @EricSiegelPredicts  4 years ago +5

      @@someonebackslashevry Your suggestion implies a quantitative measure of evaluation: Make a list of tasks and measure how well it accomplishes all or some of them. That would firm up your idea into a specific performance measure. So, my [philosophical] question back to you would be, if you specify such a measure -- specify the list of tasks, etc. -- and then it scores well, is it conscious? Certainly the answer depends in part on the measure as you define it. But even if you want to philosophically argue the answer is "yes," I'm not sure I see how that makes the "AI" work any better...
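
      The "firm your idea up into a specific performance measure" point amounts to something like this hypothetical sketch (the task names and scoring rule here are invented for illustration): once you commit to a concrete task list and scorer you have an engineering benchmark, but the resulting number says nothing about inner experience.

```python
# Hypothetical sketch: turning "independence" into a measurable
# benchmark. The tasks and the trivial scorer are made up; the point
# is that the measure is just a number once it is specified.

def evaluate(system, tasks):
    """Score a system as the fraction of tasks it completes."""
    passed = sum(1 for task in tasks if system(task))
    return passed / len(tasks)

tasks = ["schedule a meeting", "summarize a report", "plan a trip"]

# A trivial "system" that claims success on everything scores a
# perfect 1.0 -- yet the score alone establishes nothing about
# consciousness, only performance on the chosen task list.
score = evaluate(lambda task: True, tasks)
print(score)  # 1.0
```

Whatever tasks you pick, the benchmark only measures what it measures; calling a high score "consciousness" is a definitional choice, not an empirical finding.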

    • @someonebackslashevry
      @someonebackslashevry 4 years ago +2

      @@EricSiegelPredicts Thank you very much for answering my question! I really appreciate you taking the time for that. It did help me understand the problem with A.I.!

    • @someonebackslashevry
      @someonebackslashevry 3 years ago +1

      @Jesus Christ I don't think that would work, as it is a very open field. Does the code I wrote count as conscious, because I messed up and it isn't doing what I tell it to do?

  • @alihamdar5916
    @alihamdar5916 a year ago +3

    Finally, I found someone agreeing with me because of the facts I know. I'm a programmer and I know a good amount about this topic, and it was clear to me: it's called artificial intelligence, but it's not intelligence at all.

    • @allisRevealed987
      @allisRevealed987 a year ago +2

      we don't know shit about intelligence, forget about the artificial

    • @alihamdar5916
      @alihamdar5916 a year ago +2

      @@allisRevealed987 but we know it's not intelligence for sure, whatever intelligence is

    • @allisRevealed987
      @allisRevealed987 a year ago +2

      @@alihamdar5916 Yeah, it's not intelligence that's there. It's programming.

  • @pomodorostudyclub
    @pomodorostudyclub a year ago +2

    Hello Mr. Siegel, love the video. Any thoughts on the latest advancements in AI? Would you consider making an updated version of this video? There is so much doomsday rhetoric out there and we need sober voices like yours.

    • @EricSiegelPredicts
      @EricSiegelPredicts  a year ago +2

      Thanks! I've got a draft article yet to be published. GenAI is amazing, mind-blowing -- and yet still not evidence that we are moving toward AGI...

    • @pomodorostudyclub
      @pomodorostudyclub a year ago +2

      @@EricSiegelPredicts thank you for replying! I’ll stay posted for that article :-)

  • @THE-SHOCKMASTER
    @THE-SHOCKMASTER 6 months ago +1

    The real question people should be asking is… could it be possible to combine an actual human brain with a computer? And if that becomes a reality in the future, would it be considered A.I., or what? Neuralink worries me.

  • @swavekbu4959
    @swavekbu4959 10 months ago +1

    "Artificial intelligence" is what we tell the public to sell tickets for what is otherwise mathematics and logic.

  • @anttam117
    @anttam117 3 years ago +9

    Thanks for this episode, Eric. I am a writer and architect well steeped in the humanities, so I don't have much of the technical engineering and science background to back up my intuitions in such a way that folks will take my opinion on this topic seriously, but you just said all the things I've been talking about among friends and strangers in the last few years.
    I'm not surprised that people like Elon Musk and Stephen Hawking believe this wet dream. Folks like to forget that men and women of genius also say, and do, all kinds of stupid things (Musk is a king at that); it's just that they have the pedigree and eloquence to be more articulate about their moonshine than random street crazies.
    Sometimes I get the impression that people want to believe in science-fiction-styled AI out of a mere need to believe in something greater than themselves. Living in modern secular societies, where even the awe and wonder of Nature has been rationalized into "resources for consumption" and where any subtle sense of the spiritual is mocked or persecuted, it seems that people are trapped into looking for a god made of mechanical parts.

    • @DGill48 · 3 years ago · +1

      I'm certain that those "people like" don't believe that AI exists. They are cautioning us to avoid the creation of a machine that duplicates human self-awareness coupled with the storage power and computing speed of a computer.

    • @allisRevealed987 · 1 year ago · +2

      @@DGill48 What are you talking about? Go to Hollywood.

  • @EricSiegelPredicts · 3 years ago · +2

    Hi, this is Eric Siegel from the video. In my new Coursera course series (www.machinelearning.courses), I have refined and expanded parts of my line of reasoning. Here's direct access to the three Coursera videos on this same topic: www.coursera.org/lecture/the-power-of-machine-learning/why-machine-learning-isn-t-becoming-superintelligent-qPryq? and www.coursera.org/lecture/the-power-of-machine-learning/dismantling-the-logical-fallacy-that-is-ai-dmBTJ and www.coursera.org/lecture/the-power-of-machine-learning/why-legitimizing-ai-as-a-field-incurs-great-cost-35zh1

    • @EricSiegelPredicts · 3 years ago · +1

      Oops, I see those links are broken. Skynet is obviously working against us. Sorry about that. Stay tuned or just go ahead to Course 1 (auditable for free), Module 4’s sequence of three videos on “AI” - www.machinelearning.courses

  • @innerestless · 6 months ago · +4

    Great vid, prescient in the current world of AI mania. I manage a small team of developers and I see and experience the limitations of code daily. Machine learning is great, but this odd cult-like fascination with end times due to AI superintelligence is bananas. More of us need to be watching videos like this one to better understand the technology that drives "AI". Thank you for this work, very helpful.

  • @lorensims4846 · 5 days ago · +1

    I first heard about "Artificial Intelligence" in the early '70s when the experts said it was still at least five years in the future.
    Ever since, nothing has changed. A.I. is still at least five years in the future and whatever we have now is NOT "A.I."
    To me, it all still looks like ELIZA. Just "clever programming."
    Computers just can't deal with "edge cases."
    They used to say with enough computing power it would "just happen."
    Then they said when enough networks got complicated enough it would "just happen."
    By this metric the Internet would automatically be "intelligent." This is CLEARLY not true.

    • @OrbitTheSun · 3 days ago

      AGI is already here. You just haven't realized it yet.

  • @comarius100 · 3 years ago · +6

    All well said. Great presentation.

  • @campbellgriffin6396 · 1 year ago

    Super unrelated, but when you kept making jokes I expected a laugh track; I'm not sure why. And when one didn't happen, my brain wanted to explode.

  • @g.v.6450 · 4 months ago

    I was somewhat worried when Saudi Arabia made Sophia a citizen. That could have given them a legal basis for not letting “her” leave the country. I hope someone considered that.

  • @johnshortridge · 4 months ago · +1

    I'm not so worried about the AI. I'm more worried about the business guys who want to lay off people to automate and save a buck. That path leads to economic disaster with this tool. But corporations don't care about workers, or that workers are the source of revenue. That said, it would be a better world to have AI "help" our management and owners get replaced with AI.

    • @kyriosity-at-github · 3 months ago

      It now works the opposite way: if you want to lay off staff (e.g. to swap in younger people later), you have a great excuse with AI, which is going to "replace them" anyway.

  • @DanzIndz · 2 days ago

    Watching this after watching OpenAI's new GPT demo really drives home the point about "better at defining objects".

  • @beavisandbutt-head5363 · 4 months ago · +1

    17:24 Man is the lowest cost, 150 pound, nonlinear, all-purpose computer system which can be mass-produced with unskilled labor.

  • @EricSiegelPredicts · 1 month ago · +2

    Check out my latest Forbes article (04.10.2024), "Elon Musk Predicts Artificial General Intelligence In 2 Years. Here’s Why That’s Hype":
    IN FORBES: www.forbes.com/sites/ericsiegel/2024/04/10/artificial-general-intelligence-is-pure-hype/
    NARRATION: th-cam.com/video/UT3FgT7B8dI/w-d-xo.html

  • @EricSiegelPredicts · 5 years ago · +1

    Folks, you can also view and share on Facebook: facebook.com/pawcon/posts/1935597583161776

  • @zion6680 · 11 months ago · +2

    Could you do an update on this topic?
    I still feel like most AI projects are between 40% and 60% Mechanical Turk, and everyone is eating it raw because consumerism, lmao.

    • @EricSiegelPredicts · 11 months ago

      Thanks for checking. The first 3 short videos at mlparadox.com are an updated take -- I argue the point a bit more substantively -- and very recently I published in Harvard Business Review: hbr.org/2023/06/the-ai-hype-cycle-is-distracting-companies

    • @zion6680 · 11 months ago

      @@EricSiegelPredicts Thank you!

    • @zion6680 · 11 months ago · +2

      @@EricSiegelPredicts So I'm reading it, and I'm already feeling much happier seeing ML used instead of AI.
      For some reason that really does seem way more appropriate for the technology.
      Machine learning feels industrial and computer-esque; it mass-produces a behavior. AI sounds like it would be autonomous and not co-dependent on training or mimicry.

  • @SquareWaveHeaven · 3 days ago

    AI isn't just a term used to mislead, but also one used to demean, belittle, and threaten the human spirit.

  • @JustinMumma · 14 days ago

    Maybe the reason OpenAI, Microsoft, and the other big guys want to convince us that AI is an existential threat is so that strict laws and regulatory red tape will make it expensive and difficult for new small AI projects to get approval (to compete with OpenAI), and almost impossible for open-source projects to exist, because they don't generate profit and would be liable for anything anyone else uses them for.

  • @Dina_tankar_mina_ord · 5 months ago · +1

    I just saw this title and listened for 1.3 minutes, so my question might be way off from the actual content. But how well have these statements or predictions matured into today's AI capabilities?

    • @Jcossette1 · 5 months ago · +3

      Like fine milk

  • @AutMouseLabs · 1 month ago · +2

    For what it's worth, I am not afraid of machine learning. I am afraid of this tool being controlled by tech bros, who have shown themselves to be anti-democratic and incredibly short-sighted over and over.

    • @glenyoung1809 · 26 days ago

      Too many "AI accelerationists" and fanboys overlook this weakness, or don't care, as it seems a minor thing compared to the Star Trek future they think we're on our way to.
      They forget that whoever controls this technology will have enormous power over the information economy and society.
      Already some governments have stated their intention to use AI technology to "combat misinformation" and monitor social media to block the spread of what they term disinformation; Canada under Trudeau has put $2.2 billion toward implementing such a program. Combine that with an "online harms law" that will imprison anyone found guilty of "hate speech" for life, and this isn't a joke either.

  • @benwalker4660 · 2 years ago · +4

    Yes, it is a big fat lie. Just clever databases that are very interactive; cognitive AI is a myth.
    It's used to fool the unwary into thinking AI is super smart, when ultimately it's limited by its input.
    Humans design AI (I'll give them credit for that), and the products are just that: limited by 'design' scope.

  • @mrd6869 · 5 days ago · +1

    Ahem... I'll come back to this video and chuckle after we achieve artificial general intelligence in the next two years. It's gonna be funny 😅

    • @markjackson1989 · 2 days ago

      I am willing to bet that if and when that happens, the argument will be that the AGI is invalid because it was achieved the "wrong way", aka via machine learning. I think that is what people are missing. They seem to think the very fact that it's text prediction disqualifies it from ever being legitimate. I think a lot of skeptics are in for a rude awakening in 2025.
      Or maybe they're right. It'll plateau next year or the year after that, with fundamental flaws, stemming from its being merely text prediction, so deep that progress will fizzle out.

  • @haros2868 · 7 months ago · +1

    Such an underrated video! Especially compared to today's ML-generated (not AI-generated) video scripts. If you think about it, was the ENIAC computer more intelligent than a human? It was much faster in specific respects such as addition and multiplication, and these "AI"s nowadays do the same thing across a wider range of tasks, with absolutely no common sense.

  • @modernwizzard · 5 days ago · +4

    Who is watching this in 2024, after the release of GPT-4o? Wanna hear your thoughts.

    • @OrbitTheSun · 5 days ago · +2

      We have left the stage of Machine Learning behind us and are now moving into the age of Artificial Intelligence, in the sense that is denied in this video.

    • @modernwizzard · 5 days ago · +1

      @@OrbitTheSun yep

    • @markjackson1989 · 2 days ago · +1

      @@modernwizzard I feel like people are more worried about "how" the logic is being generated than about the resulting output. Based on what I am hearing, language models are either about to fizzle out and die, or achieve AGI. Nobody is gonna be concerned that the machine doesn't use "real" logic; they care about the resulting output. I have a feeling these things are gonna be solving complex calculus and physics problems, and the next argument will be "It got the right answer the wrong way, so it's illegitimate."

    • @modernwizzard · 2 days ago

      @@markjackson1989 Yes, the output matters the most. Most of the operations happening inside our subconscious mind are not fully understood, and yet they influence our thought process a lot. In the end, we are able to complete actions in this world even though the workings of our mind are not yet fully understood. The same logic applies to these models.

  • @navigatingel6104 · 3 years ago · +3

    Only 6k views on this video? Thanks, YouTube.

  • @carimbo8604 · 8 months ago · +2

    It has been a while since I have heard such an amusing and interesting exercise in free thinking. Congrats on the courage and the well-tailored video!

  • @brianhopson2072 · 18 days ago · +1

    Thank you. I am glad I'm not the only one who sees AI as the marketable technology term that it is. Thank you for telling the truth and sharing. Now I'm going back into my hoard of machine learning tools.

  • @SaphreCoalwolf · 2 years ago · +6

    It's like believing magicians are doing real magic

  • @nataliedesenhacoisas541 · 1 year ago · +1

    Probably not seeing that it has nothing to do with AI, but more with why doom-and-gloom AI content is big on YouTube. For me it's because I tend to read "AI" as meaning "here's how your job or something you love is going to get automated whether you want it to be or not", or (and this is a new one) "here's something some random asshole can use to make your entire community hate you by making fake video and audio of you saying or doing something awful."
    Hopefully this made sense.

  • @dallassegno · 28 days ago · +2

    Remember how everyone thought the internet was no big deal? Contrast that with AI being the most amazing, greatest thingy ever, or whatever.

    • @benjaminkemper5876 · 23 days ago

      Who thought the internet was no big deal, though?! That is not how that went. Clearly.

  • @X1Y0Z0 · 4 months ago · +1

    Love your content!
    Thanks for this presentation.

  • @allesok7499 · 4 months ago · +1

    The problem is not machines becoming more "intelligent" but people becoming more and more stupid.

  • @MoonyMercuryBaby · 1 year ago · +2

    Good job, I don't like the fearmongering they've been doing lately with AI.

    • @mymixedbiscuit9159 · 1 year ago · +2

      This vid is completely outdated. He also said AI will never talk like a human... ChatGPT, anyone?

    • @EricSiegelPredicts · 1 year ago · +4

      @@mymixedbiscuit9159 Viewers, note that I have a dialogue on this point from Mixed Biscuit within the thread here started by Mark Walters -- go there to see my response.

  • @kalliste23 · 25 days ago · +1

    If we're lucky, LLMs have given us an insight into how one piece of the human mind engages with phenomenal reality. It's only a piece, if it is indeed relevant, and it's very much a part of a whole that's greater than the sum of its parts.

  • @tomkarnes69 · 7 months ago

    The robot dog that just kicked down your door, with a submachine gun mounted on its back and operated by joystick: that is the robot apocalypse.