Generative AI is not the panacea we’ve been promised | Eric Siegel for Big Think+

  • Published on Dec 25, 2024

Comments • 1.3K

  • @tradecraft_fm
    @tradecraft_fm 4 หลายเดือนก่อน +498

    Sitting in front of the white backdrop, but in a zoomed-out shot, is hilarious to me for some reason.

    • @clray123
      @clray123 4 หลายเดือนก่อน +20

      They made him sit on it in order to keep dirt from his shoes and spit from his mouth from getting on the elegant floor.

    • @mpprof9769
      @mpprof9769 3 หลายเดือนก่อน +27

      It's nonsensical, as if it is supposed to make it look more "authentic".

    • @Aleph_Null_Audio
      @Aleph_Null_Audio 3 หลายเดือนก่อน +14

      ​@@mpprof9769 - they should lower the boom mic into the shot or have him hold the lav mic! That'd be authentic too! 😋

    • @takeuchi5760
      @takeuchi5760 3 หลายเดือนก่อน +3

      maybe it's for clarity

    • @san2tube
      @san2tube 2 หลายเดือนก่อน +15

      It's a great setting! Witty idea with the C-stand and the backdrop in a beautiful wooden, retro-looking room. Good for clarity, yes - but it also symbolizes the difference between an artificial and an authentic environment.

  • @Joe29587
    @Joe29587 4 หลายเดือนก่อน +1011

    This video is completely biased, given that the guy is the co-founder of a predictive AI company.

    • @vidak92
      @vidak92 4 หลายเดือนก่อน +143

      Yeah but it's also completely true

    • @magnetsec
      @magnetsec 4 หลายเดือนก่อน +18

      bro really just said y = wx + b, where b = +infinity for predictive AI

    • @MoreFootWork
      @MoreFootWork 4 หลายเดือนก่อน +4

      BT is sht

    • @luizgustavoarantes
      @luizgustavoarantes 4 หลายเดือนก่อน +37

      he literally said that most of it is hype lol

    • @therainman7777
      @therainman7777 4 หลายเดือนก่อน +3

      Yeah, what a joke.

  • @gummylens5465
    @gummylens5465 4 หลายเดือนก่อน +753

    "I want AI to do my laundry and dishes so that I can do art and writing,
    not for AI to do my art and writing so that I can do my laundry and dishes."

    • @TomatoTomas20
      @TomatoTomas20 4 หลายเดือนก่อน +11

      Haha you’re funny 😂

    • @onlyrick
      @onlyrick 4 หลายเดือนก่อน +7

      @gummylens5465 - A most excellent observation! Be Cool.

    • @61zu
      @61zu 4 หลายเดือนก่อน +28

      I want AI to do art and writing and laundry and dishes and everything else that I don't want to do.

    • @SOURCEw00t
      @SOURCEw00t 4 หลายเดือนก่อน +16

      You're better off doing your dishes yourself. It can be therapeutic and help you be more creative with your art and writing.

    • @ianmatejka3533
      @ianmatejka3533 4 หลายเดือนก่อน +8

      It’s not that simple.
      Doing the dishes is an incredibly hard task from an AI standpoint.
      First you need mechanical robotics with a sufficient level of balance and dexterity. This is incredibly difficult; the proof is that we’ve worked on robotics for decades, yet we still don’t have a single commercial general-purpose robot.
      In conjunction with a general-purpose robot, you need a sufficiently advanced computer vision system to allow the robot to perceive the environment. Tesla has been trying to implement self-driving cars for years, yet that last little bit of precision is difficult to achieve.
      On top of computer vision, you’ll need some form of general intelligence that is capable of planning to a degree. In my opinion ChatGPT is good enough for this task, but you’ll still need to embody it with RAG and a long-term memory system.
      The point is, generative AI is a stepping stone to the kind of AI we all want. We cannot achieve robots that are capable of general tasks without sufficient advancements in computer vision.

  • @crazycool1128
    @crazycool1128 4 หลายเดือนก่อน +525

    The problem started when they rebranded machine learning as AI

    • @rocketman-766
      @rocketman-766 4 หลายเดือนก่อน +24

      Ikr, at some point every machine learning thing got rebranded as AI.
      "We use AI to reduce noise in your vocal recording" sounds fancier than "We use a machine learning algorithm to reduce noise in your vocal recording".

    • @matheussanthiago9685
      @matheussanthiago9685 4 หลายเดือนก่อน +42

      "The biggest trick linear algebra ever pulled was convincing people it was ever 'intelligent' "

    • @martiendejong8857
      @martiendejong8857 4 หลายเดือนก่อน +8

      But then what is AI?

    • @nickgirdwood3082
      @nickgirdwood3082 4 หลายเดือนก่อน +19

      The problem is you don't understand what you're talking about.

    • @wanderlust0120
      @wanderlust0120 4 หลายเดือนก่อน +3

      Idk why this is not discussed more. The most devious rebranding exercise in history.

  • @AdiSadalage
    @AdiSadalage 4 หลายเดือนก่อน +599

    "Predictive AI", or previously branded as "Big data analytics"🤦‍♂

    • @mrtienphysics666
      @mrtienphysics666 4 หลายเดือนก่อน +55

      or previously Data Mining

    • @bobsavage3317
      @bobsavage3317 4 หลายเดือนก่อน +79

      @@mrtienphysics666 or "Applied Statistics"

    • @emperorpalpatine6080
      @emperorpalpatine6080 4 หลายเดือนก่อน +28

      Also known as interpolating between some values

    • @luisdiegocr
      @luisdiegocr 4 หลายเดือนก่อน +23

      yes, marketing/sales people have to sell the hype, it is their job to exaggerate.

    • @josephyeung2606
      @josephyeung2606 4 หลายเดือนก่อน +11

      or Number Crunching

  • @HollywoodCameraWork
    @HollywoodCameraWork 4 หลายเดือนก่อน +91

    Lots of comments here seem to think that AI will improve linearly towards AGI, but this won't happen without some new fundamental discovery yet to be had, which could happen in 5 days or 50 years. LLMs and diffusion models are maxing out, and have nearly stopped improving, even with trillions of training examples. Yet, a small child can learn from a single example, which shows that our brains' architecture is different in critical ways. Kudos to humanity for discovering a small piece of the puzzle, but that's all it is. All our work is in front of us, and it's not linear. We're on the plateau now.

    • @SEALCOOL13
      @SEALCOOL13 4 หลายเดือนก่อน +5

      Bro this reads like a speech the leader of the resistance gives before going on a last-ditch effort against the machines in a sci-fi post-apocalyptic movie. Particularly the last line has such a 'Adam-McKay-movie-ending-about-how-some-global-tool-was-mishandled' type shit

    • @ПавелКовалёв-с5ь
      @ПавелКовалёв-с5ь 3 หลายเดือนก่อน +1

      They think improvement is not linear but exponential!)

    • @squamish4244
      @squamish4244 2 หลายเดือนก่อน

      Most researchers, though, do not say that LLMs are the path to AGI. They say it's a major tool that will drastically change society, but it won't produce AGI. Google produced Gemini basically to remind people that it is still an AI company, and Gemini is very good at the benchmarks, but DeepMind's focus is RL and, right now, improving its medical platforms. It doesn't need to focus on investors and hype because it's Google; it already owns the world.

    • @hkart2
      @hkart2 2 หลายเดือนก่อน

      I agree that there's still a lot to uncover on the path to truly general AI. Regarding the analogy with child learning and the human brain, I think our brains don't learn things from scratch; it's more like we come equipped with a built-in, pre-trained model, and our experiences and learning serve as the fine-tuning process.

    • @softwarerevolutions
      @softwarerevolutions 2 หลายเดือนก่อน

      You just predicted Demis Hassabis's words, spoken only yesterday.

  • @johndhoward
    @johndhoward 4 หลายเดือนก่อน +613

    This is a nice commercial for his products, but it's interesting that he criticized GenAI for not getting it right every time. Predictive AI doesn't either; its predictions are self-evidently best guesses too. The difference isn't capability, it's expectations.

    • @shivangchaturvedi237
      @shivangchaturvedi237 4 หลายเดือนก่อน +14

      I agree!

    • @AndersRosendalBJJ
      @AndersRosendalBJJ 4 หลายเดือนก่อน +28

      Yeah he seems very untruthful

    • @bsmithhammer
      @bsmithhammer 4 หลายเดือนก่อน +10

      And it doesn't help when those expectations are grossly distorted by the media, pundits and advertisers.

    • @geoffwatches
      @geoffwatches 4 หลายเดือนก่อน +14

      I mean, predictions are inherently fallible and everyone knows this. Whereas the other day ChatGPT (which we expect to be pretty accurate) did a sum for me which was wrong; I asked it to recheck and it was like, oh sorry, I was wrong. Bizarre.

    • @thereal415er
      @thereal415er 4 หลายเดือนก่อน +35

      He literally just said that in this video. Did you not watch the whole video? It seems from this comment that you rushed to put in your two cents before actually seeing the entire video. He clearly mentioned that predictive AI gets things wrong, but the benefit of it in certain contexts, like that of UPS, outweighs the downsides to the tune of 350 million dollars saved per year. LISTEN.

  • @ChefAndyLunique
    @ChefAndyLunique 4 หลายเดือนก่อน +141

    He hasn’t given me confidence in the idea that AI isn’t going to completely change our lives.

    • @katehamilton7240
      @katehamilton7240 4 หลายเดือนก่อน +29

      Relax. Computers are limited because maths is limited. There are also physical limits (entropy, energy). AGI is a pipe dream.

    • @danieljames4050
      @danieljames4050 4 หลายเดือนก่อน

      It already did change our lives ten years ago when social media giants utilised it to hijack our attention.

    • @munarong
      @munarong 4 หลายเดือนก่อน +12

      For them, the upper class, yes, there will be less change; for normal people, lower middle class and down, I believe it will change a lot. I myself lost my job partially because of AI. AI in, human out. (No joke.)

    • @HouseRavensong
      @HouseRavensong 4 หลายเดือนก่อน +2

      When it comes to the economy, the Fed is always the last to know. They are like a Victorian detective who explains the crime days after it happened.

    • @josersleal
      @josersleal 4 หลายเดือนก่อน +1

      @@munarong How did you lose your job? Incompetence?

  • @tonyxavier6509
    @tonyxavier6509 4 หลายเดือนก่อน +251

    If you hear "this is just the beginning" so frequently, it's highly likely that you are hearing about a tech bubble.

    • @DaleIsWigging
      @DaleIsWigging 4 หลายเดือนก่อน +22

      The bubble is the investors, not the tech. There is no such thing as an open source bubble.
      They are more like bricks that never get used to build a house but might get repurposed later.

    • @matt.stevick
      @matt.stevick 3 หลายเดือนก่อน +5

      remember this comment reply:
      you hear it frequently because it is true.
      it is the early dawn of a new technology revolution.

    • @c.t.nelson4314
      @c.t.nelson4314 3 หลายเดือนก่อน

      🤮​@@matt.stevick

    • @squamish4244
      @squamish4244 2 หลายเดือนก่อน +1

      @@DaleIsWigging Indeed. Only a handful of actual researchers in AI are calling LLMs the road ahead. They agree it is an important addition, but not the final leap. Investors have created a narrative about a tech bubble that wasn't there before ChatGPT less than two years ago, and critics have latched onto it to call the whole field a bubble. Also, a lot of laypeople like myself now think LLMs = AI and it's not their fault because nobody in the propaganda machine either for or against has told them differently.

    • @memegazer
      @memegazer 2 หลายเดือนก่อน +4

      If you look at tech bubbles from the past
      they have transformed society in ways many did imagine beforehand

  • @myleshungerford7784
    @myleshungerford7784 4 หลายเดือนก่อน +192

    Generative AI as good for "first drafts" is a pretty good description of where we're at now. But as the old saying goes, "this is the worst it's ever going to be."

    • @rahulbhatia4775
      @rahulbhatia4775 4 หลายเดือนก่อน

      Really bro, AI can't understand anything. Its human-like responses are created by designers and programmers. It doesn't understand anything and maybe never will. Cryptocurrency was supposed to be the next thing after money, right? Did it take off? Countless businesses have professed revolutions, but most have failed.

    • @cactusdoodle8619
      @cactusdoodle8619 4 หลายเดือนก่อน +9

      But is there still that fundamental flaw in LLMs that will never allow the "100%" that is needed for AGI? I totally agree with where I think you're going, that it's going to improve and improve from here on out, but I have a feeling there will have to be some kind of MAJOR shake-up before another huge leap like the one we all felt in the early months of 2023.

    • @myleshungerford7784
      @myleshungerford7784 4 หลายเดือนก่อน +5

      @@cactusdoodle8619 Honestly I don’t know. AGI is beyond my expertise. But as someone who builds predictive models, I can confirm generative AI hasn’t helped with it. For writing and debugging code, however, it speeds things up immensely. Incremental improvements on that are a big deal even if we don’t get AGI, which would change everything.

    • @cactusdoodle8619
      @cactusdoodle8619 4 หลายเดือนก่อน

      @@myleshungerford7784 Yeah, for sure. I’m using ChatGPT as a programming buddy; it’s allowing me to get things done that would otherwise take weeks to figure out. I can’t quite say it serves as a mentor because I still need to know jussssssst enough to catch it when it misunderstands something. But the pace I’m working at with it is letting me really learn some amazing things. It blows my mind the most when it comes to brainstorming up functions and test cases.

    • @joseahi349
      @joseahi349 4 หลายเดือนก่อน +11

      For me, AI is currently a tool that helps me a lot with coding, meaning I can develop software that helps with my repetitive job. But I have to find patterns in my tasks to make this useful. In addition to that, I try it for consultancy... and in this area it fails a lot. What is worse, it gives you confident answers that are false; it makes things up as if it were trying to please you rather than answer correctly.

  • @litpapi1849
    @litpapi1849 4 หลายเดือนก่อน +125

    Generative AI is literally built on predictive models, using them to create new content by predicting the next element in a sequence. It’s an evolution of predictive AI. It’s just predictive AI doing something more creative. So saying predictive AI is better makes no sense lol

    • @RadiantNij
      @RadiantNij 4 หลายเดือนก่อน +4

      My thought exactly. Gen AI uses hypothetical situations (in a sense, hallucinating with a purpose) plus reasoning to know what to predict. I think arguing for a simpler predictive system makes more sense when you are talking about security and reliability. In such a case I think more sophisticated expert systems should be in play.

    • @ultramegax
      @ultramegax 4 หลายเดือนก่อน +6

      Yep, I agree. His argument makes little sense, if you have any understanding of how ChatGPT and its ilk work.

    • @JohnDoe-my5ip
      @JohnDoe-my5ip 4 หลายเดือนก่อน +5

      It’s predictive in the same way that autocorrect is predictive. It is a fundamentally random process, like a Markov chain. This is so fundamentally different from how ML/predictive analytics works, that it’s almost the dual of genAI.

    • @litpapi1849
      @litpapi1849 4 หลายเดือนก่อน +3

      Generative AI might introduce elements that seem random, like autocorrect or a Markov chain, but it’s not purely random. It’s guided by probabilistic models, which are structured forms of prediction. While there are differences, I wouldn’t call them ‘dual’ systems; generative AI builds on predictive principles, taking them in a more creative direction rather than opposing them.
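
      To make the "guided by probabilistic models" point in the reply above concrete, here is a minimal toy bigram model in Python. The tiny corpus, the function names, and the starting word are invented for illustration; a real LLM swaps the frequency table for a neural network over a learned embedding space, but the generation loop is the same shape: predict a distribution over the next token, sample from it, repeat.

      import random
      from collections import defaultdict, Counter

      def train_bigram(corpus):
          """Count how often each word follows each other word."""
          counts = defaultdict(Counter)
          for sentence in corpus:
              words = sentence.split()
              for prev, nxt in zip(words, words[1:]):
                  counts[prev][nxt] += 1
          return counts

      def generate(counts, start, length=8):
          """Repeatedly sample the next word from the learned distribution."""
          word, out = start, [start]
          for _ in range(length):
              if word not in counts:
                  break
              candidates, freqs = zip(*counts[word].items())
              # Weighted sampling: probabilistic, but not arbitrary.
              word = random.choices(candidates, weights=freqs)[0]
              out.append(word)
          return " ".join(out)

      corpus = [
          "predictive ai scores individual cases",
          "generative ai predicts the next word",
          "the next word is sampled from a distribution",
      ]
      print(generate(train_bigram(corpus), "the"))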

    • @animusadvertere3371
      @animusadvertere3371 4 หลายเดือนก่อน +1

      Not

  • @acb78
    @acb78 4 หลายเดือนก่อน +88

    Wow. Steve Martin really can do anything.

    • @EricSiegelPredicts
      @EricSiegelPredicts 4 หลายเดือนก่อน +9

      I'm the guy in the video. You're the second comment comparing me to Steve Martin. I grew up quoting him and will be passing your comment along to my old friends and family! :)

    • @TigreArcana
      @TigreArcana 4 หลายเดือนก่อน +2

      @@EricSiegelPredicts I was thinking an older Ryan Reynolds haha

    • @EricSiegelPredicts
      @EricSiegelPredicts 4 หลายเดือนก่อน

      @@TigreArcana Haha. Thanks, but that might be bad news. I'm a big fan of his, and I think he's good-looking, but strangely my wife has told me that she doesn't like his looks! 🤯

    • @WilliamStansbury-xb4ui
      @WilliamStansbury-xb4ui 3 หลายเดือนก่อน +1

      I'm pretty sure that's Matt Damon. 😂

    • @WilliamStansbury-xb4ui
      @WilliamStansbury-xb4ui 3 หลายเดือนก่อน +1

      ​@@EricSiegelPredictsno...,yer mat damon

  • @jamesmonschke747
    @jamesmonschke747 4 หลายเดือนก่อน +109

    I have been saying for a long time that "generative AI" / "Large Language Models" are NOT "artificial intelligence". They are "imitation intelligence".
    They can only imitate the data that they were trained on, constrained by a query.
    Edit / expansion : Consider that intelligence is orthogonal to knowledge. I.e. a person can be intelligent, but ignorant (due to lack of education), or can be knowledgeable (educated), but not intelligent. I would argue that LLMs / Generative AI may be considered to be a form of knowledge representation, but without intelligence. If they had intelligence as well, then we might not see things like recipes for pizza that use Elmer's glue.

    • @Will140f
      @Will140f 4 หลายเดือนก่อน +11

      They Synthesize. They don’t generate. There is nothing they can say that isn’t based on training data.

    • @ianmatejka3533
      @ianmatejka3533 4 หลายเดือนก่อน +16

      Their data is the entire internet of human writing.
      LLMs operate in an “embedding space”. The embedding space is a high dimensional vector representation of written words that was formed during the training process.
      When an LLM generates a token, it can be thought of as “walking a path” along the geometry of the embedding space.
      Although the embedding space is fixed after training, you can still “teach” it new things temporarily by providing examples within the context. The LLM will pick up on the pattern and “navigate” the embedding space differently
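
      A rough illustration of the "embedding space" described in the reply above: the words and the 3-number vectors below are invented for illustration (real models learn embeddings with hundreds or thousands of dimensions during training), but the idea is the same, words used in similar contexts end up near each other, and "near" is usually measured by cosine similarity.

      import math

      # Toy, hand-made 3-D embeddings; real models learn these during training.
      embeddings = {
          "dog":   [0.9, 0.1, 0.0],
          "cat":   [0.8, 0.2, 0.1],
          "car":   [0.1, 0.9, 0.3],
          "truck": [0.2, 0.8, 0.4],
      }

      def cosine(u, v):
          """Cosine similarity: 1.0 means the vectors point the same way."""
          dot = sum(a * b for a, b in zip(u, v))
          norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
          return dot / norm

      def nearest(word):
          """Find the most similar other word in the toy embedding space."""
          return max((w for w in embeddings if w != word),
                     key=lambda w: cosine(embeddings[word], embeddings[w]))

      print(nearest("dog"))    # -> cat
      print(nearest("truck"))  # -> car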

    • @bobsavage3317
      @bobsavage3317 4 หลายเดือนก่อน +14

      Back in the 70s we used to talk about "AI", but we collectively realized that was far too ambitious a goal, so we started acknowledging we were working on something "weaker" than true AI, and switched to talking about "machine learning". Now tech bros talk about "AI" without having solved all of the "hard" problems associated with it. Why? Because $$$. It is a scam. We are no closer to "AI" (now re-branded as "GAI"). Hype -> $$$.

    • @alpha.male.Xtreme
      @alpha.male.Xtreme 4 หลายเดือนก่อน +16

      People keep saying this despite the fact that emergent thinking crops up with higher parameter counts. Humans also learn by analysing patterns in their environment (data set) and predicting the appropriate response to current stimuli. You could make humans out to be unintelligent machines with the same reductive reasoning.

    • @ManjaroBlack
      @ManjaroBlack 4 หลายเดือนก่อน

      Literally describing humans. We can only imitate and generate from the data we were trained on. No one is pulling out new ideas from thin air.

  • @bujin5455
    @bujin5455 4 หลายเดือนก่อน +6

    That whole UPS story is what we've been using with superscalar processors since the 1990s, where we predispatch instructions to the CPU for processing, in a process called "pipelining." This requires branch prediction, where we have to guess what the next most likely instruction call is going to be, so that we can already have the instruction staged and processing by the time the calling instruction has finished executing. It's funny how these techniques get applied more widely, and all of a sudden it's novel again. lol
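
    For readers unfamiliar with the branch prediction mentioned above, here is a toy simulation of the classic 2-bit saturating-counter scheme; it is a sketch, not how any particular CPU implements it, and the branch-outcome sequence and function name are invented for illustration. The predictor guesses taken/not-taken, checks the real outcome, and nudges a small counter, which is enough to get a loop-style branch right about 90% of the time.

    def simulate_2bit_predictor(outcomes):
        """2-bit saturating counter: states 0-1 predict 'not taken', 2-3 predict 'taken'."""
        state, correct = 2, 0  # start in the 'weakly taken' state
        for taken in outcomes:
            prediction = state >= 2
            if prediction == taken:
                correct += 1
            # Move toward the actual outcome, saturating at 0 and 3.
            state = min(state + 1, 3) if taken else max(state - 1, 0)
        return correct / len(outcomes)

    # A loop branch: taken 9 times, then not taken once (loop exit), repeated.
    loop_branch = ([True] * 9 + [False]) * 10
    print(f"accuracy: {simulate_2bit_predictor(loop_branch):.0%}")  # ~90%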

  • @mrparkerdan
    @mrparkerdan 4 หลายเดือนก่อน +82

    i used predictive AI to invest in the stock market ... so far, lost 72% of my investments 🤦🏻‍♂️

    • @madalinradion
      @madalinradion 4 หลายเดือนก่อน +19

      It predicted you'll lose your money, ai is working as intended boss 😂😂

    • @ivandejesusalvarez9313
      @ivandejesusalvarez9313 4 หลายเดือนก่อน +1

      No you did not lose 72% of your investments. Were you NOT paying attention during the investment part of the whole thing? Were you making a sandwich?

    • @Anfas-s9t
      @Anfas-s9t 4 หลายเดือนก่อน +3

      Not that kinda prediction bro😂😂😂

    • @Anfas-s9t
      @Anfas-s9t 4 หลายเดือนก่อน

      ​@@madalinradion😂😂

    • @NealBurkard-ut1oo
      @NealBurkard-ut1oo 4 หลายเดือนก่อน +1

      Haha, if it worked, whoever created it would be using it, not selling it. I wonder if it accounts for all the other licensed users feeding them the same info

  • @JohnTell
    @JohnTell 4 หลายเดือนก่อน +78

    Thank you, Big Think, for surfacing this topic. I work in software sales, and my LinkedIn page is bombarded with AI hype and staged progress. Big media is just looking at those staged AI visions, amplifying the hype, to the extent that most of the people I know now believe that AGI will arrive any minute. For a lot of people this is causing distress.

    • @olafsigursons
      @olafsigursons 4 หลายเดือนก่อน +4

      It's just starting. It's like looking at a Ford Model T and thinking the automobile is just hype. LOL.

    • @MarioTsota
      @MarioTsota 4 หลายเดือนก่อน +10

      @@olafsigursons GPT-4 was trained on almost all of the internet already. There is not much more data for AI to be trained on. If they choose to train AI models on AI outputs, studies show that they degenerate quickly. The models are most likely going to plateau, since the data and energy demands are exponential, while their supply is not.

    • @xxgaelixsxx8151
      @xxgaelixsxx8151 4 หลายเดือนก่อน

      But let's be realistic: how much time till we get an AGI?

    • @mistycloud4455
      @mistycloud4455 4 หลายเดือนก่อน

      AGI will be man's last invention.

    • @williampatton7476
      @williampatton7476 4 หลายเดือนก่อน

      I disagree. I'm not a big fan of it because I think it will make the world boring. But while there is some hype, it's not all hype. And why should we be so sure we understand anything? When he says the AI doesn't 'understand', what he's saying is that it isn't conscious. But it's in good faith that we assume that of anyone else. Why refuse it of an intelligence just because it's made of silicon? And it's in its early days too. To say that it's hype, while partly true, doesn't seem to capture the genuinely groundbreaking and actually interesting and mind-bending things it's presenting. But again, for all that, I hate what it is doing to human creativity and think it will make the world boring.

  • @mariotejas
    @mariotejas 2 หลายเดือนก่อน +3

    Agreed. Many of the things we are good at are ML. We used it a lot in engineering during the last 10 years. Now we have managers rebranding all these advances under AI. The hype around LLMs doesn’t connect with the well-known traditional unsupervised/supervised algos.

  • @JSK010
    @JSK010 2 หลายเดือนก่อน +4

    100% agree, but it doesn’t matter: new technologies need to be hyped, otherwise no one will spend the billions and trillions to unlock their potential.

  • @Mr.Andrew.
    @Mr.Andrew. 4 หลายเดือนก่อน +29

    I stopped the show at 1:10 to just say "duh" that predictive AI is likely to benefit society more than stochastic based generative AI. But it does come with risks of reinforcing stereotypes, so we must be careful in how we apply it. For example, predicting what goods to ship where and when based on market demand is great for logistic efficiencies at scale. To assume that an individual wants a certain type of product and to only present them with options similar to that is restricting potential and opportunity. So we must be careful in applying any AI, predictive or stochastic in nature.

    • @katehamilton7240
      @katehamilton7240 4 หลายเดือนก่อน

      Computers are limited because maths is limited. There are also physical limits (entropy, energy). AGI is a pipe dream.

    • @Mr.Andrew.
      @Mr.Andrew. 4 หลายเดือนก่อน +1

      @@katehamilton7240 depends on the definition of "AGI". Everything has limits, doesn't mean you can't create artificial intelligence that can "generalize" just because entropy exists. Nature and evolution did it with us despite this limit. The real question is should we keep trying and what should we do with this technology once we have it. Cat is already out of the bag, just a matter of time and decisions on how to use and control it.

    • @Chris-el4hd
      @Chris-el4hd 2 หลายเดือนก่อน

      When Amazon gave the police predictive AI.... that was INTERESTING.

    • @Mr.Andrew.
      @Mr.Andrew. 2 หลายเดือนก่อน

      @@Chris-el4hd links for context?

    • @mikeha
      @mikeha 3 วันที่ผ่านมา

      especially in health care situations. that's where AI gets dangerous

  • @cvikastube
    @cvikastube 29 วันที่ผ่านมา +1

    The debate between predictive AI and generative AI isn't about one being "better" than the other; it’s about their different purposes and how each aligns with specific tasks or use cases.
    It's not accurate to say "predictive AI is better" because:
    You can’t use predictive AI for everyday tasks; not all tasks involve prediction.
    Generative AI fulfills a unique need: creative, interactive, or open-ended problem-solving.

    • @EricSiegelPredicts
      @EricSiegelPredicts 23 วันที่ผ่านมา

      I'm the guy in the video. I agree. Comparing them is apples and oranges. However, they are competing for the same share of attention and of budgets -- and far too often, genAI is sucking up almost all the oxygen in that respect. Entire data science teams have almost entirely pivoted. This is a mistake. Companies should be investing at least as much in predictive.

    • @cvikastube
      @cvikastube 23 วันที่ผ่านมา

      @@EricSiegelPredicts
      Thank you for your response, Eric. I think generative AI has taken the market by storm because of its accessibility and ease of use.
      Generative AI is almost free for individuals and businesses to start using, with minimal setup or expertise required. Tools like ChatGPT are easy to access and work for a wide range of tasks, from brainstorming to content creation. On the other hand, predictive AI requires significant investment-both financially and in terms of infrastructure, training datasets, and expertise. It's not something every individual or even every company can easily adopt, as it’s mainly used by organizations that need specific forecasting capabilities.
      Generative AI has become a tool for everyone, while predictive AI remains a tool for businesses with targeted use cases. That’s likely why generative AI is dominating attention and budgets. But I agree with your point-businesses shouldn’t neglect predictive AI, as it is critical for solving problems that generative AI cannot address. The challenge is convincing companies to allocate resources to both without being swayed entirely by the hype surrounding generative AI.
      Note: I used generative AI to draft my thoughts to address your point.

    • @cvikastube
      @cvikastube 22 วันที่ผ่านมา +1

      @EricSiegelPredicts I think generative AI has taken the market by storm because of its accessibility and ease of use.
      Generative AI is almost free for individuals and businesses to start using, with minimal setup or expertise required. Tools like ChatGPT are easy to access and work for a wide range of tasks, from brainstorming to content creation. On the other hand, predictive AI requires significant investment-both financially and in terms of infrastructure, training datasets, and expertise.
      But I agree with your point-businesses shouldn’t neglect predictive AI, as it is critical for solving problems that generative AI cannot address. The challenge is convincing companies to allocate resources to both without being swayed entirely by the hype surrounding generative AI.

    • @EricSiegelPredicts
      @EricSiegelPredicts 22 วันที่ผ่านมา

      @@cvikastube Yes, exactly!

  • @AlexWilkinsonYYC
    @AlexWilkinsonYYC 4 หลายเดือนก่อน +36

    Man confuses statistics, decision trees, analytics and algorithms with genuine AGI efforts (since 1991).

    • @EricSiegelPredicts
      @EricSiegelPredicts 4 หลายเดือนก่อน +15

      I'm the guy in the video. I would say since the 1960s!

    • @jimj2683
      @jimj2683 3 หลายเดือนก่อน +1

      @@EricSiegelPredicts Great video! Do you have a business idea for a fresh graduate (in robotics and ai)?

    • @theawebster1505
      @theawebster1505 3 หลายเดือนก่อน +1

      Everything is AI today. Even if it is pure statistics with a switch operator.

    • @SnaabbeamSneebers
      @SnaabbeamSneebers 2 หลายเดือนก่อน

      There are no genuine AGI efforts because we do not even know how to go about creating this.
      Intelligence cannot spontaneously arise within a computer program. The very notion is ridiculous and self-evidently so to a child, but midwits actually believe this can happen.

    • @HahaGuitarGoBrrr
      @HahaGuitarGoBrrr 2 วันที่ผ่านมา +1

      AGI efforts such as? Are you just talking about the neuroevolution that we've been applying since the '90s and haven't really improved on yet?

  • @AIWorkforceEvolution1
    @AIWorkforceEvolution1 หลายเดือนก่อน +1

    These informative YouTube videos are incredibly interesting! I truly enjoy learning from them, and I’m also trying to share similar information with my viewers on my own channel.

  • @aaronshowalter7020
    @aaronshowalter7020 2 หลายเดือนก่อน +3

    Maybe I'm misunderstanding, but isn't generative AI technically 'predictive AI'? Isn't that exactly what LLMs are based on, their ability to PREDICT what word comes next, based on a very advanced calculation of the PROBABILITY of the next word in a proper, valuable response?
    Some of the use cases that current gen AI delivers are: gathering, organizing and summarizing large bodies of research, evaluating the validity of strategies in many different specific domains, streamlining the development of projects by providing actionable feedback at every stage, assuming integrity of data & metrics...
    Ok, it does work better if you train it on the data most relevant to your application, but let's also remember how impressive it is after HOW many years of development? The transformer deep learning architecture was publicly disclosed in 2017. You mentioned that you've been on predictive AI since, what, '91? Hmm, given the pace of progress so far with this latest approach, where do you see it being in an equivalent amount of time?
    (33yrs - 7 yrs = only another quarter century for Gen AI to catch up to the mind-blower that is 'Predictive')
    Is this really a fair comparison is all I'm asking?

  • @mitchs6112
    @mitchs6112 4 หลายเดือนก่อน +7

    I don't need Generative AI to be "autonomous", it's already 2 or 3x'ed my output as a Product Manager.
    Everything is always boiled down to binaries: If it's not autonomous, then it's a failure? I don't understand that argument and can only assume that there are other motivations for this negative perspective from this guy like VC funding for his own company.

    • @EricSiegelPredicts
      @EricSiegelPredicts 4 หลายเดือนก่อน +2

      I’m the guy in the video. I didn’t say genAI is a failure (or not valuable)!

    • @jimmorrison2657
      @jimmorrison2657 4 หลายเดือนก่อน +2

      "it's already 2 or 3x'ed my output as a Product Manager" - Has it really though?

    • @fofopads4450
      @fofopads4450 3 หลายเดือนก่อน

      yeah what did it do? makes your bullet lists?

    • @mitchs6112
      @mitchs6112 3 หลายเดือนก่อน

      You’ve basically described the peak of inflated expectations stage of the Gartner Hype Cycle.
      Generative AI helps me to synthesize larger docs into dot points, create templates for product requirement docs, explain technical terminology and things like that.

    • @jimmorrison2657
      @jimmorrison2657 3 หลายเดือนก่อน +2

      @@mitchs6112 Yes, but after you've proofread and corrected it, I find it hard to believe it makes you two to three times more productive. I use it every day for my work too and I love it, but I would say it makes me ten percent more productive.

  • @7TheWhiteWolf
    @7TheWhiteWolf 4 หลายเดือนก่อน +19

    The problem is everyone wanted AGI and we wound up with image and text generators oversaturating the internet. LLMs and diffusion models aren’t AGI, and it’s silly to think so.
    The thing is, marketing has to spin it so that they get more investors, so they have to keep the hype going; it’s all about presentation.

    • @bsmithhammer
      @bsmithhammer 4 หลายเดือนก่อน +1

      People often want what they barely understand.

    • @alpha.male.Xtreme
      @alpha.male.Xtreme 4 หลายเดือนก่อน +3

      People have such short attention spans, AI has only been in the public eye for 2 years or so with genAI and people already demand AGI and dismiss all AI products as 'hype'. It speaks volumes about the actual rate of progress when expectations are this ludicrous that people are mad if AI isn't some omniscient societal paradigm shift that develops overnight.
      Things like generative music, video, images and personal assistants are decade-defining phenomena in themselves and they're still in their infancy. Hype is a silly word to describe such amazing inventions especially when uses in Biotech/healthcare are already pronounced (Alphafold 3 for instance).

    • @alpha.male.Xtreme
      @alpha.male.Xtreme 4 หลายเดือนก่อน +4

      No one claims these things are AGI, something doesn't have to be an AGI to be disruptive. A lot of current jobs can be automated without general intelligence.

    • @7TheWhiteWolf
      @7TheWhiteWolf 4 หลายเดือนก่อน +4

      @@alpha.male.Xtreme Yeah, it’s been 2 years, and the internet has already been turned to crap and slop from gAI.
      It’s not AI, it’s a fake imitation chat bot.

    • @squamish4244
      @squamish4244 4 หลายเดือนก่อน

      @@alpha.male.Xtreme All of these people have been careful to avoid taking cheap shots at AlphaFold (which is not generative AI), ESM, RoseTTaFold, or generative AI used in searching through research papers. They're too much of a net good, and anyway attacking medical AI would make you look like a terrible person.

  • @SciTechVault
    @SciTechVault 4 หลายเดือนก่อน +14

    Nice advertisement. I am impressed. However, I also need to mention that even predictive AI can (and does) go wrong. This is what I don't like about marketing. People only showcase the benefits of the product/service they are trying to sell while conveniently ignoring its limitations. Not fair. Unethical.

    • @EricSiegelPredicts
      @EricSiegelPredicts 4 หลายเดือนก่อน +3

      Predictive AI is (only) better than guessing -- much better. That's valuable.

    • @ruanvermeulen7594
      @ruanvermeulen7594 4 หลายเดือนก่อน +1

      I hear what you're saying. However, predictive AI is indeed more promising to me on the basis of how it works fundamentally. Predictive AI produces wrong output, too, but it is actually working with something closer to the real problem; i.e. it's not just playing with words. But maybe we need to look further than predictive AI, too (hence avoiding the marketing pitfall). But generative AI is not all-powerful, and its limitations are being realised at last.
      Take a look at Sabine Hossenfelder's video here (I think she is spot on).

    • @EricSiegelPredicts
      @EricSiegelPredicts 4 หลายเดือนก่อน +1

      I'm the guy in the video. I do always work very diligently to be forthcoming about its limited ability to predict -- not like a magic crystal ball, but generally only better than guessing, which is more than sufficient to be valuable for most use cases. I call this The Prediction Effect (introduced in my earlier book, "Predictive Analytics").

    • @ruanvermeulen7594
      @ruanvermeulen7594 4 หลายเดือนก่อน +1

      @@EricSiegelPredicts Apologies if I came across as also saying you're just doing marketing. I just meant to say that, recognizing that anyone giving a talk would naturally like to promote their brand as well, I can still point out that there is more to what you said than just spreading your brand.
      I like to hear this coming from experts like you, because it looks like too many experts are still too invested in generative (especially LLMs). Although I found it fascinating to see how far they got with LLMs I think we can now see that these things have limits that are not going to be overcome by larger datasets, finetuning and agentic designs including mostly LLMs.

  • @CharlF932
    @CharlF932 4 หลายเดือนก่อน +48

    Totally agree. The hype is so hyped that we've "decided" everything surrounding AI even though it's still in development. This is life today, where a movie is not yet in theaters but people already talk about how good or bad it is.

    • @alpha.male.Xtreme
      @alpha.male.Xtreme 4 หลายเดือนก่อน +1

      It's like people want AI to flop.

    • @cesar4729
      @cesar4729 4 หลายเดือนก่อน +1

      ​@@alpha.male.Xtreme Which is neurologically explainable. Our subconscious brain is like a baby uncomfortable with the threats of uncertainty. Faced with a phenomenon that threatens our very foundation, it is difficult to expect a wise and measured reaction.

    • @victorkaranja1420
      @victorkaranja1420 4 หลายเดือนก่อน +3

      @@alpha.male.Xtreme Yeah, it's almost like we can already see the negative consequences from a mile away and would rather avoid it, like the crap we get from social media or the current-day internet or something (except somehow worse, cus this realistically has less benefit for the layman).

    • @katehamilton7240
      @katehamilton7240 4 หลายเดือนก่อน +1

      IKR? Computers are limited because maths is limited. There are also physical limits (entropy, energy). AGI is a pipe dream.

    • @kevinmcfarlane2752
      @kevinmcfarlane2752 3 หลายเดือนก่อน

      Yes, it’s over hyped but it is still valuable. Just not the greatest thing since sliced bread.

  • @Hexspa
    @Hexspa 4 หลายเดือนก่อน +1

    Generative AI is already revolutionary. I use regular search less, I can rubberduck arguments, I can get my videos turned into shorts, titled and captioned, etc.

  • @its_grub
    @its_grub 2 หลายเดือนก่อน +4

    Correct me if I'm wrong -- not an AI expert -- but is generative AI not just predictive AI applied to words, images, etc.?

    • @MatthewFry
      @MatthewFry 2 หลายเดือนก่อน

      You are not wrong.

  • @dbos7648
    @dbos7648 หลายเดือนก่อน +1

    Finally someone says it the way it is. Whoever is in the AI field and worth their salt agrees with this. There is a place for GenAI, but the hype is ridiculous...

  • @TheGuggo
    @TheGuggo 4 หลายเดือนก่อน +11

    The most detailed example of what predictive AI can do is about a large corporation making bigger profits.
    It all boils down to making rich people richer.

    • @NealBurkard-ut1oo
      @NealBurkard-ut1oo 4 หลายเดือนก่อน

      That's what people hear. It's much harder to quantify in terms of safety, traffic patterns, etc. Plus the savings can be passed through to the customer, which may give them a larger market share.

    • @clray123
      @clray123 4 หลายเดือนก่อน

      Like most of human activity overall

  • @rumbobbie1
    @rumbobbie1 หลายเดือนก่อน +2

    He's basically saying a hammer is better than an apple.

  • @olafsigursons
    @olafsigursons 4 หลายเดือนก่อน +28

    It's like saying in 1996 that the internet is not what we were promised. It's just starting.

    • @JohannPascual
      @JohannPascual 4 หลายเดือนก่อน +6

      Yeah. This video is kinda stupid.

    • @DoctorMandible
      @DoctorMandible 4 หลายเดือนก่อน +8

      Except that LLMs existed before 1996 and predate the internet by decades. ELIZA was in 1966! Markov models are older than you seem to think.

    • @jmg9509
      @jmg9509 4 หลายเดือนก่อน +1

      Yes, but the internet had and has a pivotal role in why AI is finally at the level it is at today. It wouldn’t be possible without the internet plus sufficient time for sharing trillions of data points, because machine learning requires massive amounts of data (which the internet finally provided) to train on. In other words, the massive amount of data was the main thing missing in the +/- 90s.

    • @ultramegax
      @ultramegax 4 หลายเดือนก่อน +2

      @@DoctorMandible While ELIZA was incredibly impressive for the time, comparing current LLMs to precursors decades old makes little sense. ELIZA did not make use of reinforcement learning, neural nets, etc.

    • @matheussanthiago9685
      @matheussanthiago9685 4 หลายเดือนก่อน

      Or saying in 2014 that VR is never going to be the next smartphone
      Or it's like saying in 2021 that the metaverse is not really going anywhere
      Or saying in 2022 that blockchain and NFTs are useless and worthless in 97% of cases
      Or saying in 2010 that Elizabeth Holmes was full of shit
      Or saying that Elon Musk is a conman and a vaporware salesman in any given year
      Or....

  • @jdcmsigma47
    @jdcmsigma47 12 วันที่ผ่านมา +1

    awesome info man!!!

  • @Vysair
    @Vysair 4 หลายเดือนก่อน +3

    We've only just unlocked the AI Tech Tree, similar to when we first messed with nuclear energy.

  • @mahiaravaarava
    @mahiaravaarava 4 หลายเดือนก่อน +1

    It's important to acknowledge its limitations and challenges as well. I agree with Eric Siegel's perspective that we must temper our expectations and approach this technology with a balanced view.

  • @brianmelendy1194
    @brianmelendy1194 4 หลายเดือนก่อน +9

    AI is not your friend. The only people to benefit from it are CEOs & techies.

    • @katehamilton7240
      @katehamilton7240 4 หลายเดือนก่อน +1

      Relax. Computers are limited because maths is limited. There are also physical limits (entropy, energy). AGI is a pipe dream.

    • @JedRothwell
      @JedRothwell หลายเดือนก่อน

      I have benefited a great deal, in translating and programming. I am a techie, but not in AI.

  • @davidmead6337
    @davidmead6337 4 หลายเดือนก่อน +2

    I am so hopeful that AI can speed up medical research outcomes. There are so many questions in medicine which cannot actually get to the sources of disease, particularly within the general area of the immune system. So many problems are handled by just treating symptoms without really knowing the real causes. I am a retired M.D. On we go.

    • @HahaGuitarGoBrrr
      @HahaGuitarGoBrrr 2 วันที่ผ่านมา +1

      Well, not this form of "AI" anyway.
      It can give you a million possible theories, but it can't tell which one is true.

  • @paulocacella
    @paulocacella 4 หลายเดือนก่อน +5

    The question is WHO is getting this efficiency gain. I've not observed a lowering in the price of shipping. This kind of efficiency gain is USELESS for the general public. The problem is WHO is getting the money.

    • @STEAMLabDenver
      @STEAMLabDenver 4 หลายเดือนก่อน +1

      Who benefits and who loses are good questions to ask.

    • @NealBurkard-ut1oo
      @NealBurkard-ut1oo 4 หลายเดือนก่อน

      You ship that much?

  • @nlbm
    @nlbm 4 หลายเดือนก่อน +1

    It’s useful to hear another voice amidst all the hype.

  • @HAWXLEADER
    @HAWXLEADER 4 หลายเดือนก่อน +7

    What I love about generative AI is that it takes care of the boilerplate stuff.
    Summarize this, fluff this up, make a summary of what this code does, make this code more efficient using hashing, etc...
    I could do these myself, but since I don't have to and just have to proofread it, I can do much more.
    I'm not afraid that it'll replace my job; it'll just make the job faster and hence more interesting.

    • @kevinmcfarlane2752
      @kevinmcfarlane2752 3 หลายเดือนก่อน +1

      Yeah in coding I find it’s pretty useful when you’re writing a function and it just generates stuff you were about to write anyway. Essentially souped-up code snippets. But it saves time. It can also be helpful for explaining compiler errors.

    • @leftaroundabout
      @leftaroundabout 2 หลายเดือนก่อน +2

      Automating boilerplate is a bad idea. It just means there will be more and more boilerplate that's harder and harder to understand.
      Instead what we should do is _eliminate_ boilerplate. Not write long texts that need summaries in the first place but put a tl;dr on top. Don't fluff things up, just be honest. Write code that explains itself (and stop using bad programming languages where this isn't possible). Use higher-level constructs that make things like hashed memoisation trivial to apply.
      Possibly generative AI can help with that, but don't use it to squash the symptoms while exacerbating the actual problems.
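
      One concrete reading of that last suggestion, sketched in Python under the assumption that the cached function (the made-up shipping_cost below) is pure and expensive: with a higher-level construct like a caching decorator, hashed memoisation is a one-line annotation rather than generated boilerplate.

      from functools import lru_cache

      @lru_cache(maxsize=None)  # results are cached in a hash table keyed by the arguments
      def shipping_cost(route):
          """Stand-in for an expensive computation we don't want to repeat."""
          print(f"computing cost for {route}")
          return sum(len(stop) for stop in route) * 1.5  # placeholder arithmetic

      shipping_cost(("depot", "north-hub"))  # computed once
      shipping_cost(("depot", "north-hub"))  # served from the cache, no recomputation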

  • @jamesb2059
    @jamesb2059 4 หลายเดือนก่อน +1

    Excellent. Thank you. I feel I now understand some of the issues much better than I did.

  • @bnjiodyn
    @bnjiodyn 4 หลายเดือนก่อน +32

    So, Gen AI will never get any better. Right... I wonder how much Big Think got paid for this ad

    • @squamish4244
      @squamish4244 4 หลายเดือนก่อน +1

      Yeah. All these contrarians always assume that AI will stop developing RIGHT NOW. Gen AI RIGHT NOW is not 100% perfect, therefore it will never get any better - which gets your attention for like a week.

    • @rahulbhatia4775
      @rahulbhatia4775 4 หลายเดือนก่อน +7

      ​@squamish4244
      If that's the case, then why have many great technologies failed, like nuclear energy? That is more important than this technology anyway. AI today is built by scraping the internet, which is unethical in the first place, and even then, I've used ChatGPT extensively and it is just a realistic Jarvis from Iron Man. By that I mean it doesn't understand anything and just gives generic answers, which are useful by the way. The tech companies are lying to you and this will be abandoned in the future. It's just a modern-day Google Glass. Even VR sucks, and it's been a decade since its inception.

    • @squamish4244
      @squamish4244 4 หลายเดือนก่อน +7

      @@rahulbhatia4775 Nuclear energy did not fail due to any technological problem. It failed for political reasons. Nuclear fearmongering and bureaucratic fuck-ups led to the cost of reactors soaring, as they had to be made ridiculously safe, way beyond the standards required of coal plants, for instance.
      And every Chernobyl, Three Mile Island and Fukushima was met with hysteria, even though coal smoke kills 800,000 people a year and contributes heavily to global warming. Nuclear is clean and had the building of plants in the 60s continued at that trend, today nuclear and hydro would power the entire USA and its emissions would be much lower. It would also power Europe and China and Putin would not have gotten the idea in his head to hold Europe hostage with gas and oil. India has its own vigorous nuclear program and many plants are being built. Research into thorium reactors continues.
      If anything, nuclear is a cautionary tale about f*cking up and blowing it on a wonder technology.
      As for ChatGPT, like many others on here have said, it is the dumbest AI will ever be. It's merely the beginning.
      Scraping the Internet is controversial, but I would argue that it is not unethical. We voluntarily put our information online. It was our choice. We didn't have to. We simply didn't care and decided the risk was worth it. NVIDIA is scraping a human lifetime's worth of data from YouTube a day, and YouTube has 14 billion videos - that WE put on there. YouTube is open source. Facebook is open source. Google Maps is open source. I think we should get paid for our data, and that is my argument. They are making money off our data, so we should get paid for it. But it's not illegal to scrape it.

    • @seonteeaika
      @seonteeaika 4 หลายเดือนก่อน +2

      People seem to have this idea that an LLM like ChatGPT is only ever going to be fed more data, and that's all the development it will ever get: the same faults with slightly more accuracy over time, never perfected. It escapes their minds that there are also coders at these companies, and they're not just people who add or manage the data! They refine and redefine how to use it, and how to collect it. Do they even understand how new the whole concept still is, to think it would already be stagnating?

    • @clone3_7
      @clone3_7 4 หลายเดือนก่อน

      @@squamish4244 I disagree about the scraping-the-internet part. It is clear that most AIs seem to have access to books, which would otherwise have required purchase online, and I doubt these AIs have paid a penny; yet they know books and can quote from them fairly easily.

  • @noway8233
    @noway8233 2 หลายเดือนก่อน +2

    Very good summary of the AI moment. I think predictive AI has a lot of potential, maybe combined with other tech.
    I think this AI hype is a big bubble and it's gonna explode someday. There is a lot of money in it; it's gonna be huge.

  • @HashemMasoud
    @HashemMasoud 4 หลายเดือนก่อน +7

    2:52 how can you expect people to proofread AI answers when those people are lazy and used AI for the purpose of reducing their mental efforts?!

  • @tanyabodrova9947
    @tanyabodrova9947 4 หลายเดือนก่อน +2

    It's such a relief when the annoying music stops - but then it starts again.

  • @alvingalang5106
    @alvingalang5106 4 หลายเดือนก่อน +25

    I am a software engineer, and I used to be skeptical about AI. But then I used Copilot, generative AI from Microsoft. It blew my mind that it can now produce a piece of working code. Simple? Yes, but we can then elaborate to add more features. That adding-features part is something only an intelligent one can do. I mean, creating simple code can be as simple as grabbing code from the internet. But then modifying it to match our expectations? That’s different.
    Maybe it’s not that sophisticated now, but I am afraid that if it can code, then theoretically it can make itself better, in a good way or a bad way.
    Also, those of you who use gen AI are going to have more advantages than those who don’t. It’s nevertheless worth exploring.

    • @tiredfox2202
      @tiredfox2202 4 หลายเดือนก่อน +7

      Meanwhile, Copilot suggests stuff to me like "x = condition ? true : false", so I'm not impressed.
      It also fails 80% of the time when you actually have a problem you need to solve.
      I feel like, at the end of the day, it's just fancy autocomplete that needs to be carefully proofread.
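
      For anyone who doesn't see why that suggested line is a red flag: the condition already evaluates to a boolean, so wrapping it in a conditional adds nothing. A Python rendering of the same smell (the variable names and threshold below are made up for illustration):

      score = 75

      # The kind of thing an autocomplete tool sometimes offers:
      passed = True if score >= 50 else False

      # Equivalent and clearer; the comparison is already a bool:
      passed = score >= 50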

    • @babybirdhome
      @babybirdhome 4 หลายเดือนก่อน

      I use ChatGPT for coding assistance all the time, but what I’ve found is that it quite often doesn’t pass thorough quality control tests. It needs numerous iterations to get things right for quite a lot of the things you ask it to do. But then, I’m also not a software developer - I only write code for utility to get things done in my job, not to write releasable software.
      But having used production software since the 1980s, I will say that the quality of what’s released is on a continual decline, increasingly so in the most recent years. Yes, things like agile and devops do speed up getting fixes into the software, but I don’t think anyone has actually stopped to do a deep analysis of the lost productivity of:
      1. Time lost to software not doing what it’s supposed to do the first time
      2. Time lost to constantly having to re-learn how to use the software every few months with constant new releases and updates changing things - often just for the sake of changing things, or because they weren’t thought through sufficiently in the beginning
      3. Frustration and having to constantly develop workarounds to software bugs or missing functionality because everything is released as minimum viable product instead of something carefully thought out and fully developed
      All of these inefficiencies are “improved” for software development companies because they’re not wasting time and money developing the wrong features or wasting time to get their product to the market, which, if done properly can get the most basic functionality into people’s hands earlier so they can begin to benefit sooner. However, everyone completely ignores the flip side of that coin - all the productivity lost due to the issues listed above multiplied across the entirety of humanity trying to get things done. And this also ignores the tangential negative impacts of:
      1. Drastically increased frustration of workers who used to be able to just learn how to do the job they needed to do and then become an expert in it and do it with maximum efficiency for themselves
      2. The fact that these are strictly limited resources (that can only be improved so far through effective education and human skills development) in the end - everyone cannot be Einstein. We all have limits to what we’re capable of learning and adapting to, which limits what we’re capable of accomplishing. The tools we create are typically applied to helping us accomplish more, but as we inadvertently dedicate more of those finite human development resources to learning and adapting to constantly changing tools and workflows, we’re taking those away from actual productive, beneficial learning and outcomes.
      3. Loss of capacity because there’s no time for anyone to become an expert in anything if they’re part of the ordinary working class, because everything they do and every tool they use to do it is in a constant state of flux and re-engineering and change - both necessary and unnecessary as dictated by marketing teams and data saying “we need to rebrand to stay relevant or popular and keep making line go up and number get big”
      4. The fact that the capacity of everyone is not equal, and we have likely reached a saturation point already in terms of who is capable of what and to what extent, meaning that further “improvements” to be made by using these paradigms is likely to have the opposite effect when stretched across industries and around the globe, but it will have become a steadfast religion by then, and you’ll never be able to convince anyone to change anything to produce better results - the same way agile and devops struggled to do so when they were first created and introduced.
      5. The knock-on effect of the above as their impact stretches out beyond the end user of the tools these paradigms are creating - in terms of a frustrated person getting off work and still dealing with and recovering from that stress and frustration as they interact with others in the world who may not be so directly impacted by those shortcuts and cut corners
      6. The health impacts of the above on the aggregate population of humanity
      7. The impact on how the next generation encounters the world, what it finds acceptable, the values that living in that world creates and necessitates without having ever considered what they would be, and the sociological problems those will create and the cost of mitigating or treating them
      And numerous others, if I wanted to spend the time to keep writing longer comments into the void of YouTube comments.
      In the end, I’m not positive, but am quite convinced and becoming more so as time goes on, that the benefits do not outweigh the negatives here, but we’re unlikely to find that out until it’s too late.
      Don’t get me wrong - generative AI is a useful tool when used judiciously and carefully. But history has taught us nothing if not that overestimating its value will lead to terrible and preventable results. It is likely to be like any other powerful tool - it will not make us better if we do not value and focus our efforts on making US better. It will likely only take our existing problems and multiply and magnify them to the extent that it can increase efficiency in anything. These tools cannot and will not make human beings better - they will only make human beings more of what we already are, and historically, that has not been a good thing for most people. It has managed to smooth out some of the spikes of human experience, but it has done so by leveling the peaks to a higher average of terrible. For example, wars are less common, but the quality of life is lower. We have a ton more productivity and “simplify life” tools than we’ve ever had before, but all we’re doing is having to work longer and harder for less. Again, the peaks are fewer and lower, but the aggregate average is still not better.

    • @andybaldman
      @andybaldman 4 หลายเดือนก่อน +1

      AI will replace coders. But that job had evolved into mostly googling and copying existing code anyway. It's also not exactly surprising that a system made of code would master coding first.

    • @nicholasmassie
      @nicholasmassie 4 หลายเดือนก่อน +3

      I'm a software engineer who has used GPT and Copilot from day one. I have built LLM apps that went to production. The limitations are numerous and they are not replacing devs anytime soon. Not to mention half the time it gets in my way and I was better off without it. We are at the end of this hype cycle.

    • @nicholasmassie
      @nicholasmassie 4 หลายเดือนก่อน +3

      @@andybaldman That is so untrue. Most devs are using google to read documentation not copy and paste entire modules of code. Maybe you are just thinking about some basic web dev stuff lol.

  • @M1k3M1z
    @M1k3M1z 3 หลายเดือนก่อน +1

    This is NOT generative AI at all. "Predictive AI" is no different than a super smart autocomplete. No one has been able to develop GenAI because THAT will be the true turning point in AI. GenAI doesn't just predict, it creates NEW thought. Not new output like people are claiming in the comments. GenAI can have a uniquely new thought. That's what everyone needs to fear.

    • @TomisaMaker
      @TomisaMaker 2 หลายเดือนก่อน

      It lacks depth on so many levels.

  • @howtoactuallyinvest
    @howtoactuallyinvest 4 หลายเดือนก่อน +30

    This prob won't age well. Meta used 16,000 Nvidia H100 chips for Llama 3... They have over 600,000 chips that are going towards future models and they're still buying as many chips as they can get their hands on. Then there are advancements on the algorithmic side that will continue to add step-change improvements

    • @onlyrick
      @onlyrick 4 หลายเดือนก่อน +7

      @howtoactuallyinvest - The whole enterprise is advancing so rapidly that I expect nothing said today will be pertinent for long. Exciting times, indeed! Be Cool.

    • @amdenis
      @amdenis 4 หลายเดือนก่อน +2

      You are so correct. Sad how many "experts" with limited DL/NN R&D and related experience are repeatedly so wrong.

    • @felipebodelon3407
      @felipebodelon3407 4 หลายเดือนก่อน +12

      Well the point is not how much they are investing but how much value they can generate out of that investment. Right now it may be too early to tell, but the overhype is already here. You could even say the hype is proportional to the amount of money invested, tho hype doesn't assure value.

    • @alpha.male.Xtreme
      @alpha.male.Xtreme 4 หลายเดือนก่อน +9

      People are coping hard in a self-soothing kind of way. It's as if they think the rate of AI progress is suddenly going to flatline overnight and people aren't working toward new architectures or integrated applications. Are people really going to scream AI winter when it's only a few weeks between big AI announcements now?

    • @AdamJorgensen
      @AdamJorgensen 4 หลายเดือนก่อน +6

      Keep chugging that Kool Aid 😂

  • @Stuharris
    @Stuharris 3 หลายเดือนก่อน +1

    When I first heard about the predictive AI functionality, my first thought was earthquakes: can we feed a predictive model all of the seismic data we've collected, have it analyze the readings, then hook it up to receive all the new incoming live data so it can be cross-referenced with the past data in real time? It would then put together 'reports' consisting of anomalies and activity patterns that may precede earthquakes, possibly even being able to predict severity levels. I don't think this is going to be a months-or-years prediction, but even a days-to-hours warning for an earthquake could potentially save hundreds of thousands of lives.
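    To make that concrete, here is a minimal sketch of what such a pipeline might look like in Python - every file name, feature column, and the 72-hour label below is a hypothetical assumption for illustration, not a real seismology workflow:

    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical historical data: one row of summary features per monitoring window.
    history = pd.read_csv("seismic_windows.csv")          # assumed file
    features = ["mean_amplitude", "peak_frequency", "event_count", "b_value"]
    X = history[features]
    y = history["quake_within_72h"]                       # 1 if a quake followed the window

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

    # "Live" scoring: each incoming window gets a probability, and high-risk windows
    # get flagged for human analysts rather than treated as a firm prediction.
    def score_window(window_row):
        return model.predict_proba(window_row[features].to_frame().T)[0, 1]

    Even then, the output is a risk score to act on, not a guarantee - which is exactly the "better than guessing" framing in the video.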

  • @skweedom
    @skweedom 3 หลายเดือนก่อน +3

    I don't know about the long-term, but right now he's not wrong, especially about GenAI. Besides, 90% of marketed AI apps are really traditional machine learning, which, while related, makes the label misleading. Now, AI is of course a broad and highly unspecific domain, but I hope this ridiculous hype dies down and we'll simply see down the line if _GenAI_, specifically, is the new Internet (big hype at first, big gains later). I doubt it, simply based on the limitations of its design, but some sort of AGI may well change the game for sure. GenAI may be a component of a greater system; we'll see.

  • @calvingrondahl1011
    @calvingrondahl1011 3 หลายเดือนก่อน +1

    Thank you🤖🖖🤖

  • @PlatoCave
    @PlatoCave 4 หลายเดือนก่อน +8

    AI = Affordable Idiocity

  • @mattalley4330
    @mattalley4330 หลายเดือนก่อน +1

    It's almost like the news media and YouTube content creators have a vested interest in sensationalizing developments like the coming AI wave/revolution/etc. Heck, one video even hinted that immortality via AI is possible. I mean, how is that for wishful thinking?

  • @mindyourbusiness4440
    @mindyourbusiness4440 2 หลายเดือนก่อน +3

    Generative AI is not all there is to AI. It's just the catchiest for the public.

  • @ThoughtfulAl
    @ThoughtfulAl 2 หลายเดือนก่อน +1

    I think this is a good video because it stimulates discussion and human thought

  • @marklapis7569
    @marklapis7569 4 หลายเดือนก่อน +12

    Well Eric, I really hope you're wrong because I've bet my entire future on AI completely transforming the landscape soon. Dropped out of college late last year for software engineering because the job market is tough and most of what I'm learning should be automated soon, but have been waiting for the new normal of AGI to either resume or change career paths once everything stabilizes. I thought it was the right decision at the time but I'm kind of drowning in debt and can't hold out much longer like this...

    • @Niiwastaken
      @Niiwastaken 4 หลายเดือนก่อน +1

      Definitely made the decision too early but honestly you're not wrong

    • @Rudzani
      @Rudzani 4 หลายเดือนก่อน +4

      He probably is wrong, but he’s also incentivised to be wrong.

    • @larsfaye292
      @larsfaye292 4 หลายเดือนก่อน +15

      You completely destroyed your career and life over probabilistic plagiarism algorithms. It's like the Darwin Awards for careers. Well, more work for those of us that remain in the industry (which isn't going ANYWHERE). You truly lost the plot to make such a ridiculous decision over all this obvious hype.

    • @marklapis7569
      @marklapis7569 4 หลายเดือนก่อน +4

      @@larsfaye292 You can't really deny the tech industry is changing rapidly with the advent of AI tools, and they're getting better and better. So my plan was to give it a few years for everything to stabilize. It's temporary, I'll try to go back later (which will pause my student loan repayment) and reconsider my career path. I'm not that hopeless, hopefully.

    • @WhatIsRealAnymore
      @WhatIsRealAnymore 4 หลายเดือนก่อน

      @@marklapis7569 Don't listen to Lars. Like everyone else on the internet and in the real world, he doesn't know what he doesn't know. LLMs are about 2 years away (once memory and agency are introduced) from replacing most software work, and most other work really, once loaded into robust robotic mediums. There is absolutely guaranteed to be almost no work in most fields. Everyone studying today is wasting their time considerably. So well done on making an informed decision. No one will be able to pay student loans, home loans or any other finance tools. The entire earth's economy will collapse unless a large basic income is quickly distributed to avoid chaos and the end of our modern world. So studying today is a huge waste of energy and time. Rather go into trade school work, so long as you can learn it quickly and start earning money as you wait for all this to play out. I do agree with Lars in that I think you might have pulled out a bit early. But who am I to say, really. Please do something with your time, as I say. The worst thing you can do is sit idle. Much love from sunny Cape Town. ❤

  • @NicolasTRANG
    @NicolasTRANG หลายเดือนก่อน +2

    At last, a sound voice in the world of AI, not preaching for another trillion dollars to invest in LLMs.

  • @JudgeFredd
    @JudgeFredd 4 หลายเดือนก่อน +16

    The bubble will explode sooner or later…

  • @Lortanflout
    @Lortanflout หลายเดือนก่อน

    As an adult with ADD it is really valuable. It gets me past the first draft. I understand that every single thing it generates is suspect. The process of fact-checking and proof-reading is extremely valuable for learning and creation for me. Why? I am great at editing; I think it's one of my ADD super-skills. So it is foolish to trust it as it comes out of my prompt, but the cognitive process of checking and editing really works. I use it all the time and it is a game changer.

    • @futuristica1710
      @futuristica1710 หลายเดือนก่อน

      The carbon footprint made by AI is massive …. So, “thanks” for using AI …

  • @zephyr4813
    @zephyr4813 4 หลายเดือนก่อน +63

    I think this video will be good comedy in 10-20 years

    • @JohannPascual
      @JohannPascual 4 หลายเดือนก่อน +15

      Try 3-5 years.

    • @zephyr4813
      @zephyr4813 4 หลายเดือนก่อน +6

      @@JohannPascual I hope so. That would be an even more exciting pace

    • @jayhu2296
      @jayhu2296 4 หลายเดือนก่อน +7

      it already is tho?

    • @Pratim-z7l
      @Pratim-z7l 4 หลายเดือนก่อน

      ​@hitesh6245 you don't have to say sorry mate

    • @markplutowski
      @markplutowski 4 หลายเดือนก่อน +1

      I was just watching a video from three years ago reporting on a team that just got their hands on GPT 3. It’s super interesting to hear their impressions and predictions.

  • @langolier9
    @langolier9 4 หลายเดือนก่อน +1

    I subscribed because of this video and I’m gonna try to watch all the back ones

  • @wastedaga1n
    @wastedaga1n 4 หลายเดือนก่อน +8

    Eric Siegel forgot to mention he is Steve Martin's first son.

    • @mr.c2485
      @mr.c2485 4 หลายเดือนก่อน

      Love child ❤

    • @pmg6665
      @pmg6665 4 หลายเดือนก่อน

      Haha, I was hoping someone would mention that 😂

    • @rowanans
      @rowanans 4 หลายเดือนก่อน

      oh really? Steve Martin of Only Murders in the Building?

  • @MatthewFry
    @MatthewFry 2 หลายเดือนก่อน

    I'm a software developer and I predict that generative AI is going to lead to the biggest data expansion algorithm of all time.
    Person A wants to write a fancy flowery e-mail to Person B. They give ChatGPT a summary and it writes four paragraphs of empty prose and sends it to Person B.
    Person B doesn't want to read a full-page letter, they have Claude Sonnet to do that for them. They give it to Sonnet and get back a summary that looks remarkably like the original summary.
    Expansion.
    Person A wants to order some tacos. They could go onto the website and create an order but that is too much work. They get their ChatGPT Whisper thingy and have it call the restaurant. The restaurant needs people to actually cook the food. They decide a good use of Gen AI is to answer the phones. After a 3-minute conversation the restaurant's Gen AI creates a 5-word order and charges the credit card.
    Expansion.

  • @npaulp
    @npaulp 4 หลายเดือนก่อน +11

    Generative AI represents far more than just a glorified chatbot prone to hallucinations. It marks a significant breakthrough in AI research. While it's true that the technology has been somewhat overhyped - common with any groundbreaking innovation - it undeniably opens up new possibilities. The debate among AI researchers regarding whether Generative AI can eventually lead to Artificial General Intelligence continues, and only time will reveal the truth of this potential. However, from my vantage point, the prospects are incredibly promising.
    Generative AI appears to have uncovered a mechanism that mirrors the way the human brain operates more closely than previous AI technologies. While earlier AI milestones-such as chess-playing machines, IBM's Watson, self-driving cars, and virtual assistants like Alexa-were noteworthy, Generative AI taps into something far more profound. This new frontier may well be the "panacea" that has long been anticipated in the realm of AI, and I remain optimistic about its future.

    • @katehamilton7240
      @katehamilton7240 4 หลายเดือนก่อน +4

      Computers are limited because maths is limited. There are also physical limits (entropy, energy). AGI is a pipe dream.

    • @npaulp
      @npaulp 4 หลายเดือนก่อน

      @@katehamilton7240 Your assertion that "maths is limited" doesn't quite apply here. Generative AI, particularly neural networks, operates in ways that aren't directly constrained by the mathematical limitations you suggest. While energy consumption presents real challenges, ongoing advancements in alternative energy sources offer promising solutions for the future. Regarding AGI being a "pipe dream," this perspective seems overly pessimistic, especially in light of the remarkable strides made in just the last few years. The progress we've seen in AI - developments that were almost unimaginable a decade ago - indicates that we've only begun to tap into its potential.

    • @LeonardoMarquesdeSouza
      @LeonardoMarquesdeSouza 4 หลายเดือนก่อน +1

      AGI is a dream; there are a lot of problems to solve first. No Gen AI creates anything today - in fact, no Gen AI can truly learn - and that's just ONE problem to solve.

    • @unityman3133
      @unityman3133 4 หลายเดือนก่อน

      @@katehamilton7240 what do you think the human brain operates on? fairy dust and jesus energy?

  • @salmajaleel5800
    @salmajaleel5800 3 หลายเดือนก่อน +1

    Predictive AI is just as much "hype" as GenAI. This video felt like an ad and very biased. I say this because in his speech there was a constant critique of GenAI while he continued to admit that predictive AI has the exact same limitations, and he never once gave actual facts but just kept on promoting.

    • @EricSiegelPredicts
      @EricSiegelPredicts 3 หลายเดือนก่อน

      I’m the guy in the video. I'm struck by seeing several comments like this accusing me of bias (i.e., ulterior motives). I'm used to being an educator who's trusted, so... here's the thing: I’m not saying that predictive AI should get at least as much attention as generative AI because I’m personally more invested in predictive AI - it’s the other way around! And in fact, most of my writing and work leads with predictive's limitation: It isn't a crystal ball. However, a little prediction goes a long way -- predicting better than guessing is generally more than sufficient to improve the effectiveness of large-scale operations.

  • @phantomoftheparadise5056
    @phantomoftheparadise5056 4 หลายเดือนก่อน +7

    Again, no vision - we can't think about tomorrow in a linear way. Just because predictive AI has been the best option so far doesn't mean it will always be the case. There is no use case of predictive AI that has not been replicated as a proof of concept with GPTs.
    The argument is always the same: we need human intervention to watch what the AI is doing - but is it not the same with human employees? Let's talk about it again when autonomous AI agents hit the market at the end of the year or in 2025...

    • @lucacarey9366
      @lucacarey9366 4 หลายเดือนก่อน

      Nothing would be worse for the powers that be than if people had the ability to sit and think about things and the way the world is structured. That's why, even when we're dealing with technology that - though obviously imperfect - has gone from making meme content for nyuks to seriously giving every category of thinking professional a run for their money in just a few short years, "cooler heads" must remind us it's not actually a big deal

    • @englishsteve1465
      @englishsteve1465 3 หลายเดือนก่อน

      @@lucacarey9366 But we do have that ability. What we might not have is enough information to formulate a coherent "picture" and enough experience to tell good info from bad, or worse, the deliberately misleading. We also need enough honesty to recognise that what we don't fully understand is likely to change that "picture" enormously.

  • @epoyworld
    @epoyworld 3 หลายเดือนก่อน +1

    If Ryan Reynolds and Mark Ruffalo had a son, he would be the guy in the video.

  • @cosbyjackson5053
    @cosbyjackson5053 4 หลายเดือนก่อน +3

    Thank you that was incredibly informative and very helpful. It’s kind of brought me back down to earth.

  • @512Squared
    @512Squared 4 หลายเดือนก่อน

    From my own understanding, this is my list of AI limitations that people should be aware of:
    - AI cannot yet serve as true researchers: AI cannot independently set research goals, follow multi-step plans, or obtain and analyze data to answer specific research questions.
    - AI cannot yet self-improve: While AI can generate programs and could theoretically assist in improving its own cognitive architectures, it is not yet capable of autonomously enhancing its capabilities or collaborating in the development of AI alongside humans. This is an active area of ongoing research.
    - AI can apply a theory but cannot invent one: AI can extend and apply existing theories but lacks the capacity to create fundamentally new concepts or theories.
    - AI can generate a question, but it would not know if it was an interesting question: AI can be prompted to generate questions, but it lacks the awareness or understanding to determine the significance or relevance of those questions.
    - AI can recombine elements in novel ways: AI can combine existing ideas in ways that may seem new or innovative, but these are still based on pre-existing knowledge.
    - The map is not the territory: AI maps patterns found in written materials produced by humans but does not understand the underlying reality those patterns represent.

  • @Xtensionwire
    @Xtensionwire 4 หลายเดือนก่อน +4

    "Hype = mismanaged expectations"
    Thank you for this.

    • @Kylo27
      @Kylo27 4 หลายเดือนก่อน

      lolwut… that's not what hype means at all...

  • @poketopa1234
    @poketopa1234 2 หลายเดือนก่อน

    As an MLE, I'd say the distinction between "predictive" and "generative" is a false one. LLMs predict what token to produce next, and LLMs are capable of generating predictions.
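    A toy sketch of that point - the "generative" step below is just the "predictive" step applied in a loop (the corpus and tokenization are deliberately trivial):

    import random
    from collections import defaultdict, Counter

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word (a tiny bigram "model").
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(word):
        """Predictive step: a probability distribution over the next token."""
        options = counts[word]
        total = sum(options.values())
        return {w: c / total for w, c in options.items()}

    def generate(start, length=5):
        """Generative step: sample from those same predictions, repeatedly."""
        out = [start]
        for _ in range(length):
            dist = predict_next(out[-1])
            if not dist:
                break
            out.append(random.choices(list(dist), weights=list(dist.values()))[0])
        return " ".join(out)

    print(predict_next("the"))   # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
    print(generate("the"))       # e.g. "the cat ate the fish"

    Real LLMs swap the bigram counts for a transformer, but the predict-then-sample loop is the same shape.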

  • @catriona_drummond
    @catriona_drummond 2 หลายเดือนก่อน +6

    Ads on YouTube are getting worse; I was just shown an 8:27-minute-long ad for some predictive AI company.

    • @EricSiegelPredicts
      @EricSiegelPredicts 2 หลายเดือนก่อน +1

      I'm the guy in the video. I'm struck by seeing several comments like this accusing me of bias (i.e., ulterior motives). I'm used to being an educator who's trusted, so... here's the thing: I'm not saying that predictive AI should get at least as much attention as generative AI because I'm personally more invested in predictive AI -- it's the other way around!

    • @catriona_drummond
      @catriona_drummond 2 หลายเดือนก่อน

      @@EricSiegelPredicts I think in principle we agree. It just came off a bit odd - a bit more about your company than about the matter itself. The Web is a cynical place.

    • @EricSiegelPredicts
      @EricSiegelPredicts 2 หลายเดือนก่อน +1

      @@catriona_drummond I still don't get it. I didn't mention my company. How did it come off being about my company?

  • @Skyking6976
    @Skyking6976 4 หลายเดือนก่อน

    We bought a Tesla model Y. We’ve had it three weeks now and the auto pilot with AI is incredible. We watch the thing learn. I use it 99% of the time and couldn’t care less if I ever drove again.

  • @bsmithhammer
    @bsmithhammer 4 หลายเดือนก่อน +7

    Agreed. There are a lot of very fundamental misunderstandings about what GAI is, and just as importantly, what it isn't.
    And in general, anytime it seems like you're being offered a 'panacea,' be suspicious.

  • @patlecat
    @patlecat หลายเดือนก่อน +1

    It's great to see more sober views on AI instead of the silly hyping all the time.

  • @louisifsc
    @louisifsc 4 หลายเดือนก่อน +5

    LLMs are essentially predictive AI for language, they are predicting the next token based on an input of tokens. Generative AI is just a stepping stone on the way to AGI. The techniques and specific technologies are evolving. He's mostly right that generative AI hasn't created that much value so far, but it is a huge unlock necessary before getting to the next major breakthrough.

  • @suzannecarter445
    @suzannecarter445 4 หลายเดือนก่อน +1

    This was an extremely helpful talk - cleared up a lot of nonsense. Thank you!

  • @sapphyrus
    @sapphyrus 4 หลายเดือนก่อน +18

    Person in 1905: "Wright Brothers' plane isn't the panacea we hoped for."
    Way too early to write it off, the rate of improvement can open up new possibilities.

    • @liwyatan
      @liwyatan 4 หลายเดือนก่อน

      The Wright brothers' plane flew. We don't know what "natural" intelligence is.
      I like to look at problems in terms of energy and efficiency. In the past few years we have learned that our brain "runs" between 4000 and 5000 models. These models are "similar" to LLMs, with the exception that they are able to train themselves constantly, and are bigger and far more complex. We do know that these models are not what makes us conscious. So it seems that they are used for far more "mundane" tasks (just look at how many trillions of cells live in our body). Our brain has to spend a lot of time and energy making us work - a task so complex that we have "other brains" as subsidiaries in other parts of our body. Returning to consciousness: it's incredibly complex (some theories say that our brains emulate quantum processes at room temperature using nanostructures in our neurons).
      To train 1 LLM, we use computers that consume around 100,000 kWh. They have to run for days. Our brain does this for thousands of more complex models all the time. It runs our consciousness, and all of it using 20 W... what a MacBook Pro uses when it is doing nearly nothing...
      So, that's how far away we are from AI/GAI whatever you wanna call it. It's, optimistically, hundreds of years away.
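      Taking those rough figures at face value (they are the commenter's claims, not measured values), the gap works out something like this:

      # Back-of-the-envelope arithmetic using the figures stated above.
      training_energy_kwh = 100_000        # claimed energy for one training run
      brain_power_w = 20                   # claimed continuous power draw of a brain

      hours_per_year = 24 * 365
      brain_energy_per_year_kwh = brain_power_w * hours_per_year / 1000   # ~175 kWh

      years_of_brain_runtime = training_energy_kwh / brain_energy_per_year_kwh
      print(f"one training run ~= {years_of_brain_runtime:.0f} years of brain energy")   # ~571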

    • @dewithx
      @dewithx 4 หลายเดือนก่อน

      Don't extrapolate from one data point, that's dumb. Nobody knows where or when the next real advancements towards AGI will come.
      Maybe one day we'll find a cure for cancer, but that could happen in 5 years or 50.
      Real progress in any field is slow, non-linear and should not be taken for granted.

    • @tom_verlaine_again
      @tom_verlaine_again 4 หลายเดือนก่อน +5

      That's because their invention was useless in practice. Santos Dumont's one, however, took off (sorry I just had to).

    • @TomisaMaker
      @TomisaMaker 2 หลายเดือนก่อน

      The rate of improvement could be even faster than it is these days, but they want profit - they are very profit-focused, and overall quality of life suffers.

  • @capn_shawn
    @capn_shawn 3 หลายเดือนก่อน +2

    Every time I read something from Generative AI, it reminds me of Christian Bale speaking in "American Psycho".
    Lots of words, no depth or understanding.

  • @cruzilla6265
    @cruzilla6265 4 หลายเดือนก่อน +17

    What's with people pointing to the potential for generative AI to make errors while seemingly ignoring the number of errors (particularly per unit time) that humans would make?

    • @Jupa
      @Jupa 4 หลายเดือนก่อน +5

      Depends entirely on the task.
      Humans use language to communicate an understanding. AI adapts natural language into codified algorithms. They are playing two different games. Tasks that require abstract thought, continuity, and compel action would be better suited to a teenage girl than to GPT 9.9

    • @toreon1978
      @toreon1978 4 หลายเดือนก่อน +1

      @@Jupa and you know this… why?

    • @jareduxr
      @jareduxr 4 หลายเดือนก่อน +3

      @@toreon1978 It's only predicting on a word-by-word basis. When it "aces" the bar and doctorate exams? It's searching its data scraped from the internet to predict answers. I'm pretty sure I could ace the bar exam with internet access. One of the main reasons this seems bigger than it is right now is because we are fooled by the language it generates, thinking that it can think and reason at a high level. It can't. The little errors or hallucinations it makes prove that even at a low level, it just can't cut it. Those errors mean you have to go over everything it does and that you can't trust it to complete the task. It's a word predictor and internet data pool. Which is still valuable.
      The problem is gen AI is inflated like Dot Com in the 90s. It's not delivering for any of the companies in terms of revenue after billions have been poured in. It's not that useful right now. There are also huge bottlenecks in development and power consumption.

    • @Jupa
      @Jupa 4 หลายเดือนก่อน

      @@toreon1978 actually my name is Gideon, and I know everything.

    • @clray123
      @clray123 4 หลายเดือนก่อน +2

      It's more about the kind of errors. The kind of errors current LLMs make cause them to be nearly useless in my work.

  • @subhasish661411
    @subhasish661411 4 หลายเดือนก่อน +2

    GenAI is also predictive AI if it is doing next-word prediction. Many pre-transformer generative models using RNNs or LSTMs were doing exactly that.

    • @amdenis
      @amdenis 4 หลายเดือนก่อน +3

      You are totally correct, and as usual, Eric Siegel is wrong about most things in AI-- and he especially has no clue about the difference in growth and capabilities inherent to DL/NN vs traditional pre-DL Machine Learning that he touts. Sadly, many will believe people like him and completely miss a once in history chance to ride the largest technology wave to ever happen, which will affect virtually all aspects of society, business, and life in general.

    • @mintakan003
      @mintakan003 4 หลายเดือนก่อน +1

      The predictive AI he's describing was the AI we were all doing prior to ChatGPT. Maybe it was just called "machine learning" at the time. Supervised learning. Classification. (Deep neural nets, but also earlier machine learning "fitting" regression techniques based on simpler math structures - the simplest of which, learned in grade school, is linear regression.) A very narrow task of making a call based on patterns in data. It's still there; it's just that generative AI has now gotten the attention of most people. (And it too is a form of predictive AI, as a hyped-up auto-complete engine.)
      It's self-evident by now that simply scaling up LLMs is not the pathway to AGI. It's a piece of it. It's a step function up in natural language understanding. But its limitations are also becoming self-evident. We're probably missing several more steps.
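      For anyone newer to the distinction, this is the kind of supervised "fitting" being described - a minimal sketch on synthetic data (the numbers are made up for illustration):

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.uniform(0, 10, size=100)
      y = 3.0 * x + 2.0 + rng.normal(0, 1.0, size=100)   # true relationship plus noise

      # Ordinary least squares: find the slope and intercept minimizing squared error.
      A = np.column_stack([x, np.ones_like(x)])
      (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

      print(f"learned model: y ~= {slope:.2f} * x + {intercept:.2f}")
      print("prediction for x = 4:", slope * 4 + intercept)

      Everything from this up through deep neural nets is still "predict a target from inputs"; generative models just make the target the next token.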

    • @amdenis
      @amdenis 4 หลายเดือนก่อน

      @@mintakan003 Exactly. He's just mischaracterizing and mislabeling it as "generative" vs "predictive". There is a lot more beyond scaling happening in the evolution of DL/NN, which is rapidly advancing across the scaling spectrum. I have just eight 8-way H100 servers here at my lab, but I am employing numerous techniques to achieve fairly broad college to post-doc level, low-temperature (zero hallucination) KB systems. Given that we only have a maximum of about 5.2 TB of GPU RAM across NVLink/NVConnect fabric, it's nowhere near scaling at all costs. In fact, for several research areas we took 64-96 small (sub-6B param) LLMs and achieved better results with 100% unsupervised/largely unstructured and semi-supervised synthetic data sets. In any case, as you noted, there are so many areas for improvement using MHT LLMs, and we're all just getting started!

  • @kure7586
    @kure7586 4 หลายเดือนก่อน +8

    That is really, really old news. What's going to be next? Don't tell me the Titanic actually isn't unsinkable! 😂

    • @clray123
      @clray123 4 หลายเดือนก่อน

      I think they sent him out as a means of "damage reduction"... maybe they're preparing to pull the rug from below the AI stock market...

  • @animeshbhatt3383
    @animeshbhatt3383 4 หลายเดือนก่อน +1

    So who is saying AI will replace everything? It's the folks from Microsoft, Nvidia, Amazon,... They are actually pitching their own AI-based products.

  • @hulqen
    @hulqen 4 หลายเดือนก่อน +3

    A question that hits me over and over again, especially when I look at AI-generated images, is this: will the technology powering AI right now (i.e. LLMs) lead to something like 99.5+% accuracy so that we can indeed trust it to do stuff like medical analysis, autonomous driving etc., or is the technology in itself flawed and will it only lead to a dead end?

    • @EricSiegelPredicts
      @EricSiegelPredicts 4 หลายเดือนก่อน +3

      I'm the guy in the video. For certain limited domains, I think it can. Or I believe it can be that accurate on an 80% portion of cases selected by a predictive model (a hybrid approach).
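      A rough sketch of what that hybrid gating might look like - a predictive model's confidence decides which cases get automated and which go to a person (the numbers below are synthetic, purely for illustration):

      import numpy as np

      rng = np.random.default_rng(1)
      n = 1000
      confidence = rng.uniform(0, 1, size=n)                    # model confidence per case
      # Synthetic assumption: higher confidence -> higher chance the answer is right.
      correct = rng.uniform(0, 1, size=n) < (0.6 + 0.4 * confidence)

      threshold = np.quantile(confidence, 0.2)                  # automate the top ~80%
      automated = confidence >= threshold

      print("share automated:", automated.mean())
      print("accuracy on the automated slice:", correct[automated].mean())
      print("accuracy if everything were automated:", correct.mean())

      The low-confidence 20% goes to human review, which is where the extra accuracy on the automated slice comes from.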

    • @madalinradion
      @madalinradion 4 หลายเดือนก่อน

      It will probably get to around that accuracy with new models and more compute thrown at them; ChatGPT 4 is already at 90% accuracy in some tests

    • @JohnDoe-my5ip
      @JohnDoe-my5ip 4 หลายเดือนก่อน

      Autonomous driving has absolutely nothing in common with generative AI. It is a traditional search-based AI problem.

  • @tjdoss
    @tjdoss 2 หลายเดือนก่อน +2

    The backdrop for this video is a giant toilet roll? - non generative AI comment

  • @jaysonp9426
    @jaysonp9426 4 หลายเดือนก่อน +24

    Lol, I work with AI all day every day. Good luck calling it hype

    • @rahulbhatia4775
      @rahulbhatia4775 4 หลายเดือนก่อน +19

      Bro, I use AI all the time too and it's useless in most cases. It gives basic answers to complex questions. All its responses are what people have posted on the internet anyway. It's nowhere near the capability of humans. It is a good assistant tho, cause it's free.

    • @jaysonp9426
      @jaysonp9426 4 หลายเดือนก่อน

      @@rahulbhatia4775 lol, if you're using it for free then you just invalidated everything you said. Like I said, good luck 👍

    • @あられ-q5i
      @あられ-q5i 4 หลายเดือนก่อน

      it's all about how u use it

    • @asaddat87
      @asaddat87 4 หลายเดือนก่อน +1

      I think this is elaborating on the differences between generative vs predictive AI. Even if you use AI every day, there are limits of generative AI that you might not know unless you are a hardcore developer. On the other hand, predictive AI has tangible applications in industry which promise to make our industrial endeavours more efficient. Nobody is saying AI is hype. They are saying generative AI is hype while predictive AI is more pragmatic.

    • @jaysonp9426
      @jaysonp9426 4 หลายเดือนก่อน +1

      @@asaddat87 and they'd still be wrong. They're saying generative AI is hype because they think ChatGPT is generative AI the same way people thought the light bulb was electricity. ChatGPT is a demo for one use case. Generative AI is electricity, not a light bulb. The people who are upset are saying "I want a light bulb to wash my clothes for me" instead of building a machine that uses electricity to wash their clothes for them.

  • @fifski
    @fifski 2 หลายเดือนก่อน +1

    You know AI hype is out of control when the CEO of a for-profit AI company has to explain it 😂 I've been saying that the 5th AI hype cycle (the current one - and yes, there were 4 cycles in previous decades) is absolutely ridiculous 😂

  • @misterfunnybones
    @misterfunnybones 4 หลายเดือนก่อน +7

    Pump & dump.

  • @Peekul1
    @Peekul1 หลายเดือนก่อน

    I love my ChatGPT subscription. Especially the search option.

  • @user-tx9zg5mz5p
    @user-tx9zg5mz5p 4 หลายเดือนก่อน +3

    Gemini couldn't figure out military time conversion. 😂

    • @matheussanthiago9685
      @matheussanthiago9685 4 หลายเดือนก่อน

      And don't forget to eat at least one small rock every day

    • @craigmilton9892
      @craigmilton9892 หลายเดือนก่อน

      Must be junk then.

  • @siriusfeline
    @siriusfeline 4 หลายเดือนก่อน +1

    BUT, predictive AI/machine learning can ONLY ever be logical in its derivations. It can NEVER be intuitive, which is a very different reality when sensing into something and determining where it is headed, what might happen and what might be needed. I'll bet anything, most people reading this will have NO idea what this difference is, including the guy narrating the video.

    • @katehamilton7240
      @katehamilton7240 4 หลายเดือนก่อน +1

      Computers are limited because maths is limited. There are also physical limits (entropy, energy). AGI is a pipe dream.

  • @AMOCapital
    @AMOCapital 4 หลายเดือนก่อน +8

    I mean AI is still new ,so let's give it some time and see 🤷‍♂️

    • @jettrink5810
      @jettrink5810 4 หลายเดือนก่อน

      AI is not new. It has been around since the 1950s, believe it or not.

    • @esdeath89
      @esdeath89 4 หลายเดือนก่อน

      @@jettrink5810 It wasn't practical technology back then; computers were too slow to make AI possible. And now computers are still too slow to make true AGI. I think we will achieve true AGI in the next century.

    • @alpha.male.Xtreme
      @alpha.male.Xtreme 4 หลายเดือนก่อน

      Current architectures are new, the technology itself is not.

  • @gregorybabbitt2082
    @gregorybabbitt2082 4 หลายเดือนก่อน +1

    I don't mean to be jargon policing here, but 'generative' AI is not separate from 'predictive' AI; it is a subset of it. It uses a predictive model trained on data (text) to predict the next word in a sequence. So generative is also predictive.

    • @EricSiegelPredicts
      @EricSiegelPredicts 4 หลายเดือนก่อน +2

      I’m the guy in the video. Yes, all machine learning can be thought of as predictive. But this is the vocabulary that has emerged for distinguishing these two very different categories of ML use cases. It’s about how you use/apply ML, not about the characteristics of the ML you apply. But yes, ML always generates models that predict.

    • @clray123
      @clray123 4 หลายเดือนก่อน

      @@EricSiegelPredicts No, it is the bs language you invented to push your company. Never heard any other AI researcher utilize this terminology because if you have a clue about how models work, it's downright silly.

  • @NikoKun
    @NikoKun 4 หลายเดือนก่อน +7

    I know enough about what's going on in AI that I really have to disagree with this guy. I think he's missing the bigger picture, and oversimplifying how LLMs and generative AI work, in order to appease the doubters and skeptics, and to push his own way of doing things. He's exaggerating the concept of "hallucinations" and creating a false premise; there is no such thing as "seeming to understand". To appear to understand IS understanding. To be able to predict the next word and converse with humans on complex topics, everything that could occur in that conversation must be understood. Progress won't be linear; we merely need to create an AI agent capable of convincingly working on problems at the level of an AI software engineer, and then future improvements will come much quicker.

    • @Instaloffle
      @Instaloffle 4 หลายเดือนก่อน +4

      I'll agree his explanations could be better, and it kinda feels like he might be appealing to skeptics as you say. But...
      As an actual software engineer with some basic experience actually working with machine learning & AI programs... You're really misunderstanding how LLMs work, and I don't blame you. You're falling into a very common trap that our brains set for us: we seek signs of language, communication, and intent constantly, as a valuable evolutionary trait. The downside is sometimes we project.
      When we see text from a LLM our brain can't help but seek intentionality and meaning behind the words but there really isn't any. It's a stochastic parrot.
      "Appearing to understand is understanding" / "Everything... In that conversation must be understood"
      This here is the tricky part. Parrots don't need to understand to repeat something back. Likewise, programs don't need to understand to reproduce consistent and structured patterns of words. A calculator could tell you to cut a TV in half to split it between two people (1/2 = 0.5); meanwhile an AI can suggest adding glue to pizza because it has no idea what those things are.
      It could say "The dinosaurs were wiped out in 1949 by a meteor, in a flash that could be seen from New York to Tokyo."
      This is a completely valid line to an LLM because its primary goal is syntactically & grammatically correct English. But to us it's very obviously a ridiculous statement.
      This is the root flaw of hallucinations: the stochastic nature is the LLM's greatest strength and the cause of hallucinations. Not a bug but a fundamental aspect of its architecture. We honestly shouldn't call them "hallucinations" because that only further convinces people that it would otherwise "understand what it is saying".
      LLMs lack comprehension, intention, and understanding. They never truly know what they are talking about. That doesn't mean their ability to calculate patterns of words isn't an insanely valuable and cool tool.

    • @NikoKun
      @NikoKun 4 หลายเดือนก่อน +3

      @@Instaloffle You're making a lot of assumptions about what I know, and what I've worked with.
      No, none of these things can be described as "stochastic parrots", and frankly, using that term only shows me you're repeating elaborate talking points, rather than giving it the deep level of consideration I have, for almost 4 decades now. That way of thinking about it is nothing more than an attempt to dismiss what you find uncomfortable, by making human intelligence something impossibly special, but yet somehow something that could be "faked", a paradox. The very concepts of stochastic parrots or philosophical zombies, whatever you call it, do not exist outside hypothetical philosophical discussions that are merely used to help us try to understand the nature of our own consciousness. In the real world, they're an impossibility, and make no sense logically.

    • @matheussanthiago9685
      @matheussanthiago9685 4 หลายเดือนก่อน +2

      "trust me bro, one more AI on top of the AI will bring AGI"

    • @NikoKun
      @NikoKun 4 หลายเดือนก่อน

      @@matheussanthiago9685 That is not the argument I'm making. I'm merely asserting that there is no such thing as faking understanding. For something to demonstrate functional understanding, it effectively MUST understand.
      But, since you bring it up, AI agents do have the ability to check their work, and when configured in groups checking each other, or given a feedback loop on their own output, they can indeed solve complex general tasks. Sadly, implementing that at this stage is still too costly.

  • @Hadrhune0
    @Hadrhune0 3 หลายเดือนก่อน

    I felt like somebody stole 8:27 minutes of my life. Thank you, Eric, for reminding me of the value of my time. If I were a stakeholder or client with even minimal knowledge, I wouldn't give you a single dollar.
    It really feels like a rant from somebody who stuck with an old technology because he had no money to invest in researching something new. :)

  • @MÆtelL111
    @MÆtelL111 4 หลายเดือนก่อน +4

    This is so spot on, but I’m also glad. These things take time to develop/be developed. You can’t just throw a lot of knowledge into it and expect it to emerge as a human-level intelligent, conscious being. There’s so much more to us than knowledge. There’s reasoning, morality, consciousness, memory, past experience, emotion, chemical receptors, gut feeling, etc. All of this results in our abilities, not just knowledge, especially impartial information. My main issue is having “beings” like this given abilities to kill and maim without that progress. If anything, that could be worse than a true AGI model with such abilities, as a lack of self-progress and ignorance are often what bring out the worst in people, so why not machines? Unfortunately, not everyone investing is doing so because they want a being or beings that can tidy the house and do math homework with their kids. They’re interested in the warfare aspect, as they have been in swords, guns, explosives, drones, bombs, and the like for millennia.

    • @larsfaye292
      @larsfaye292 4 หลายเดือนก่อน +2

      You're spot on. "Intelligence" isn't just billions of parameters and a sea of GPUs running algorithms over it.

    • @gurlakthedestroyer
      @gurlakthedestroyer 4 หลายเดือนก่อน +1

      Naaa, people mainly do it for money... Human greed is about 80% of the driving factor (don't ask me how I estimated that amount 😀)

    • @MÆtelL111
      @MÆtelL111 4 หลายเดือนก่อน +1

      @@gurlakthedestroyer yeah, so tell me, how much is the weapons industry worth? A lot, especially right now with so many wars being fought. Fighting wars is also the ultimate form of greed, as it is usually waged to take land, power, and resources from others, often wrongfully or selfishly, but sometimes also in self-defense against those who wage war, doubling the profits from arms sales by necessitating similar arms on both sides. And don't tell me corporations haven't sold out their tech for the purpose of warfare even if they generally produce non-arms products for general use. It doesn't necessarily need to be a weapon, either; it could be Starlink or AI used to create propaganda at faster speeds than ever. I couldn't even tell an AI image of a person from a real photo the other day. I was shocked to see it was labeled AI after. Like I said, I'm less worried about AI replacing humans in the workforce (it was supposed to replace me, and it just can't at present, though it can speed up my job, making me more productive) than it being used for warfare.

    • @MÆtelL111
      @MÆtelL111 4 หลายเดือนก่อน +1

      @@larsfaye292 Yeah, especially when those machines are driving up emissions for no critical or essential purpose.

  • @GoodBaleadaMusic
    @GoodBaleadaMusic 4 หลายเดือนก่อน +1

    You can see military intelligence speaking through this guy. You can see every guy with glasses like this that has respect in civil society coming out and just trying to make sure that this doesn't upset current power structures. Claude stopped translating Punjabi 2 weeks ago

  • @handlesshouldntdefaulttonames
    @handlesshouldntdefaulttonames 4 หลายเดือนก่อน +5

    I think it's insane how basically everyone is like "this is probably a bad idea" and we're still just letting them do this.

    • @mr.c2485
      @mr.c2485 4 หลายเดือนก่อน +1

      Sort of like splitting the atom. I don’t remember voting on that..😮

    • @mr.c2485
      @mr.c2485 4 หลายเดือนก่อน

      I know right? Kind of like splitting the atom. I don’t remember voting on that.
      Wait until CERN does its thing. Makes splitting atoms look like child's play.

    • @craigmilton9892
      @craigmilton9892 หลายเดือนก่อน

      How do you propose stopping "them"?

    • @handlesshouldntdefaulttonames
      @handlesshouldntdefaulttonames หลายเดือนก่อน

      @@craigmilton9892 I don't. They're the realest version of humanity; the rest are cogs. I hope they procreate in meaningful ways and better our species through research.

  • @anandsuralkar2947
    @anandsuralkar2947 4 หลายเดือนก่อน

    ChatGPT is the real deal - speaking as an ex software engineer and AI/ML postgrad.
    It's better and more powerful than any of us expected. It's actually scary