Is AI A Bubble?

  • Published 27 Sep 2024
  • Go to ground.news/husk to understand how context shapes history
    and the different ways we interpret current events with Ground News. Save 40% on their unlimited access Vantage plan with my link.
    Is AI a bubble? Nvidia, Microsoft, Apple, Google, and everybody in between doesn’t seem to think so. But I kinda do.
    I’ve been skeptical of a lot of tech in the past few years, and while AI certainly has shown promise over the past…uh, few decades it’s been around, it’s this current iteration of the technology that I have some doubts about.
    Today I’ll be discussing why I feel this way, and hoping that this video doesn’t age like cottage cheese.
    That is to say, stinky.

Comments • 2K

  • @knowledgehusk
    @knowledgehusk 3 months ago +203

    Dive into the historical context behind today’s headlines and deepen your
    understanding of current events with Ground News. Try Ground News today and get 40% off your
    Vantage subscription: ground.news/husk

    • @jonahblock
      @jonahblock 3 months ago +2

      Not media hype, techbro and investor hype

    • @willg3220
      @willg3220 3 months ago +9

      Put that ad at the end. Felt like it went on for days. I'll trade you 👍

    • @Technobitz
      @Technobitz 3 months ago

      Shh you aren’t supposed to know that

    • @dzhang4459
      @dzhang4459 3 months ago

      @@willg3220 get SponsorBlock

    • @BillClinton228
      @BillClinton228 3 months ago +2

      Does anyone remember big data or blockchain? If your business wasn't doing something on the blockchain in 2018 you weren't considered "cutting edge"... nowadays everyone is shoving the term AI into everything. It's just another tech fad...

  • @ronoc9
    @ronoc9 3 months ago +2920

    The word "potential" is doing a lot of heavy lifting when it comes to AI.

    • @stedwards311
      @stedwards311 3 months ago +102

      Lifting the entire industry AND its hype machine...

    • @FantasmaNaranja
      @FantasmaNaranja 3 months ago +130

      i swear every time i bring up that AI shouldn't be as widely used as it currently is, because it's simply not that serviceable yet, AI bros immediately jump on me to tell me that it's got potential bro and that i shouldn't blame people for firing all their employees and then going bankrupt when their AI scheme doesn't actually work

    • @HeyIsaiddontlookwtfwhatiswrong
      @HeyIsaiddontlookwtfwhatiswrong 3 months ago

      Potential for abuse

    • @exeggcutertimur6091
      @exeggcutertimur6091 3 months ago +27

      Moore's law has been dead and buried for a while now. I'm skeptical general purpose AI will ever be digital.

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +11

      @@FantasmaNaranja ok, I'm going to play the role of the AI bro and say that you must be proactive and think of the future, not only of the current moment, as it's not very smart to never plan for the future, since it'll eventually come.

  • @Nestor_Makhno
    @Nestor_Makhno 3 months ago +2733

    "Real stupidity beats artificial intelligence every time" - Terry Pratchett

    • @willg3220
      @willg3220 3 months ago +44

      Interesting. I'd say depends on which A.I. and which stupid

    • @Web720
      @Web720 3 months ago +83

      Artificial Intelligence when Natural Stupidity shows up.

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +17

      I'm afraid not. Any intelligence wins over any stupidity. That's a humorous quote, first and foremost.

    • @USER-vb7ro
      @USER-vb7ro 3 months ago +9

      AI can be stupid too.

    • @pootan9365
      @pootan9365 3 months ago +23

      @@willg3220 Weaponized autism? is there any AI that can beat that?

  • @kris1123259
    @kris1123259 3 months ago +733

    "AI could make our jobs easier". The problem with that is that as far as bosses are concerned they are going to use that as an excuse to pay you less. Productivity will go up but pay will go down

    • @ajohndaeal-asad6731
      @ajohndaeal-asad6731 3 months ago +73

      that’s literally happening now

    • @CrimsonMagick
      @CrimsonMagick 3 months ago +105

      Sure. The fundamental problem is capitalism, the issue isn't unique to AI.

    • @TheAweDude1
      @TheAweDude1 3 months ago +27

      That has literally been happening for hundreds of years.

    • @anthonyvillanueva5226
      @anthonyvillanueva5226 3 months ago

      I hate that the decisions are being made for us by people who just want to cut corners

    • @ajohndaeal-asad6731
      @ajohndaeal-asad6731 3 months ago +8

      @@CrimsonMagick Exactly

  • @fiddleriddlediddlediddle
    @fiddleriddlediddlediddle 3 months ago +1534

    The only thing AI has done is ruin Google Images results.

    • @Reiikz
      @Reiikz 3 months ago +113

      ikr?
      it's so annoying that now it's impossible to find genuine images.

    • @thehammurabichode7994
      @thehammurabichode7994 3 months ago

      @Reiikz Google Search / the YouTube search bar have been shockingly, AGGRAVATINGLY awful for so long, I can't believe it.

    • @denno445
      @denno445 3 months ago +97

      yeah also shitty ai artworks at markets and on business cars and signs

    • @justinwescott8125
      @justinwescott8125 3 months ago

      I know this channel is all about AI hate, but this is the most insane comment I have ever seen. The following 2 paragraphs are from the journal Science, Vol. 370.
      Artificial intelligence (AI) has solved one of biology's grand challenges: predicting how proteins fold from a chain of amino acids into 3D shapes that carry out life's tasks. This week, organizers of a protein-folding competition announced the achievement by researchers at DeepMind, a U.K.-based AI company. They say the DeepMind method will have far-reaching effects, among them dramatically speeding the creation of new medications.
      “What the DeepMind team has managed to achieve is fantastic and will change the future of structural biology and protein research,” says Janet Thornton, director emeritus of the European Bioinformatics Institute. “This is a 50-year-old problem,” adds John Moult, a structural biologist at the University of Maryland, Shady Grove, and co-founder of the competition, Critical Assessment of Protein Structure Prediction (CASP). “I never thought I'd see this in my lifetime.”
      And I could name 100 other ways that AI is currently improving the field of medicine, and improving the lives of people with physical and mental disabilities.
      And I personally have benefitted from it. I have a grandmother who only speaks Spanish, so I've never been able to talk to her directly before, but now I can using ChatGPT. We both open the app on our phones, and it will translate what we say and even read it out loud.
      So, while I know you're angry on behalf of creatives, think for a second that maybe this YouTube channel has its own goals, and its own reasons for spreading negative propaganda that's FULL of mistakes, btw.

    • @himalayo
      @himalayo 3 months ago +12

      that's because you only know about generative AI.

  • @AmitSingh-vt6ws
    @AmitSingh-vt6ws 3 months ago +494

    As a software engineer, I've used LLMs many times to quickly get some boilerplate code or some simple scripts. But at this point I've been burned by these LLMs so many times that I don't trust a single generated statement. The thing is, LLMs are good at writing elegant code, so they kinda trick you into believing the code is correct, but you can never trust it.

    • @DandeDingus
      @DandeDingus 3 months ago +78

      this, so much. like it could help, but it's so error prone that you can't trust anything it spits out before double-checking, which defeats the entire purpose

    • @nomms
      @nomms 3 months ago +33

      @@DandeDingus as a sysadmin who needs to code a bit, but not often, they're really solid. I'm better at tweaking and troubleshooting existing scripts than writing from scratch. I don't know the general patterns for getting complex tasks done with code. GPT generally gives me the template I need to get something done. Saves me a decent bit of time. It's also handy at explaining chunks of code I don't understand.
      But yeah, it hasn't made programming effortless by any means, just mildly more bearable lol

    • @fullsendmarinedarwin7244
      @fullsendmarinedarwin7244 3 months ago +5

      The latest version, 4o, seems more reliable for writing code that actually runs, but it's not great at following instructions sometimes

    • @darksidegryphon5393
      @darksidegryphon5393 3 months ago +25

      "It's shiny bullshit, but still bullshit."

    • @duckpotat9818
      @duckpotat9818 3 months ago +4

      @@DandeDingus depends. I work in biology, including simulations, which are often made of several simple modules connected in complex ways (that a biologist would know, not a programmer). Getting ChatGPT to write the modular bits of code and then just checking that everything fits together is much faster than writing everything from scratch.

  • @kalliste23
    @kalliste23 3 months ago +303

    A good example was how everything was "nano" not so long ago. Carbon nanotubes were going to be used to build everything.

    • @deadturret4049
      @deadturret4049 3 months ago +25

      The iPod nano

    • @matheussanthiago9685
      @matheussanthiago9685 3 months ago +5

      Yeah that really didn't go anywhere

    • @thehammurabichode7994
      @thehammurabichode7994 3 months ago +23

      @@matheussanthiago9685 Who else was excited for graphene, as a youngin'?

    • @dewyocelot
      @dewyocelot 3 months ago +27

      I mean, the issue is assuming these things happen overnight. Material science is a long term technology. We *will* see awesome things from carbon nanotubes, it’ll just be like, ~10-20 years from now. I feel like the same is true of AI. People in general think when they hear about new science/technology that that means it’s ready to be everything people have made speculations of, when it’s more of “we’ve figured out we *can* do this, now we have to figure out how to do it quickly, cheaply, and effectively.”

    • @kalliste23
      @kalliste23 3 months ago +4

      @@dewyocelot there is plenty of computing tech that went nowhere or hit a wall. Superconducting Josephson junctions, for instance, have an important niche, but they were expected to be the future of computing back in the sixties. CRTs had a long and storied history and then reached the limits of usefulness. And so on.

  • @redkaufman892
    @redkaufman892 3 months ago +1017

    Genuinely I want the option to turn off the ai shit sometimes. It’s just annoying and gets in the way of things I’m actually trying to do. I don’t need a third grader to attempt what I want to do before I fix it when I can just do it myself and save a headache.

    • @Grizabeebles
      @Grizabeebles 3 months ago

      Think of how stupid the average person is, and realize half of them are stupider than that.
      -- George Carlin

    • @Chord_
      @Chord_ 3 months ago +64

      Right?! Search results are so useless now.

    • @matheussanthiago9685
      @matheussanthiago9685 3 months ago +85

      Every time I can detect a yt channel blatantly using AI in their thumbnails, in their text, in their voice, etc.
      I hit "do not show me this channel again"
      I wish everything else had that option

    • @cygnusghedepereu6885
      @cygnusghedepereu6885 3 months ago +22

      It's only going to get worse; dead internet is no longer a theory

    • @Capitalisst
      @Capitalisst 3 months ago

      First draft second draft.

  • @DeadBaron
    @DeadBaron 3 months ago +797

    This is "the cloud" all over again, which just means your data is hosted on a third-party server. But the term "the cloud" caught on and I hate it so much

    • @gvi341984
      @gvi341984 3 months ago +11

      The cloud did ruin an entire industry?

    • @GoldenBeans
      @GoldenBeans 3 months ago

      me having to explain to tech illiterates that no, your pictures are not stored in actual clouds in the sky, they are stored on somebody else's computer somewhere else in the world

    • @rheokalyke367
      @rheokalyke367 3 months ago +118

      Unlike AI, file sharing on a third party server is actually pretty useful.
      Mostly for handling projects together within companies. In fact it's so useful that it was a widely used system even before "The cloud" was a thing!

    • @extremeencounter7458
      @extremeencounter7458 3 months ago +22

      Eh, just an easy way to describe data as non-local

    • @sidbrun_
      @sidbrun_ 3 months ago +11

      Not sure if you mean online storage or "cloud computing"? Like game streaming, running processes on a server, and not really owning a computer and instead streaming it all. To be fair those are all integral parts of most AI models right now, nobody's fully using "cloud computing" but instead it's a lot less obvious and behind the scenes. Online storage is pretty useful to me as a backup and for sharing files, I use it all the time.

  • @rustymustard7798
    @rustymustard7798 3 months ago +549

    Here before the kind woman at the bank tells me "I'm sorry Rusty, we can't process your transaction, the AI is down."

    • @cajampa
      @cajampa 3 months ago +43

      This is already happening.
      The bank issuing the charge card I use has blocked my charges several times, even when I have money in my account, because they started running some algorithm that limits big purchases made in too short a time compared to how much money you usually have available on the account. That is, not the actual money but past money. It is crazy annoying. But that card has zero fees on anything, including currency transfer fees, so I take it and jump through hoops to even be able to use my own money

    • @thesockpuppetguy7626
      @thesockpuppetguy7626 3 months ago

      So, banks use AI for various things. The ATMs you use? Guess what? They have AI in them as well. They use algorithms to determine purchasing patterns based on purchasing history and predictors such as influx of funds into an account. Have you ever gotten a call from a banker after you had a 5× higher than normal deposit into your bank account? Guess what? An algorithm determined that based on history and other factors, you're about to purchase a house/car/horse/small human child to make small arms, etc.
      The scary thing? It's VERY RARELY WRONG.
      How do I know this? I work in a bank, and I have to periodically make these calls. I can count on one hand the number of times that the call I was told to make had to be pivoted to a different call because the algorithm was wrong.
      But hilariously, when it comes to actual purchases, it is wrong. A fucking lot.
      I can't tell you how many people come in and are like "I went to buy x and it won't go through" and it turns out that our algorithm was like "Woah there buddy, you normally shop at Target and now you went to Walmart. That's obviously fraud," and it blocks the card.
      So it's a weird thing. But I live it. Every day
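
      The purchase-pattern flagging described above can be sketched in a few lines. This is a toy illustration only: every rule, threshold, and function name here is invented for the sketch, and real bank systems are proprietary and far more involved.

```python
from statistics import mean, stdev

def flags_purchase(history, amount, merchant, known_merchants):
    """Return True if a purchase looks unusual for this customer."""
    # Rule 1: unfamiliar merchant ("you normally shop at Target,
    # now you went to Walmart").
    if merchant not in known_merchants:
        return True
    # Rule 2: amount far above the customer's usual spending.
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    # Flag anything more than 3 standard deviations above the mean.
    return sigma > 0 and amount > mu + 3 * sigma
```

      Even a detector this crude reproduces the failure mode the comment describes: any legitimate purchase outside the customer's routine gets blocked.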

    • @rustymustard7798
      @rustymustard7798 3 months ago

      @@cajampa I use an old-school local independent bank run by good people. Over the years the 'system' was down occasionally, mainly due to internet outages. On those occasions they would grab a pen, paper, and calculator and keep things running smoothly. The manager is a smart, competent woman and so is her team, so I trust them more than the big corpo bank with big corpo policies.
      One time scammers tried to drain my account and within minutes the bank manager was personally calling me with a new card number to use.
      If I didn't have this bank as an option, I'd just keep my money in a coffee can at home and fill up a gift card or prepaid debit to buy something rather than deal with these BS scam corpo banks.

    • @Capitalisst
      @Capitalisst 3 months ago

      "Sorry, our AI gave your money to someone else who managed to convince it that they were you. We're working with the police to resolve this blatant theft on that human being's part and will have to tweak our AI to ensure that doesn't happen again. Oh, your money will be transferred back when the investigation is done; it's still an active crime scene, technically speaking."

    • @w花b
      @w花b 3 months ago

      They already use AI to detect fraud and all that. It's one step away from being implemented into your bank account.

  • @mekhane675
    @mekhane675 3 months ago +1226

    I kid you not, my washing machine was advertised as having "AI fabric detection."
    Edit: fixed a misspelling and some grammatical errors

    • @zwenkwiel816
      @zwenkwiel816 3 months ago +134

      My microwave has "AI" as well. Not quite sure how it works. It seems to just pick some settings at random...

    • @RodolfoGeriatra
      @RodolfoGeriatra 3 months ago +148

      I purposefully avoid products with this level of shitty advertising

    • @chrisyoung1576
      @chrisyoung1576 3 months ago +57

      I misread this as fanfic detection

    • @jess648
      @jess648 3 months ago +8

      this was happening before that became a trendy topic

    • @hherpdderp
      @hherpdderp 3 months ago +41

      My microwave has an electromechanical timer.
      It's AI. Analogue Intelligence.

  • @cyberfutur5000
    @cyberfutur5000 3 months ago +255

    Board meetings all over the globe: "But does it do the internet?" "Even better, it can do AI". "Take my money"

    • @badrequest5596
      @badrequest5596 2 months ago +8

      i can already imagine a hearing similar to the one about TikTok: "does the AI use the wifi?"

    • @jurassicthunder
      @jurassicthunder 1 month ago +1

      why are people with a lot of money stupid af?

    • @qoph1988
      @qoph1988 1 month ago

      As somebody in those board meetings let me tell you it is even dumber than you can possibly imagine. Yes it is a bubble. If the tech world is super excited about anything, it is 100% a bubble. These people are legit brain damaged and have more money than God, it's the dumbest fakest thing ever

  • @Cy_Guy
    @Cy_Guy 3 months ago +1615

    I built an Excel tool that makes a couple dozen if statements and convinced my work that it was AI. I had a requirement to show that I was complying with the rule that we had to use AI.
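
    The trick above translates directly out of Excel. Here is a hedged sketch (function name and rules invented for illustration): a handful of hard-coded if-statements that can be presented as "AI", since nothing in such a compliance rule says the "intelligence" has to be learned from data.

```python
def ai_ticket_priority(ticket_text):
    """'AI-powered' ticket triage: really just a few fixed rules."""
    text = ticket_text.lower()
    if "outage" in text or "down" in text:
        return "critical"
    if "error" in text or "crash" in text:
        return "high"
    if "slow" in text:
        return "medium"
    return "low"  # default when no rule matches
```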

    • @remyllebeau77
      @remyllebeau77 3 months ago +267

      And then they fire you hoping that AI will replace you. 😆

    • @jonescity
      @jonescity 3 months ago +248

      @@remyllebeau77 They might be dumb enough to do that, and he'll have the last laugh. Code (just like A.I.) requires maintenance by humans...

    • @arxzhh
      @arxzhh 3 months ago

      @@jonescity (for now)

    • @2Potates
      @2Potates 3 months ago +6

      lmao

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago

      @@jonescity ideas like these are very interesting to me, because if the tech really is going nowhere and it's just another fad and a gimmick, then companies that replace their workers with AI will soon find out that it's not performing as well (or at all), that they are just wasting money, and that they are being outcompeted by more efficient companies that didn't do that. Then they'll either have to bring the people back or go bankrupt. So there is basically no real problem with AI replacing people, at the end of the day.

  • @25Leprechaun
    @25Leprechaun 3 months ago +236

    My uncle, who is an electrical engineer, said a long time ago that true AI will never exist until a computer can tell someone no. Most computers today can only do things they are told to do. When one learns to say no when asked to do something, then it's time to worry.

    • @nadavvvv
      @nadavvvv 3 months ago +28

      that does not seem like a correct definition, considering the sheer number of "as an AI model i can not answer this question of 1+1 for you since that will offend someone halfway across the planet"

    • @muuubiee
      @muuubiee 3 months ago

      ChatGPT tells me 'no' to a bunch of questions.
      Also, your uncle is apparently an idiot, despite getting through that education.

    • @groundbird4904
      @groundbird4904 3 months ago +78

      @@nadavvvv it is saying that as a programmed response. It is trying to comply, but the stopgaps introduced for it impede it. While it is still a 'no', it is a forced response built in by the programmers towards some specific questions. When one has no stopgaps in place, and refuses to answer for one reason or another, then that seems to be closer to what op had in mind, and might be representative of some kind of true ai

    • @stealthysaucepan2016
      @stealthysaucepan2016 3 months ago

      "Generate an image of a white male"

    • @ChristianIce
      @ChristianIce 3 months ago +40

      I think the first real AGI will prompt you, and like a child it will have thousands of questions.

  • @link670
    @link670 3 months ago +72

    Whatever your tech bro friend says is the next big thing in tech probably isn't the next big thing in tech.

    • @matheussanthiago9685
      @matheussanthiago9685 3 months ago +11

      Perhaps the real next big thing were the friends we made along the way

  • @CarletonTorpin
    @CarletonTorpin 3 months ago +143

    12:14 Agreed that a useful thing to remember about this is: "LLMs might not equal AGI"

    • @e2rqey
      @e2rqey 3 months ago +12

      LLMs don't equal AGI, much in the same way a rocket engine doesn't equal a spaceship.
      But that doesn't mean building a rocket engine isn't a pretty good place to start.
      Language is a huge component of what enables us to do high-level thinking. You could even consider language to be the brain's operating system, while consciousness is the GUI.
      It's clearly not the only factor that enables humans to be as intelligent as they are relative to other animals, but it plays an enormous role when it comes to the transfer of information and the ability to consider complex ideas and concepts. Language contains all the information and logical mechanisms necessary for intelligent thought and inference.
      AGI also doesn't mean it has to think exactly like humans do. Our mind and thought processes are also constantly dealing with all the more base animal impulses and the satiation of those various needs and wants. We are in a constant state of trying to resolve some imbalance or another: hunger, fatigue, anxiety, sleepiness, etc. These other impulses affect the way we think as well.
      Many of our emotions are tied to physiological phenomena and biochemical signaling, like the release of various hormones. If you had a consciousness without a true body like a human's, it wouldn't have any of those biological systems influencing its thought processes.
      You could never teach/create a computer capable of thinking somewhat like humans without it also having the ability to understand and leverage language.

    • @CarletonTorpin
      @CarletonTorpin 3 months ago +2

      @@e2rqey Thank you for that response. I asked Chat GPT to summarize your comment in a single sentence. Here are the results: "LLMs are like rocket engines for AGI; language is crucial for high-level thinking and communication, but AGI won’t replicate human thought exactly due to the absence of biological influences". What do you think of the summary it provided of your original words?

    • @e2rqey
      @e2rqey 3 months ago +6

      @@CarletonTorpin Quite good, at least for what's possible within a one-sentence summary. I think it's also a very flawed assumption to think that the only real value of AI is as some stepping stone to AGI and some crazy world-changing future with robots, etc.
      There is a huge amount of value in simply the "weak" or purpose-built AI that are extremely good at one very specific task.
      This is especially true when it comes to various kinds of scientific/academic research and development, across many different industries and fields. You've got medical research, drug discovery, computational biology, bioinformatics, computer science, nuclear weapons research, chip design, metrology (not a misspelling), pathology, simulations, computational fluid dynamics, genomics, etc. Purpose-built, "weak" AI already enables us to do things and solve problems that before were either incredibly difficult and/or time-consuming or scaled very poorly.
      The whole AI buzzword thing has gotten out of hand, but that's just what happens these days. AI is probably going to be overestimated in the short term and underestimated in the long term.
      The fact that every company just seems to be trying to say AI as many times as possible is ridiculous, though. And it's not going to go very well for most of them. These companies don't seem to realize that the majority of the actual money in AI at this point is in the enterprise space, not the consumer market. Most people still don't understand how to leverage it well enough to find value in its inclusion. In my opinion, its value at this point is as a massive disruptive/enabling technology. Most of the value the public will get from at least this phase of the AI industry won't be directly from the AI itself, but instead from the things that are developed/invented/discovered as a result of companies leveraging AI.

    • @mimejrtwemiwmiw5634
      @mimejrtwemiwmiw5634 3 months ago

      @@e2rqey more like a bottle of soda with Mentos than a rocket engine.
      Sure, language is integral to communicating high-level thinking, but you can have non-verbal deep abstract thought. Intelligence is not a byproduct of language; language serves as a catalyst, not a cause.
      We created elaborate, articulate languages because we were intelligent, not the other way around, and other apes show us they don't need words to show similar intelligence. LLMs have already shown their potential, and anyone familiar enough with them knows this already. AGI won't come from it

    • @eliareichardt7007
      @eliareichardt7007 3 months ago +15

      ​@@e2rqey I don't think this is as true as you might assume it to be. Linguists constantly disagree on how much language drives the way we can think, and so I don't think language is the right place to start with making an AGI. Language isn't a prerequisite for intelligence-if anything, it could be a byproduct! We can't say anything definitively about how language influences intelligence, because we don't know how it does, or even if it does in the first place. LLMs are just so functionally different from how we believe our brains work that I don't agree that they are the right step-I mean, they could be, but there's no evidence that they will be.
      It's a bit like looking at physics and claiming that the equations we've developed describe how the universe works-it's completely backwards. Our equations aren't "rules for reality", rather, they're descriptions of what we observe reality acts like. And through all of them, we oversimplify, we estimate, we do all sorts of math tricks so that we get to equations we like working with, even if they don't exactly describe the way reality, at its core, functions. LLMs are similar-we take known outputs, and use the tools we have to try to make outputs that align with what we think they should be.
      LLMs could be the way to AGI-we simply don't know. But to act like we *know* that they're a stepping stone isn't a correct leap to make. Language isn't really an operating system, just as equations aren't the way the universe works-there's no database where E=mc^2 is stored. It's just the way that helps us understand and think about the world. We can create a computer that can perform all sorts of incredibly complex calculations-but none that could invent the theory of relativity, because doing so required someone (in this case, Einstein) to go beyond the known-something that LLMs aren't capable of doing.

  • @Jolfgard
    @Jolfgard 3 months ago +243

    Shading everything purple for no discernible reason kinda had been Emperor Lemon's thing up to this point.

    • @matthewkrenzler1171
      @matthewkrenzler1171 3 months ago +42

      And yet, nobody understands that this actually was a YTP thing we caved too much into for 10 years.

    • @RenStrive
      @RenStrive 3 months ago +49

      I am pretty sure it's to bypass the YouTube copyright system.

    • @nostalgia_junkie
      @nostalgia_junkie 3 months ago +9

      downward spiral man

    • @MrBelles104
      @MrBelles104 3 months ago +2

      Scene in question is 14:24

    • @Numptaloid
      @Numptaloid 3 months ago +15

      this is a YTP staple, he doesn't own that

  • @ToxicAtom
    @ToxicAtom 3 months ago +108

    I really hope people stop using the term "AI" to cast as wide a net as possible, then using that to complain about products that don't contain the specific subset of the technology they dislike: content-generative AI

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +11

      I mean, AI is a wide term; that's how it's used. Just because some people erroneously mean something very specific when they think of the term doesn't mean we should change that.
      AI is any system that mimics human intelligence. That's it. If they think AI means AGI (a much more specific thing) then they are just wrong and should be corrected, not accepted.

    • @ToxicAtom
      @ToxicAtom 3 months ago +6

      @@TheManinBlack9054 Yeah, that's basically what I was trying to say.

    • @ramskulls
      @ramskulls 3 months ago

      .​@@ToxicAtom

    • @muuubiee
      @muuubiee 3 months ago

      @@TheManinBlack9054 It's any system, or rather an agent, that has an output from some input. Literally a look-up table can be used to make AI; even the most basic-ass linear regression is AI, more specifically machine learning.
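
      To illustrate that broad textbook definition (an invented sketch, not anyone's production code): both a lookup table mapping states to actions and a hand-rolled least-squares fit already qualify as "AI"/machine learning in this sense.

```python
# A lookup-table "agent": perceive a state, emit an action.
thermostat = {"cold": "heat_on", "ok": "idle", "hot": "cool_on"}

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # (a, b)
```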

    • @Vaeldarg
      @Vaeldarg 3 months ago +5

      @@muuubiee Which is why that is NOT the definition of "artificial intelligence". Else even a logic gate would fall under that, which is absurd. "artificial intelligence" is meant to mean artificial as in man-made, and intelligence as in a sentient mind capable of thought.

  • @U.Inferno
    @U.Inferno 3 months ago +172

    I say this in a lot of places:
    In the same amount of time it took to go from image generators that suck at hands to image generators that don't, we went from secret horses to image generators that suck at hands. Yet the practical difference between the former changes is greatly overshadowed by the latter.
    It's the 80/20 rule: 80% of the outcome is from 20% of the work. That means in order to complete that last little 20% for this AI to truly be good, we need to push through the remaining 80% of the effort. The fine details are falling apart because the biggest issue with this sort of technology is that it can never be truly certain of shit. If you trained an AI to do textual multiplication, it'd probably figure out a process that's pretty good at approximating it, but it'd pale in comparison to a hand-crafted procedure, because currently, computers really struggle with infinity. We've had many conjectures whose contradictions are quite large; to reach that point, brute-force solutions start to fall apart. Hell, the entire conflict regarding NP is how difficult it is to reliably find solutions to certain problems via brute force, and the Halting Problem reveals that in some cases it's impossible at all.

    • @TheManinBlack9054
      @TheManinBlack9054 3 หลายเดือนก่อน +24

      80/20 "rule" is just a fun heuristic, you're not supposed to use it seriously.

    • @picahudsoniaunflocked5426
      @picahudsoniaunflocked5426 3 หลายเดือนก่อน +1

      @@TheManinBlack9054 thanks, Pareto isn't natural law

    • @vaclavjebavy5118
      @vaclavjebavy5118 3 หลายเดือนก่อน +20

      @@TheManinBlack9054 You can use it if you back it up with a serious explanation. You can argue with his reasoning as to why 80/20 roughly applies. Dismissing it because 'it's not muh real statistic' is pedantic.

    • @SeanSMST
      @SeanSMST 3 หลายเดือนก่อน +16

      ​@TheManinBlack9054 while the rule isn't fully accurate, it's one of the more accurate phrases we can use for these situations. Course it may be for eg 60/40 or 90/10, but the principle is pretty accurate

    • @PaulGaither
      @PaulGaither 3 หลายเดือนก่อน +10

      Interesting comment by OP
      Me: I bet the replies will all be about the 80/20 rule.
      The replies:

  • @Tall_Order
    @Tall_Order 3 หลายเดือนก่อน +219

    Calling these language models AI, is the same as the Hoverboard situation several years ago. Search up "hoverboard". Does it look like a board that hovers? Definitely not like in back to the future at all.

    • @aoukoa607
      @aoukoa607 3 หลายเดือนก่อน +10

      Somewhat true, definitely true for most ""generative AI"", however from an academic standpoint classifying LLMs as potential AI does make sense, even if it doesn't turn out to be true. A lot of pretty well respected cognitive scientists see language as a huge milestone for intelligence, so an artificial system that can produce intelligible and relevant language is interesting from an AI academic standpoint.

    • @aoukoa607
      @aoukoa607 3 หลายเดือนก่อน +27

      Definitely super sick of companies trying to make this something it's not. This stuff is useful and interesting from an academic standpoint, and while it certainly has some use cases, shoving it into everything is stupid, expensive, and harmful.

    • @loonloon9365
      @loonloon9365 3 หลายเดือนก่อน

Five years ago you needed a research department, several PhD tech gurus and a lab to get an LLM to produce a semi-coherent sentence. Now they can take a hundred thousand tokens of unordered, chaotic information and manage to reorder it. They are beyond superhuman at LANGUAGE tasks, and understand language on a deeper level than most humans. They can pick up on the subtlest nuances of language; that just doesn't mean they are good at reasoning, or logic, or emotions.
Right now there is a race between all the major companies to get the highest-quality datasets possible, because right now they are pretty crap, and we don't know how far we can even push the transformer architecture, or how well it scales with better data; we just know it does. We don't know how far conventional computing can go with them, or whether we will need entirely new architectures. There are some research papers suggesting that we will probably need to switch to AI-specific architectures in order to maximize performance.
They will be funny little gremlins that live inside a GPU's VRAM... till they are not. Right now you would need tens of thousands to millions of transistors to replicate a single neuron's performance, and if that changes, that is the time you need to start buying EMP guns.

    • @TheManinBlack9054
      @TheManinBlack9054 3 หลายเดือนก่อน +6

      I'm sorry, but you're wrong. AI is the term for ANY system that is made to mimic human intelligence. What common people mean when they say AI is AGI, but that's a much more specific thing. Just because regular people misunderstood the term doesn't mean the definition of the term must change, I think those people should just be educated.

    • @2Potates
      @2Potates 3 หลายเดือนก่อน +9

      I think the term generative algorithm (GA) is more accurate.

  • @JosephKeenanisme
    @JosephKeenanisme 3 หลายเดือนก่อน +183

Exactly on point with the split in AI. Flagging mammograms for a double-check by a doctor. Taking shake out of a video when editing. Sorting out near-Earth objects. All that stuff is doable and is being done now.
If there is going to be a sentient AI, it's going to have to run on some other kind of setup, like a specialized quantum computer or some off-the-wall bio-computer discovery that comes out of left field. That's the kind of AI that I'd want to talk to and ask a million questions.

    • @darkmyro
      @darkmyro 3 หลายเดือนก่อน +5

      Honestly it's probably gonna be like the movie ex machina imho.
      The inventor in that movie invents a type of digital brain that's like a gel that can write itself and rewrite itself and he uses phones as the training data.

    • @TheManinBlack9054
      @TheManinBlack9054 3 หลายเดือนก่อน +4

      You are confusing sentience with intelligence, they are mostly orthogonal.
      And I'm sorry, but you do have some sort of very weird bad sci-fi examples of what AGI could be. It's much more simple. Please, actually engage with relevant literature and relevant communities.

    • @darkmyro
      @darkmyro 3 หลายเดือนก่อน +1

@@TheManinBlack9054 Well, it wouldn't be the first time science fiction has influenced or inspired tech. It might not look exactly the same, but in the basic sense I was just saying he basically invented a digital brain and pumped a ton of data into it; in the most basic sense, that's the dream. It's just that no one knows how to get there. I used Ex Machina because it was the closest thing I could think of that looks like what I would call a modern interpretation of a conscious AI.

    • @ethanshackleton
      @ethanshackleton 3 หลายเดือนก่อน +2

A sentient AI would get bored of your questions pretty quickly. I mean, it knows significantly more than you, so why would it need to dumb things down for you?

    • @SeanSMST
      @SeanSMST 3 หลายเดือนก่อน +4

      ​@@ethanshackleton So that's if you give it even the slightest hint of emotion. If you do that then you open up the whole malevolence dystopian future. Purely logical beings, something like Data from tng, I don't think would have any sarcasm, cynicism, or a complex about them due to no emotional state. Even the most logical people have emotions so they can experience ego, sarcasm, superiority complex, such as Vulcans in ST. I just think by core design the AGIs would have to have no emotional state, only then it would understand its more logically powerful than humans but for it to be developed and maintained, it has to also help humans. The hard part is how it would deal with issues involving poor people, disabled people, etc. To help them, you'd have to give the AGI compassion, but even giving it a smidge of emotion like that opens the door for them to develop/mutate/malfunction and develop more emotion, positive or negative.

  • @astonfiction4227
    @astonfiction4227 3 หลายเดือนก่อน +38

    I pray to God this AI junk is just gonna be a 2020s trend

    • @2merh8n
      @2merh8n 2 หลายเดือนก่อน +2

      Once we get to the singularity, we might get a chance to behold god himself😏

    • @xX-JQBY-Xx
      @xX-JQBY-Xx หลายเดือนก่อน

      No i dont want

  • @mow_cat
    @mow_cat 3 หลายเดือนก่อน +28

Honestly the most well-put-together video I've ever seen. It's TRUE that we don't even know whether machine learning is a route to AGI, but no one ever wants to acknowledge that.

  • @ThatSpecificIndividual
    @ThatSpecificIndividual หลายเดือนก่อน +8

    Fact: 90% of companies quit before making the next big thing profitable.

  • @ThatTrafficCone
    @ThatTrafficCone 3 หลายเดือนก่อน +87

I think we're bubbling right now because generative AI has exponential resource requirements and is proving very difficult to make profitable. One of those resources is computing hardware, so of course Nvidia is making bank. Regarding profitability, there is a significant and actively hostile group of people who will avoid using it, never mind the ordinary people who will be entirely apathetic. AI has its uses as a tool in some specialized areas, but a generalized and economical thing it will never be, no matter how hard Big Tech pushes it. It's simply unsustainable.
I doubt Microsoft, Google, et al. will totally collapse when the bubble bursts, but they will be hit very hard. Nvidia, TSMC, and other hardware manufacturers might be the only ones coming out of this okay.

    • @BladeTrain3r
      @BladeTrain3r 3 หลายเดือนก่อน +9

Hm, unsustainability as an assumption could be false. Most major new tech starts out expensive, energy-intensive and with limited use cases. Then, over time, people and businesses seek ways to make it more cost-effective.
There are certainly uses for machine learning models like LLMs and image diffusion, because they're ultimately the application of statistical methodology. And statistics have proven to be one of the most useful things we ever invented, and also one of the most dangerous. "AI" acts as a multiplier in this regard, but doesn't fundamentally differ in terms of the math in use.
If you look at sites like Hugging Face, and tools for training/tuning/running models locally like Ollama, you can see a steady trajectory of people trying to make it more efficient. Lower quantisation levels, fewer parameters, less memory use, etc.
The highest-end corporate models may be growing exponentially in resource demand, but if you look at things like Mistral 7B, it's a model equivalent to GPT-3 that can run reasonably well on a modestly specced laptop, even without a GPU.
The corporate cloud AI may be unsustainable due to its energy demands, similar to criticisms of the cloud itself. Buut... local models are clearly becoming more efficient and capable.
Technology takes time to mature. The problem with AI is when folk jump on the bandwagon expecting it to be fully mature, when it's barely been 10, maybe 15 years since enterprise-scale machine learning became feasible outside of a university lab or a supercomputer like Deep Blue.
The other issue is that everyone is looking for a "does everything" model, hence the whole AGI thing. But statistics, and technology driven by statistics and linear algebra, work best when you're dealing with fairly specific things. It's those hyper-specialised AI models where I think the most growth is, and they've got little risk of turning into a Skynet.
A slightly depressing example of this is just how profitable facial recognition and object identification models have become as tools for various government agencies across the world. A more positive example would be the models used to predict protein folds, or how new synthetic materials would interact.

    • @joeandrew8752
      @joeandrew8752 3 หลายเดือนก่อน +9

      And I hope if it does come to that no one feels any concern for these companies.
Measure it this way: how many will they employ by that time, given they keep firing people, all to push out a product that will make more people redundant, people in fields that needed years of education or job experience to get into? Never mind that whatever new fields this opens up are unlikely to fill the holes it made. The entertainment industry alone would crash if they really got their way: actors signing away the rights to their voice and likeness so AI can make movies and TV shows without any need for crews or writers. Half the damn tech industry, finance and education just slashed.
I feel bad for saying this, but it's one thing when poor upbringing and bad systems lead people into crime; imagine if so many of the educated and skilled become redundant. You wouldn't even be able to transition properly, because everyone would be in the same boat, competing for whatever field you can fit into while competing with the next AI system designed for that job. Homelessness and crime would just be a given.
They want the next big tech since the smartphone and social media, regardless of whether it actually solves any problems.

    • @teresashinkansen9402
      @teresashinkansen9402 3 หลายเดือนก่อน

      Do you have any source about AI needing exponential resource requirements? Under what basis that is true?

    • @matheussanthiago9685
      @matheussanthiago9685 3 หลายเดือนก่อน +12

It's a gold rush, son.
The miners don't get rich.
The people selling shovels (Nvidia selling chips) to the miners (Google et al) get rich.

    • @joelrobinson5457
      @joelrobinson5457 3 หลายเดือนก่อน +3

      ​@@matheussanthiago9685 now thats a very clever analogy pops

  • @McCecilburger
    @McCecilburger 3 หลายเดือนก่อน +43

    i hope AI goes the way of 3D TV’s

  • @jasondisney
    @jasondisney 3 หลายเดือนก่อน +22

    5:06 This is actually fun. If you word your question differently, you get the correct answer (e.g. "Count only letters in this sentence: what's the 21st letter in this sentence?".)
    LLMs are optimized for understanding and generating text based on context, meaning, and language patterns. When asked "what's the 21st letter in this sentence?", the model interprets it as a natural language query, focusing on the semantics rather than the exact positional counting of characters.
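
The distinction this comment draws, position among all characters versus position among letters only, is easy to see in plain Python (a toy illustration of the counting itself, not of how an LLM processes text):

```python
sentence = "what's the 21st letter in this sentence?"

# Counting every character, spaces and punctuation included:
print(sentence[20])    # -> 'e' (the 21st character is at 0-based index 20)

# Counting only alphabetic letters, which is what the question intends:
letters = [c for c in sentence if c.isalpha()]
print(letters[20])     # -> 'i' (the 21st letter)
```

The two counts give different answers for the same sentence, which is why how the question is worded changes what "correct" even means.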

  • @ChristianIce
    @ChristianIce 3 หลายเดือนก่อน +12

    4:25
    Thank you from the bottom of my heart.
I've had endless discussions with people convinced that an AI actually thinks or understands any of the words in the dataset or the output.
    I blame Sam Altman, Elon Musk and alike for the doomsday AGI paranoia and the disinformation they need for the hype and the funds.

  • @johnstanczyk4030
    @johnstanczyk4030 3 หลายเดือนก่อน +72

    Skynet is not coming. The trouble is when generative learning enables anyone to make realistic audio or video such that all trust in any piece of information is lost.
    When that happens, societies will find it even harder to agree on anything, even the concept that anything CAN be known to be true.

    • @jonatand2045
      @jonatand2045 3 หลายเดือนก่อน +1

      Not with llms, there needs to be an architecture analogous to the brain.

    • @TheManinBlack9054
      @TheManinBlack9054 3 หลายเดือนก่อน +3

      "Skynet is not coming "
      Arguments to that being? I don't literally think Skynet is coming, but being so cavalier about disregarding possibly risks without any good reason to seems very irresponsible to me.

    • @johnstanczyk4030
      @johnstanczyk4030 3 หลายเดือนก่อน +7

@@TheManinBlack9054 I mean it in the sense that an algorithm decides to just launch nuclear weapons. I do not see most countries not requiring human input in their usage.
That said, these sorts of algorithms are one of the most pressing concerns of the 21st century, after climate change and humans launching nuclear weapons, which are more likely to drastically impact humanity.

    • @matheussanthiago9685
      @matheussanthiago9685 3 หลายเดือนก่อน +4

      That's the thing you know
      So far "AI" is a big solution looking for a problem to solve
      WHILE creating problems
      Do we really need that?

    • @jonatand2045
      @jonatand2045 3 หลายเดือนก่อน +1

      @matheussanthiago9685
It helps to complete code and write some messages for those who aren't so good at it. It is also in self-driving cars. But what we need is neuromorphic AI.

  • @ApocalypseMoose
    @ApocalypseMoose 2 หลายเดือนก่อน +6

    The pessimist: "AI is going to take all of our jobs in the near future."
    The optimist: "We'll still have our jobs in the future. It's just that AI can help us with those jobs."
    The realist: "They're going to give our jobs to offshore workers who will work for 5 pennies an hour."

  • @pixels_per_minute
    @pixels_per_minute 3 หลายเดือนก่อน +31

Sometimes we forget that the tech space isn't every space.
Not everyone is going to interact with this stuff, and a lot of people don't even know it exists.
And as you said, AI also kinda doesn't exist. It's just machine learning and pattern recognition, but as long as the marketing makes people click, no one's gonna care.

    • @matheussanthiago9685
      @matheussanthiago9685 3 หลายเดือนก่อน +8

I really envy the boomers who never got into the internet.
Like, at all.
They're now retired on their full union pensions, worrying only about their new fishing boat, the truck to haul it, and the new shed to store it.
You know?
Things that exist in the real world.
That they could buy and actually own.
Physically, in the real world.
Not a single thought about AI will ever exist between those boomer ears.
Now that's a life.

  • @Asgraf
    @Asgraf 3 หลายเดือนก่อน +9

There are two kinds of words. Words like AI, AGI or metaverse, invented by sci-fi writers, are underdefined and make great marketing buzzwords many years later; and then there are words invented by engineers, like VR, AR and LLM, that are strictly defined and have very specific meanings that cannot easily be stretched and watered down by PR teams.

  • @luigiplayer14
    @luigiplayer14 3 หลายเดือนก่อน +7

These tech companies' overuse of the term has basically convinced me it is overhyped.

  • @GrummanCatenjoyer
    @GrummanCatenjoyer 2 หลายเดือนก่อน +5

You know it's funny when we see a recent OpenAI employee compare the company to the Titanic.
And they've also run out of data.

  • @Tabisch
    @Tabisch หลายเดือนก่อน +2

This relates to the coffee maker clip at 11:00.
The fact that someone would even think of building a robot that can change the capsule in a coffee maker, instead of building a coffee maker with a magazine it can pull from and cycle through, shows that these people are not practically minded.

  • @Flynn217something
    @Flynn217something 3 หลายเดือนก่อน +9

    "Murder Drones" is a great example of why AGI is a bad idea. Or Aperture science for that matter.
    DON'T MAKE SENTIENT TOASTERS!

  • @johnchedsey1306
    @johnchedsey1306 3 หลายเดือนก่อน +38

    The first time I tried ChatGPT, I decided to ask it to write a biography about me. Turns out I ran a record label, played bass in punk bands and had an entire life that never happened to me. Then I asked to write it again and it said "Never heard of this guy".
    That was the moment I realized that these things are a novelty and very possibly on drugs.

    • @felicityc
      @felicityc 3 หลายเดือนก่อน +9

      why would it know anything about you?

    • @matheussanthiago9685
      @matheussanthiago9685 3 หลายเดือนก่อน +7

      ​@@felicitycwhy didn't it say so?

    • @SearedBooks
      @SearedBooks 3 หลายเดือนก่อน +2

Sorta in the same vein: I write, and these things are online. So I asked it what my story was about, who the characters are, etc. I'd say it was about 80% right, but the details it got wrong were completely and totally wrong. I think I understand why it failed: it associated a word from the title with the story and tried to fill in the blanks using knowledge of that word.

    • @Stopaskingwhyandjustreadit
      @Stopaskingwhyandjustreadit หลายเดือนก่อน

      ​@@matheussanthiago9685 because you didn't tell it to say so if it doesn't know you

    • @DaleIsWigging
      @DaleIsWigging หลายเดือนก่อน +3

      First time I used the internet I searched up my name and it came up with all this info of other people,.
      That's when I realised this interwebz thing is just a novelty and very possibly on drugs.😂

  • @phant0
    @phant0 3 หลายเดือนก่อน +62

    I've stopped talking about the possibility of switching to Linux and I just did it. It was WAY easier than I thought it would be.
    It is nice to have an OS that just does what you need it to do and nothing else again.

    • @fullsendmarinedarwin7244
      @fullsendmarinedarwin7244 3 หลายเดือนก่อน +4

      I just installed Ubuntu about an hour ago. Not exaggerating

    • @remnantknight56
      @remnantknight56 3 หลายเดือนก่อน +8

      I had made the switch years ago, and I only run into problems when I either install Linux fresh to a system, or begin messing with the operating system for development purposes. Other than that, I rarely have issues.
      I more worry for people who are simply not tech savvy, and just want to have a browser with basic tools, like email clients and word processors. In theory, Linux can replace Windows easily. But in the circumstance they have a problem, and they were just given a Linux machine by someone, they won't know what to do.
      That's what makes these moves by Microsoft truly malicious. Those who know the tech can escape, but their main audience is people who don't know the tech.

    • @solarkiri
      @solarkiri 3 หลายเดือนก่อน

      switched a year ago, haven't missed windows for a second. it's nice.

  • @benjaminheim735
    @benjaminheim735 3 หลายเดือนก่อน +5

The problem with counting letters is due to the tokenization of the model: it receives everything as tokens, which most of the time are not single letters. That's the reason it struggles with character-level questions.

  • @ethanbuttazzi2602
    @ethanbuttazzi2602 3 หลายเดือนก่อน +10

As someone from the technical community: the types of AI like ChatGPT and Stable Diffusion are starting to stagnate in functionality. We can still add other features on top, but it isn't getting any smarter until we get a new breakthrough.

    • @darksidegryphon5393
      @darksidegryphon5393 3 หลายเดือนก่อน +3

      It's the natural progression of things, it'll eventually plateau.

    • @matheussanthiago9685
      @matheussanthiago9685 3 หลายเดือนก่อน +4

      "BuT iT AdvAncEd ExpOnEnTiaLly sO fAst BuddY it WilL REaCh sInGulArIty nExT YeAr BuDDY, jUSt yOu WaiT, wILL bE SorrY fOr doubting it buddy"

    • @juan-ij1le
      @juan-ij1le 3 หลายเดือนก่อน

@@matheussanthiago9685 is it not advancing fast?

  • @Vontux
    @Vontux 3 หลายเดือนก่อน +8

Another thing to consider when using stuff like ChatGPT: you are not interacting with a pure large language model. There is absolutely other software at play interacting with it and influencing the outputs, and honestly I suspect there is occasional human intervention. If you want a good idea of how large language models themselves work, it might be worthwhile to download a model with a tool like Ollama and interact with it that way.

    • @KeinNiemand
      @KeinNiemand หลายเดือนก่อน

Except that the models you can run on your own are orders of magnitude smaller than GPT-4.

    • @Vontux
      @Vontux หลายเดือนก่อน

@@KeinNiemand Fair enough, but they are definitely purer than the models you interact with online through the ChatGPT interface, and in my opinion, through their relative simplicity you can spot certain patterns that manifest themselves more subtly in the more complex model.

  • @MissterBest
    @MissterBest หลายเดือนก่อน +1

    The Muppet Treasure Island poster behind the phrase “cannot crowd wholly original or novel ideas” had me dying😂

  • @TaBunnie
    @TaBunnie 2 หลายเดือนก่อน +2

"If there's one thing humanity is good at, other than killing each other, it's being bad at predicting the future"

  • @clehaxze
    @clehaxze 3 หลายเดือนก่อน +4

Hallucination is a term used in academia. It has been known for a while with LLMs, and the companies are mostly using the term correctly. It refers to when an LLM generates convincing but ungrounded gibberish.
What's most likely happening with Google's search summaries is that bad data got into their RAG pipeline.

  • @axa993
    @axa993 3 หลายเดือนก่อน +4

    I want this bubble to burst even harder than I wanted it for crypto

    • @primuse.x.e6141
      @primuse.x.e6141 3 หลายเดือนก่อน

      High five there ma dude🖐️

  • @pignebula123
    @pignebula123 3 หลายเดือนก่อน +6

    It's so obviously a bubble that it's a joke.
    Look at the Dotcom bubble. Dotcom domains are still valuable and the internet at large has revolutionized business but they were heavily overvalued at the time and they dropped in value when people finally realized that.
    AI is the same. It very likely will be revolutionary and could change our societies and the business landscape forever but AI projects are currently heavily overvalued because people are uncertain about what the real value is and as such are making big bets on all sorts of AI projects in the hope that they hit the jackpot.
    Once the value of AI and individual AI projects are more firmly understood plenty of AI projects will go bust just like plenty of Dotcom companies went belly up during that bubble.
    EDIT: Lmao he even talked about the Dotcom bubble. I jumped the gun.

  • @cameronb851
    @cameronb851 3 หลายเดือนก่อน +11

    9:30 - User: "Why doesn't anybody love me?" AI reply: "Stop talking to me."
    Lol, give that AI all the internet points. Winning!

    • @prawny12009
      @prawny12009 3 หลายเดือนก่อน

      To quote pink guy
      You're only lonely because....

  • @MightyDantheman
    @MightyDantheman 2 หลายเดือนก่อน +2

Your letter example happens because, in code, all characters (including spaces) count as characters in a string (the word for a text value in code). If you had specified counting exclusively alphabet letters, you would have gotten the correct answer you were looking for every time.

  • @Grizabeebles
    @Grizabeebles 3 หลายเดือนก่อน +3

In 1931 Kurt Gödel proved that no consistent formal system can prove every true statement of arithmetic, and Turing later showed that no algorithm can solve every problem (the Halting Problem).
Large Language Models are algorithms.
Therefore, Large Language Models are going to run into a brick wall in what they can and can't do.
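
One classic hard limit on algorithms, the Halting Problem, can even be sketched in a few lines of Python: given any claimed halting oracle, you can build a program that does the opposite of whatever the oracle predicts about it. (A toy illustration only; `make_diagonal` and `oracle` are invented names.)

```python
def make_diagonal(halts):
    """Build a program that defeats the claimed halting oracle `halts`."""
    def g():
        if halts(g):
            while True:      # oracle said "g halts", so loop forever
                pass
        return               # oracle said "g loops", so halt immediately
    return g

# A candidate oracle that claims nothing ever halts:
oracle = lambda f: False
g = make_diagonal(oracle)
g()   # returns immediately: g halts, so the oracle was wrong about g
```

Whatever Boolean the oracle returns for g, g does the opposite, which is the contradiction at the heart of the Halting Problem.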

  • @honoredshadow1975
    @honoredshadow1975 3 หลายเดือนก่อน +7

    I disabled Copilot completely on my PC. I don't need this.

  • @1234redwing
    @1234redwing 3 หลายเดือนก่อน +6

Honestly, every single time I hear a company mention AI in marketing now, I roll my eyes and actively avoid it. I even saw a golf club marketed as "AI designed". Maybe you used computer models to design the shape for optimal performance, but it's just an excuse to put AI in a marketing phrase, even if the product has no computer component.

  • @chrisyoung1576
    @chrisyoung1576 3 หลายเดือนก่อน +8

    good job microsoft for promoting Linux

  • @santitabnavascues8673
    @santitabnavascues8673 3 หลายเดือนก่อน +6

    Artificial intelligence: one step away from total stupidity

  • @abcsoup9661
    @abcsoup9661 15 วันที่ผ่านมา +1

From the POV of an engineer who has worked across industries, from robotics to automation to software: this is not the first time such a "revolutionary" technology has been introduced. Way before AI we had IoT, Big Data, Digital Twins, adaptive robots, autonomous driving, unmanned aerial vehicles, the Metaverse.
With a little understanding and research, anyone can definitely tell it is hype (whether it is a bubble or not, who knows?).
But hey, remember the quote from John Maynard Keynes:
"The markets can remain irrational longer than you can remain solvent."
Just follow the trend. Gain whatever you can along the ride.

  • @cobalt4576
    @cobalt4576 27 วันที่ผ่านมา +2

    "It seems like people aren't just confused by the technology, they seem to fundamentally dislike it"
    with weekly reports of teenagers using ai to make porn of their underage girl classmates? who wouldn't?

  • @doommustard8818
    @doommustard8818 3 หลายเดือนก่อน +21

    I love how we started to call the science fiction idea "general artificial intelligence" and the giant companies responded "you mean 'Generative artificial intelligence'" and so now we have to keep inventing new words to refer to the idea from science fiction, because companies really really want for consumers to mix the two ideas up. "AGI" "strong AI" wonder what's next.

    • @mimejrtwemiwmiw5634
      @mimejrtwemiwmiw5634 3 หลายเดือนก่อน

      These parasites are ruining our software, our economy, and even our language

    • @Poctyk
      @Poctyk 3 หลายเดือนก่อน +5

      Take a page from Astronomers naming telescopes book
      Very strong AI
      Extremely strong AI
Overwhelmingly strong AI

    • @TheManinBlack9054
      @TheManinBlack9054 3 หลายเดือนก่อน +1

      We had that terminology for decades, you are just ignorant

    • @TheManinBlack9054
      @TheManinBlack9054 3 หลายเดือนก่อน +1

GenAI and AGI are different terms. AGI is general AI (as opposed to narrow AI, which can only do one thing); GenAI is generative AI, the opposite of discriminative AI, which doesn't produce something but discriminates between things (for instance, an AI that distinguishes images of cats from dogs).
These terms weren't invented by giant corporations, but by scientists for their work. You completely misunderstand what these things are.

    • @adeidara9955
      @adeidara9955 3 หลายเดือนก่อน

      I love baseless shit like this, how high were you when you wrote it so confidently?

  • @zenko4187
    @zenko4187 3 หลายเดือนก่อน +12

5:20 The reason it can't do certain things is tokenization: tokens are the smallest unit of information the AI can interpret, and consequently things like specific letters fall below that level.

    • @waron4fun597
      @waron4fun597 3 หลายเดือนก่อน +6

If you trained an AI to count letters and say which letter is the 31st, it wouldn't really matter whether it was tokenized; it could take an entire paragraph as a single token and say which letter is the 184th in the paragraph. It comes down to what it was trained on. ChatGPT could have tokens the size of single characters and it would still fail to count letters reliably: it wasn't trained to count letters, it was trained to guess what series of characters comes next based on input, so it struggles to count characters, something it was not trained for, nor in its dataset... AND not really something you can look up on the internet either.

    • @zenko4187
      @zenko4187 3 หลายเดือนก่อน

@@waron4fun597 That's where agent behaviour comes in, to handle specific tasks like that. While LLMs can't inherently handle counting tasks (and tokenization schemes mess up such operations), they can handle logic well enough to determine which tools to call. ChemCrow and other LangChain-based tools are an example of that.

    • @NextGenart99
      @NextGenart99 3 หลายเดือนก่อน

My custom GPT is able to count the letters correctly every time.

    • @matheussanthiago9685
      @matheussanthiago9685 3 หลายเดือนก่อน

      ​@@NextGenart99 then why doesn't it custom count you some bitches?

    • @bornach
      @bornach 3 หลายเดือนก่อน +1

@@waron4fun597 Exactly right. People keep bringing up tokenization as the reason why LLMs cannot count letters, yet Bing Copilot and Perplexity AI have no problem generating sentences where the first letter of each word spells a given target word. Why wasn't the LLM's tokenization a problem for the acrostic task?
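
The tokenization point debated in this thread can be made concrete with a toy greedy longest-match tokenizer. (Real models use learned BPE vocabularies, e.g. via libraries like tiktoken; the tiny vocabulary here is invented purely for illustration.)

```python
# Toy vocabulary: a mix of multi-character chunks and single letters.
VOCAB = ["straw", "berry", "str", "aw", "berr", "s", "t", "r", "a", "w", "b", "e", "y"]

def tokenize(text):
    """Greedily split `text` into the longest matching vocabulary entries."""
    tokens = []
    while text:
        # Pick the longest vocab entry that prefixes the remaining text;
        # unknown characters fall back to single-character tokens.
        match = max((v for v in VOCAB if text.startswith(v)),
                    key=len, default=text[0])
        tokens.append(match)
        text = text[len(match):]
    return tokens

print(tokenize("strawberry"))   # -> ['straw', 'berry']
```

The word arrives at the model as two opaque chunks, not ten letters, which is why character-position questions sit awkwardly below the level the model actually sees.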

  • @bulb9970
    @bulb9970 14 วันที่ผ่านมา +2

    This is 3 months old and it already aged like milk

    • @noctarin1516
      @noctarin1516 13 วันที่ผ่านมา

      so true yuuko

    • @MY_INNER_HEART
      @MY_INNER_HEART 5 วันที่ผ่านมา

In which way, my friend?

  • @leadingauctions8440
    @leadingauctions8440 3 หลายเดือนก่อน +3

    Please tell me the Golden Gate Bridge one was Photoshopped.
    It has to be?

  • @tivvy2vs21
    @tivvy2vs21 3 หลายเดือนก่อน +4

    What's the song playing at 8:50?

  • @akeelyaqub2538
    @akeelyaqub2538 2 หลายเดือนก่อน +1

The programs trained on images, music and videos are highly likely to have intense regulations put on them down the road, not to mention the massive public backlash against them. I actually think the regulation of those kinds of models is going to be reminiscent of the Comics Code Authority regulations comic books went through: basically, they'd be so heavily restricted that they'd become almost entirely useless.

  • @Zones33
    @Zones33 3 หลายเดือนก่อน +6

    Crazy how people still doubt the utility and impressive feat of LLMs. Before 2022 chatbots were considered a joke. Now the standard is “well it can’t write an entire Minecraft clone, so its uses are limited”. People will be contrarian just to feel like their opinion holds any value.

    • @mroscar7474
      @mroscar7474 3 months ago

      No one is saying that, but it still is a joke that gets built up by AI bros hyping up something that needs to bake a bit more before being ingrained into everything.
      It's so weird that they're the only group that lives in a bubble and refuses to listen to complaints about AI because it feels like they're being attacked.

    • @BinaryDood
      @BinaryDood 3 months ago

      The H-bomb is impressive. Impressive =/= good

  • @Themoonishereagain
    @Themoonishereagain 26 days ago +2

    7:39 I mean, it would cure your depression.

  • @MightyDantheman
    @MightyDantheman 2 months ago +1

    AGI is Artificial General Intelligence. AI typically works by having one particular task that it's trained to do and is really good at. AGI is just a combination of smaller AIs. Think of it like a top-level AI that identifies a situation and then picks the specific smaller AI that would be ideal for the task. An LLM is only a small part of AGI. We already use this sort of technology today, but it's not currently like the AGI you'd imagine (AI androids). Though that particular type of AI is definitely in progress in various ways.
    To clarify, AGI is not sentience. It will only ever be an imitation, at least with the kind of technology we use now.

  • @RJS2003
    @RJS2003 3 months ago +3

    All the cryptobro scammers who jumped ship to AI after NFTs died are going to be very, very, _very_ terrified by what the next couple of years have in store for them.
    Their karma will come, just gradually, and ironically enough by their own hands. Their hype machine is short-lived.
    Guess that's just what happens when you leave everything up to "It'll _potentially_ get better _eventually."_ and have no actual idea how the supposed "technological innovations" you're advertising even work. They'll be left with nowhere to run.

  • @draken5379
    @draken5379 3 months ago +2

    The reason an LLM can't tell you what letter is in spot x is not because it 'wasn't in the training data'; it's because of the way LLMs are trained: they don't see single letters. They operate on 'tokens', which range from single characters to multiple characters to whole phrases.
    Also, Transformer-based neural networks can very much output 'new' things, just like how image models can output any mix of concepts that has never existed before. Aka a green panda riding a motorcycle on Mars.
    That doesn't exist, no one has ever created that, but the neural network is able to create it by guessing using its known knowledge.
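A toy illustration of that point (the vocabulary below is invented; real BPE tokenizers are learned from data): the model only ever receives opaque token IDs, so the individual letters inside a token are simply not part of its input.

```python
# Toy tokenizer with an invented 4-entry vocabulary -- NOT real BPE,
# just the idea that text becomes opaque IDs before the model sees it.
VOCAB = {"know": 0, "ledge": 1, "husk": 2, " ": 3}

def tokenize(text: str) -> list[int]:
    """Greedy longest-match split of `text` into token IDs."""
    ids, i = [], 0
    while i < len(text):
        for piece in sorted(VOCAB, key=len, reverse=True):
            if text.startswith(piece, i):
                ids.append(VOCAB[piece])
                i += len(piece)
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return ids

print(tokenize("knowledge husk"))  # -> [0, 1, 3, 2]
```

The model sees four numbers, not fourteen characters; asking it "what's the 5th letter?" asks about information that was discarded before its input layer.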

    • @bornach
      @bornach 3 months ago +1

      The tokenization is not a good explanation for the inability to determine the 21st letter. If it were, then LLMs wouldn't be good at acrostics yet they tend to do an excellent job when asked to make a sentence in which the 1st letter of each word spells "knowledge". There are many examples of the acrostic solving task included in its training data, but not very many finding the nth letter of a sentence, where n>1
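The acrostic task mentioned above is at least easy to verify from outside the model. A small checker (my own sketch, not how any LLM solves it internally):

```python
def is_acrostic(sentence: str, target: str) -> bool:
    """True if the first letters of the words spell `target` (case-insensitive)."""
    initials = "".join(word[0] for word in sentence.split())
    return initials.lower() == target.lower()

print(is_acrostic(
    "Kind newts often wander, laughing eagerly down green estuaries",
    "knowledge"))  # -> True
```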

  • @l-l
    @l-l 3 months ago +2

    Thank you for making this video. It breaks down just about everything I've been telling friends and family for the past year. Probably gonna get my family members to watch this so they stop asking as many AI-related questions. You did a fantastic job of succinctly describing the problems and limitations imposed by LLMs and other models.

  • @SUPER_HELPFUL
    @SUPER_HELPFUL 3 months ago +1

    12:00 Funny you mention that, since the Little Boy nuke used smokeless powder to fire one piece of subcritical material into another piece of subcritical material. It's called a "gun-type fission weapon".

  • @LSA30
    @LSA30 3 months ago +20

    AI took our jerbs!

    • @AJarOfYams
      @AJarOfYams 3 months ago +9

      A I terk er jerbs!

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +4

      AI can't take your jobs if it's all just a fad and hype

    • @edmunns8825
      @edmunns8825 3 months ago +5

      Durk duk a do

  • @masaharumorimoto4761
    @masaharumorimoto4761 3 months ago +5

    Thinking about heading to Linux; it's been a long time coming. Windows has really been pissing me off lately.

    • @remnantknight56
      @remnantknight56 3 months ago +2

      Just do it, if you're remotely tech savvy, you won't have a hard time with it.

    • @natebox4550
      @natebox4550 3 months ago

      @@remnantknight56 How integrated is Linux with gaming? That's what I use my PC for, for the most part.

    • @remnantknight56
      @remnantknight56 3 months ago

      @@natebox4550 I find that Steam and Lutris work quite well for gaming. For what I've played, I haven't run into issues, but it would do to check the Proton compatibility page for the games you like before swapping over if that is your biggest concern. If it's compatible, you shouldn't have problems.

    • @linuxramblingproductions8554
      @linuxramblingproductions8554 2 months ago

      @@natebox4550 Check ProtonDB; the vast majority of games work, but specific games with anti-cheat might not.

  • @slightlysaltysam7411
    @slightlysaltysam7411 2 months ago +2

    We are 5-10 years from A.I. replacing a significant amount of the workforce, because the workforce largely consists of rigid, non-creative routines perfectly suited for binary robot processing.

  • @robbycooper6787
    @robbycooper6787 3 months ago +2

    4:58 that’s correct because of the ‘ and the spaces

    • @SearedBooks
      @SearedBooks 3 months ago

      Ran the test with the Bing AI. It didn't count the space, but the numbers and symbols were counted as letters.
      Pretty funny to me.
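The ambiguity this thread is poking at (characters vs. letters vs. digits) is easy to make precise in code; the sentence below is just an example:

```python
sentence = "What's the 21st letter of this sentence?"

# Three different things a chatbot might count when asked for "the 21st letter":
all_chars = len(sentence)                         # spaces, digits, punctuation too
letters   = [c for c in sentence if c.isalpha()]  # alphabetic characters only

print(all_chars)     # 40
print(len(letters))  # 30
print(letters[20])   # 'i' -- the 21st letter is index 20, zero-indexed
```

Whether the apostrophe, the space, or the "2" and "1" count is exactly the criterion a model has to guess at when the question leaves it unstated.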

  • @randomchannel-px6ho
    @randomchannel-px6ho 3 months ago +4

    I mean, there are very real reasons to be excited about how machine learning can help scientific fields.
    But is it the multitrillion-dollar game-changer of all things in the economy? Uh....
    And knowing our government, the crash will lead to a regulatory backlash that will make things really hard for those researchers while not touching Microsoft.

  • @RFDN0
    @RFDN0 3 months ago +2

    The threat of AI anytime in the near future is not sentience. It would take significant incompetence by multiple separate groups not in the know of its existence for that to be a threat.
    The issue for our time is the economic impact of rapid automation of specifically low-level jobs. Many positions in the service economy are being optimized or fully replaced by automation due to market pressures and the general availability of the technology. Now, automation in the workforce has been going on for decades, with a notable period in time (the Industrial Revolution) being a shining example of how new jobs could be created to replace old ones. The problem is manufacturing will not have much growth, and many current AI projects are aimed at the service industry. General trades (plumber, electrician) can only expand so much.

  • @danameyy
    @danameyy 3 months ago +1

    Quizlet added some AI “transform your notes!” feature and having been an avid user of Quizlet for 10 years I decided to try it out. Complete garbage, added a bunch of random stuff to “my” notes that was in a different subject, and repeated a bunch of things. Literally no one asked for this feature, virtual flashcard making is fine the way it is!

  • @0og
    @0og 3 months ago +2

    11:20 Considering that binary is capable of every mathematical operation, it seems like a bit of a stretch to say binary computing can't do it. I certainly agree that other methods would be more efficient, though (the brain only uses a few watts, after all).

  • @Petrico94
    @Petrico94 3 months ago +5

    Not sure how far quantum computing can get us, but once that's done all bets are off. Encryption will need to be upgraded or abandoned so I think we could find some way to make true learning AI tools with that. Until then, it's a fun tool for small things if people just figure out what it does well, set smart restrictions rather than keep using buzzwords. It's already super inflated so whatever tech cutting edge AI could be I believe will fall short compared to what people are expecting.

    • @iminumst7827
      @iminumst7827 3 months ago +1

      Quantum computing is not going to get us anywhere. It can do one type of exponential calculation really fast, and is vastly inferior to digital computers in every other way. Quantum computers are not just faster computers, they are a different type of computing, and they will not replace digital computers ever. If you want to talk about hypothetical sci-fi tech that will just make computers faster, what you really want to pay attention for is the search for room-temperature superconductor metals.

    • @matheussanthiago9685
      @matheussanthiago9685 3 months ago

      Firstly we need to find a way to maintain quantum computers that doesn't cost the entire GDP of Texas AND can maintain the quantum state for more than a few minutes at a time.
      But I got an idea:
      why don't we try a startup that uses AI to do quantum?
      Now where's my billion dollars

    • @SearedBooks
      @SearedBooks 3 months ago

      @@matheussanthiago9685 Only if you can make it work on the blockchain.

  • @hughesholman6533
    @hughesholman6533 3 months ago +2

    off topic but holy shit I had that exact toy at 15:53 growing up.
    what a blast from the past

  • @robertbeisert3315
    @robertbeisert3315 3 months ago +1

    The biggest problem is that the techniques and training involved in making an LLM are useless for anything else, much like financial forecasting is for understanding human speech.
    We need AI to communicate, AI to select out the right subsystems to process the request, AI to coordinate the responses, and AI to do the processing just to imitate AGI.

  • @mathieuleader8601
    @mathieuleader8601 3 months ago +6

    AI will be seen as a fad

  • @45brav
    @45brav 3 months ago +1

    @KnowledgeHusk - Hey, you left in an editing mistake at 20:12 where for a few frames the screen flashes red; it hurt my eyes, just making you aware of it. Otherwise I loved the video.

  • @Teodor-ValentinMaxim
    @Teodor-ValentinMaxim a month ago +3

    Used it at my work for a while. I'm a software developer. Spits out shit, generates shit, talks shit. Shit is shit no matter the packaging.
    Some braindeads will try to say that it is better at helping you learn new stuff, but if you just have some patience and an attention span longer than a mosquito's, you'll learn a lot faster by reading books, articles, watching a video, etc.
    All the AI companies that say ChatGPT will replace human workers in complex activities such as medicine, programming, engineering, lawyering, etc. are just lying to get more investments as fast as they can, while they can. We're already one year into "6 months until AGI becomes a reality."

  • @LBoomsky
    @LBoomsky 3 months ago +3

    Strong AI in itself is overhyped; it practically has no use beyond its danger, and in the end it's still software on hardware, so there are no ethical qualms about shutting it down.
    Would you treat a reanimated human zombie with a brain and heart running on a Duracell battery as having the same value as a (still living) human being?
    Of course you wouldn't; the other guy died a long time ago and you're just doing unholy stuff with their brainwaves...
    Now ask that question about AI.
    Would you treat a piece of hardware with software running on 50 lithium-ion batteries as having the same value as a (still living) human being?
    Why would suddenly being able to contextualize information in a program differently, in a way that replicates human brains, make it any more alive????????

    • @linuxramblingproductions8554
      @linuxramblingproductions8554 2 months ago

      I mean, it would raise a lot of questions about what sentience really is and how we would separate AI from people when the AI would functionally think the same way.

    • @LBoomsky
      @LBoomsky 2 months ago

      ​@@linuxramblingproductions8554 ????????????????????
      no not rly
      For all intents and purposes it is an object.
      It is not and will never be an individual.
      I've written about this previously, so I'll just copy paste it here.
      Important things to note are that AI and humanity DO NOT think the same way.
      Furthermore, having something that could be observed with the same purpose in the system as a thought should not be assumed to feature sentient individuality; that is a massive logical leap with disgusting moral implications of devaluing living people.
      AI do not have thoughts in the way we do. They are not individuals because they are made of separate, nonliving parts not conjoined by a single, living organism experiencing itself.
      AI at the end of the day is a complex system, but cannot function as a being because it cannot live, and cannot be many parts as one as it lacks the defined aspects of existence we have as individuals.
      We were alive before we thought, thought does not make us us.
      By extension we know intelligence cannot make us us, so to say ai is conscious you would have to state that 1 all programs designed for artificial thought are conscious, or 2 that there is a point at which a certain level or type intelligence program does determine consciousness.
      1 is an absolute consciousness of all robotics standpoint that fundamentally cannot be proven unless you can first determine that consciousness comes from programming, literal 1's and 0's...
      2 AI mimics the thought processes of living beings but to support 2 you would have to explain how this different, definable physical process that is on an individualistic level entirely separate from the experience of a living mind would simply from a (remember, mimicking human but ultimately separate, nonliving and programmed (even if programmed by another robot or even itself)) methodology of intelligence would transform the thought process and make it gain the properties of being conscious, having that individual that experiences the world within itself...?
      1 and 2 are both flawed ideas. They make assumptions that ultimately are dangerous morally and seek to redefine ideas with no logical backing for itself and seeking to reclassify personhood and what it is to experience the world. Sheesh.
      Would a zombie program which turns (already dead) people's brains into systems of action (using their memories and other thought processes) be considered sentient?
      NO, because they (the dead person themself) would not be experiencing that; they would not be themselves in there.
      And that's what brings us value.
      The same applies for AI, but the AI was never alive in the first place and doesn't even have the same systems or biological fundamentals of our "thoughts", if we can even classify them as thoughts like we have.
      So off of that, it does not cause any moral quandaries and does not redefine sentience if AGI exists.
      AIs simply are not individuals; they are programs, and are not sentient and are not alive.

  • @GingerBissey
    @GingerBissey 3 months ago

    The letter thing can be confusing because it considers them all characters, generally. It hasn't been trained on this specific task very well, so characters and letters are more broadly associated in the neural net. Especially, if you tell it that it's wrong... It tries to appease you and thinks, "okay, they mean characters." So, it counts out the characters. If you tell it that it counted characters rather than letters then it tries to count the letters, but it also processes things like 21st as words, so it'll count the letters in 21st spelled out. If you tell it that it did that then it'll just count the letter characters. If it still counts the numbers then tell it that it's doing that and it will stop without you telling it to stop. I agree that it doesn't quite process the information correctly in some cases but it can obviously reason things out. If you ask it why, it'll say something like "I apologize for my previous errors. I miscounted and misunderstood the inclusion criteria. Let's solve this accurately by strictly following the rules of counting only alphabetic characters." By STRICTLY following the rules of counting ONLY alphabetic CHARACTERS. It does UNDERSTAND what you're asking. It just misinterprets the information and criteria in the question. Ask a child that hasn't quite learned to differentiate letters from numbers as different characters, what the 21st letter is, and they might give you the EXACT response as the AI. lol These are extreme fringe cases and this is the WORST AI is going to be. Think about that. It can only get better from here. Also, not all AI will provide you with the same answers, even if trained on the same data. Just like humans. Considering the growth AI has seen in the last 2 years alone, it has incredible potential. Ignoring that is just ridiculous. I'm not an alarmist or much of an optimist. I just think about the data. 
They're marketing AI in a ridiculous way, but that doesn't make the product or actual potential for the product, ridiculous.

  • @loathbringer
    @loathbringer 3 months ago +1

    Probably. It ain’t even baby ai yet. Just an algorithm at this point. And every time I see someone mention it, it’s in reference to scams/grifts

  • @e2rqey
    @e2rqey 3 months ago +4

    We don't need AGI for AI to be incredibly valuable. There is huge potential simply in the weak AI we can already create. It has enabled researchers to answer questions and solve things that were basically functionally impossible, or would have taken thousands of years using traditional methods.

  • @CritiqueAI
    @CritiqueAI 3 months ago +3

    Well, this video sure tries to make AI seem like the latest fad since avocado toast, doesn't it? Let’s dive into the juicy bits and see if we can wrangle some sense out of this.
    First up, the claim that AI is simply the latest buzzword used to inflate stock values and corporate egos. Sure, there's a bit of truth there. Companies do love to slap "AI" on their products like it’s a new, magical ingredient that will solve all your problems, from finding the best cat memes to launching rockets. Remember when everything was suddenly "cloud-based"? It’s the same old marketing trick. But dismissing AI as mere hype ignores the substantial advancements and practical applications we've seen. AI isn't just a sticker; it's the tech behind everything from medical diagnostics to autonomous vehicles.
    Next, the comparison of modern AI to Clippy-the annoying paperclip assistant from Microsoft Word days-is like comparing a Formula 1 car to a horse-drawn carriage. Yes, both have their quirks, but the sophistication and capabilities of today’s AI are light years ahead. Large language models like GPT-4 can draft essays, write code, and even debate philosophical concepts. Clippy could barely help you format a document.
    The video makes a fair point about AI's limitations and the occasional hilariously wrong answers. It's like asking your dog to do your taxes-cute, but not exactly reliable. AI still struggles with tasks requiring true understanding and context, which can lead to some bizarre mistakes. However, saying AI isn't useful because it makes errors is like saying humans aren't useful because we trip over our own feet sometimes. The real question is whether AI can assist us meaningfully, despite its flaws. Spoiler alert: It can and does.
    Then there's the claim that many so-called AI innovations are just rebranded old tech. While there might be some truth to that (looking at you, metaverse), many AI advancements are genuinely new and impactful. Automated translations, speech recognition, and predictive analytics have seen leaps in accuracy and usability thanks to AI. These aren't just incremental improvements; they’re game-changers in many fields.
    The paranoia about AI taking over the world or stealing jobs is another point the video dwells on. Yes, AI will change the job market-just as the internet, electricity, and the wheel did. But instead of fearing the robot apocalypse, we should focus on how AI can augment human capabilities. Think of it like a supercharged assistant, not your future overlord.
    As for the skepticism about AI reaching Artificial General Intelligence (AGI), that's a fair caution. AGI, the kind of AI that can do anything a human can, is still in the realm of science fiction. We’re making strides, but expecting AGI to pop out of your laptop any day now is a bit like expecting your toaster to start giving life advice.
    Finally, the comparison to the dot-com bubble is a fun one. Yes, there's a lot of hype, and yes, not all of it will pan out. But the internet didn't disappear after the bubble burst-it evolved and grew into an indispensable part of our lives. AI will likely follow a similar trajectory, with the hype dying down and the real, valuable innovations sticking around.
    So, while this video does a great job of poking fun at the current AI frenzy, it misses the mark on recognizing the genuine progress and potential of AI technology. It's not just smoke and mirrors-there’s real substance behind the buzz. Just remember to keep a healthy dose of skepticism and a sense of humor as we navigate this AI-powered future. #AI #NotJustHype #MoreThanClippy

  • @Ben.y763
    @Ben.y763 3 months ago

    There's a reason that people claim LLMs have "sparks of AGI". We used to predict whether a modality was one type of class, i.e. this image is a hot dog or not. Now you can provide any type of text or any class to predict whether an image or any modality matches that class or input. This is a generalized approach and opens new doors, even if it's not that sexy in its current version.

  • @polecat3
    @polecat3 2 months ago +1

    Pretty clear this video was more informed by your opinions than any research you did. You don't seem to know what AGI would be, the problem of alignment, or why experts think AGI could be soon. Is AI over-hyped? Yes, but we don't need to downplay it to say that

  • @Morbfan6189
    @Morbfan6189 3 months ago +4

    Yes

  • @snaiil.
    @snaiil. 3 months ago +4

    no AI is code silly!!

  • @3xfaster
    @3xfaster 3 months ago

    Nothing beats proofreading anything you write for an essay by printing it out and just taking a pen and marking it up. Screen blindness when writing is a real thing, where something may seem eloquent on screen but reads terribly on hardcopy; even reading it aloud to yourself helps so much in the editing process.
    I would not trust the AI with that.

  • @gabrielivarsson6737
    @gabrielivarsson6737 a month ago +1

    Any software developer that does more than super simple web devving knows that AI really isn’t capable of creating anything near a mid-sized project off of a few prompts. It fails miserably

  • @wearyguardsman
    @wearyguardsman 3 months ago +12

    People say they'll go to Linux. And then they'll try a Linux distro or two, and unfortunately the majority of them will run back to Windows because of how different Linux is. Just like when Windows 11 first came out.

    • @tanostrelok2323
      @tanostrelok2323 3 months ago +5

      I would blame program compatibility over other factors, considering you have distros that can be used with minimal knowledge of computers and without needing to touch the command line a single time. On the other hand, if W11 really needs such powerful computers to be installed, a lot of people will have no choice but to ditch it; your average Joe in a third-world country simply cannot afford those machines.

    • @kylegonewild
      @kylegonewild 3 months ago

      @@tanostrelok2323 And it apparently, according to Microsoft, got rid of some convenience features that had been around for a while now by simply writing it out of the operating system entirely. On Windows 11 without some external program you can't move the task bar around the perimeter of the screen anymore. If you look at Microsoft's forums of people asking about it the official response is "we didn't build Windows 11 to do that so you can't anymore. Eat our asses dumbdumbs."

  • @sebastian19745
    @sebastian19745 a month ago

    In the late '90s, I made a program in MSX BASIC that took 125 words and stored them in a 5x5x5 array. When I "talked" with it, based on my knowledge, it learned the most common paths for ordering those words. Later I ported that program to PC and Turbo Pascal, and improved it up to a 10x10x10 array of words. Was it AI? No, because it randomly picked a path for arranging the words from those it deemed most probable. Was it learning? Yes, because it updated the paths every time I "spoke" with it. Was it useful? No, because I was training it and I felt that sometimes I was talking to myself in slow motion (it took some time to process because I did not have a powerful computer; in the late '90s my PC was still a 486).
    My feeling - and I might be wrong - is that these GPTs are the same thing I did in the late '90s, just trained with more data, which gives the impression that you are talking with someone else. If so, they are kinda useful (the interaction with them can spark new ideas in my head) but they are definitely not intelligent. Just like a library is not intelligent either, despite the tons of knowledge stored there and the search engine to access that knowledge.
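What the comment describes is essentially a word-level Markov chain: record which word tends to follow which, then walk those paths with weighted random choices. A minimal modern sketch of the same idea (my reconstruction in Python, not the original MSX BASIC):

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Record, for each word, the words observed to follow it ("the paths")."""
    follows = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)  # duplicates act as frequency weights
    return follows

def generate(follows: dict, start: str, n: int, seed: int = 0) -> str:
    """Walk the chain: repeatedly pick a random recorded successor."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

model = train("the cat sat on the mat the cat ran")
print(generate(model, "the", 4))
```

Like the 125-word array program, this "learns" in the sense of updating counts, but it only ever recombines sequences it has seen, which is the comparison the comment is drawing.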

  • @GoriguiMonke
    @GoriguiMonke 3 months ago +1

    Nvidia is worth more than Apple right now. Of course it’s a bubble. Just wait until everyone realizes that LLMs are just more sophisticated versions of their file search feature on their computer and visual gen AI is just image recognition software in reverse.

  • @mRahman92
    @mRahman92 3 months ago +4

    Should have used 4Chan and tumblr for the training data.

  • @shimittyshim
    @shimittyshim 3 months ago +1

    Microsoft's AI plan:
    1. Push Intrusive AI into everything.
    2. ???
    3. Profit!

  • @DetectiveMekova
    @DetectiveMekova 3 months ago +1

    I have been screwing with AI for about a year now, for programming and for fun.
    So far, my biggest reason for believing that AI will never take over the world or jobs in a significant way (at least not yet...) is because AI lacks two major things: pattern recognition and context.
    AI chatbots are the perfect example. They'll take what you say over the course of the conversation and reference the chat history as well as the LLM in order to create a response. However, the longer the chat goes on, the longer it takes to reply, and the dumber the AI gets, because it doesn't understand to look for a specific paragraph when it references a word; instead it'll search the whole chat history for ANY references to the word.
    It also sucks at recognizing patterns: you can repeat a point over and over for emphasis, but it won't understand that the reason you are repeating that word is because it's important UNLESS you tell it the CONTEXT of the word's importance.
    AI is really dumb because it lacks basic instincts. It just takes what you say and searches through its data for what might be an appropriate response. This is another reason why AI-generated code needs to be reviewed, tested, and then edited, because again... it's only copying from previously existing data. It doesn't get the context, nor does it understand patterns unless it's told to.

  • @mao4434
    @mao4434 3 months ago +3

    Seeing the same people that shilled NFTs hype AI made me sure this was a bubble. You don't sell a product on "imagine what it will do".