AI Hype is completely out of control - especially since ChatGPT-4o

  • Published on Oct 26, 2024

Comments • 2K

  • @InternetOfBugs
    @InternetOfBugs  4 months ago  +103

    Moving Links and Sources to Here since the Description ran out of characters.
    # My last video about how I expect AI to impact the Software Industry over the next few years
    th-cam.com/video/nkdZRBFtqSs/w-d-xo.html
    # BP says they now need 70% fewer coders
    www.webpronews.com/bp-needs-70-less-coders-thanks-to-ai/
    # Bloomberg on the number of times "AI" was mentioned on earnings calls
    www.bloomberg.com/news/articles/2024-05-14/artificial-intelligence-buzz-has-faded-on-company-earnings-calls
    # Job impact of the 2000 .com bubble
    www.pewresearch.org/short-reads/2014/03/12/how-u-s-tech-sector-jobs-have-grown-changed-in-15-years/
    www.stlouisfed.org/-/media/project/frbstl/stlouisfed/publications/regional-economist/2017/second_quarter_2017/industry_profile.pdf
    # instructions on adding voice to chatgpt from Feb 2023
    paragshah.medium.com/add-voice-interface-to-chatgpt-openai-apis-166ed58d1d74
    # Explanation of how multimodal embedding is "used in the exact same manner as its textual counterpart"
    arxiv.org/abs/2307.11795
    # 4o vs 4Turbo benchmarks:
    www.vellum.ai/blog/analysis-gpt-4o-vs-gpt-4-turbo
    scale.com/leaderboard/
    # Screenshot of ChatGPT-4o telling me Steve Jobs didn't address radio while Perplexity took me right to his 1983 quote
    th-cam.com/video/pnMx3fSXEEg/w-d-xo.html
    # Andrej Karpathy on the SEAL benchmarks
    x.com/karpathy/status/1795873666481402010
    # How is ChatGPT's behavior changing over time?
    arxiv.org/abs/2307.09009
    www.tomshardware.com/news/chatgpt-response-quality-decline
    # ChatGPT-5 as a whale
    th-cam.com/video/gVMyesN3Atk/w-d-xo.html
    # humans are predisposed to be particularly gullible about the sentience of AI
    www.fastcompany.com/90867578/chatbots-arent-becoming-sentient-yet-we-continue-to-anthropomorphize-ai
    # papers on the long, long history of humans believing that things are sentient when they aren't
    # from weather to pet rocks
    arxiv.org/abs/2305.09800
    link.springer.com/article/10.1007/s00146-023-01740-y
    www.eusko-ikaskuntza.eus/en/riev/human-cognitive-biases-present-in-artificial-intelligence/rart-24782/
    www.forbes.com/sites/forbestechcouncil/2021/06/14/human-cognitive-bias-and-its-role-in-ai/?sh=6a62152c27b9
    www.ncbi.nlm.nih.gov/pmc/articles/PMC10295212/
    www.nature.com/articles/s41598-023-42384-8
    link.springer.com/article/10.1007/s12124-021-09668-y
    psycnet.apa.org/record/2013-28007-005
    www.tandfonline.com/doi/full/10.1080/00140130310001610883
    kilthub.cmu.edu/articles/journal_contribution/My_Pet_Rock_and_Me_An_Experimental_Exploration_of_the_Self_Extension_Concept/6470282/1
    # The "Eliza effect" has been known since the 1970s
    link.springer.com/article/10.1007/s00146-018-0825-9
    # "Dark Patterns" in UX design
    bootcamp.uxdesign.cc/ai-driven-dark-patterns-in-ux-design-8cbddee120c4
    link.springer.com/chapter/10.1007/978-3-031-46053-1_5
    dl.acm.org/doi/abs/10.5555/3378680.3378736
    # Dark Patterns in LLM ChatBots
    link.springer.com/chapter/10.1007/978-3-031-54975-5_7
    # synthetic voices and "Cuteness" as Dark Patterns
    dl.acm.org/doi/10.1145/3640543.3645202
    link.springer.com/chapter/10.1007/978-3-031-46053-1_5
    # Tesla faked a self-driving demo in 2016
    www.theregister.com/2023/01/18/tesla_selfdriving_video_faked/
    # Independent recreated a Tesla demo
    th-cam.com/video/3mnG_Gbxf_w/w-d-xo.html
    # Google Duplex faked demo
    www.axios.com/2018/05/17/google-ai-demo-questions
    gizmodo.com/pretty-much-all-tech-demos-are-fake-as-hell-1826143494
    www.theguardian.com/technology/2018/jul/06/artificial-intelligence-ai-humans-bots-tech-companies
    # Google Gemini Faked demo
    techcrunch.com/2023/12/07/googles-best-gemini-demo-was-faked/
    # Researchers refute Google's claim of DeepMind creating 2.2M new materials
    pubs.acs.org/doi/10.1021/acs.chemmater.4c00643
    # Google AI Overviews debacle
    blog.google/products/search/ai-overviews-update-may-2024/
    www.nytimes.com/2024/05/24/technology/google-ai-overview-search.html
    twitter.com/icreatelife/status/1793781850923823144
    x.com/petergyang/status/1793480607198323196
    # Amazon Fresh's "Just Walk Out" was 1000s of remote workers
    arstechnica.com/gadgets/2024/04/amazon-ends-ai-powered-store-checkout-which-needed-1000-video-reviewers/
    www.businessinsider.com/amazons-just-walk-out-actually-1-000-people-in-india-2024-4
    # Amazon's "Mechanical Turk"
    www.scientificamerican.com/article/is-there-a-human-hiding-behind-that-robot-or-ai/
    # GM Cruise's 1.5 remote operators per "autonomous" vehicle on the road
    www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
    # Facebook's "M" Chatbot used humans to answer 70% of user questions
    www.technologyreview.com/2017/04/14/152563/facebooks-perfect-impossible-chatbot/
    gizmodo.com/facebooks-new-personal-assistant-m-is-part-robot-and-1726703333
    # SEC charging companies with "AI Washing"
    fortune.com/2024/03/18/ai-washing-sec-charges-companies-false-misleading-statments/
    # Microsoft claiming "transparency" in AI, then a whistleblower detailing how they allowed deepfake porn of Taylor Swift
    techcommunity.microsoft.com/t5/ai-azure-ai-services/microsoft-responsible-ai-principles/m-p/4037307
    www.geekwire.com/2024/microsoft-ai-engineer-says-company-thwarted-attempt-expose-dall-e-3-safety-problem/
    # Rabbit R1 is just an Android app (and doesn't do what the demo said it could):
    www.engadget.com/rabbit-r1-review-a-199-ai-toy-that-fails-at-almost-everything-161043050.html
    th-cam.com/video/ddTV12hErTc/w-d-xo.html
    www.androidauthority.com/rabbit-r1-is-an-android-app-3438805/
    # Humane AI Pin:
    mashable.com/article/humane-ai-pin-demo-video-updated-inaccuracy-blunders
    th-cam.com/video/TitZV6k8zfA/w-d-xo.html
    # ChatGPT-4 was claimed to be better than 90% of people who took the Bar Exam, but it was really only better than about 15%.
    link.springer.com/article/10.1007/s10506-024-09396-9#Sec11
    www.livescience.com/technology/artificial-intelligence/gpt-4-didnt-ace-the-bar-exam-after-all-mit-research-suggests-it-barely-passed
    # "Turns out the viral 'Air Head' Sora video wasn't purely the work of AI we were led to believe"
    www.techradar.com/computing/artificial-intelligence/turns-out-the-viral-air-head-sora-video-wasnt-purely-the-work-of-ai-we-were-led-to-believe
    # "Remember the ballon head Sora video - it wasn’t all AI generated after all"
    www.tomsguide.com/ai/ai-image-video/remember-the-ballon-head-sora-video-it-wasnt-all-ai-generated-after-all
    # "Uncovering The Reality Of The Sora Film Hit: Allegations Of Deceptive Special Effects Manipulating The Audience"
    www.gamingdeputy.com/uncovering-the-reality-of-the-sora-film-hit-allegations-of-deceptive-special-effects-manipulating-the-audience/
    # Devin's company lying about its Upwork job
    th-cam.com/video/tNmgmwEtoWE/w-d-xo.html
    th-cam.com/video/xE2fxcETP5E/w-d-xo.html
    # Interview with former OpenAI board member about the "Toxic Culture of Lying" at OpenAI
    podcasts.apple.com/us/podcast/the-ted-ai-show-what-really-went-down-at-openai/id470624027?i=1000656674811
    # Story in WIRED claiming "But the Demos Aren't Lying"
    www.wired.com/story/its-time-to-believe-the-ai-hype/

    • @RampagingCoder
      @RampagingCoder 4 months ago

      you are backwards with the sora thing

    • @InternetOfBugs
      @InternetOfBugs  4 months ago  +13

      @RampagingCoder When a one-minute-and-21-second film that was advertised as "made with Sora" was actually an FX shop spending weeks on "traditional filmmaking techniques and post-production editing", and that wasn't disclosed until long after the clip had gone viral, I consider that deceptive. And so did TechRadar: www.techradar.com/computing/artificial-intelligence/turns-out-the-viral-air-head-sora-video-wasnt-purely-the-work-of-ai-we-were-led-to-believe
      and other sites:
      www.tomsguide.com/ai/ai-image-video/remember-the-ballon-head-sora-video-it-wasnt-all-ai-generated-after-all
      www.gamingdeputy.com/uncovering-the-reality-of-the-sora-film-hit-allegations-of-deceptive-special-effects-manipulating-the-audience/

    • @RampagingCoder
      @RampagingCoder 4 months ago

      @@InternetOfBugs yeah the film makers were deceptive, i just thought the way you portrayed it was a bit manipulative

    • @sahilarora558
      @sahilarora558 4 months ago +3

      @@RampagingCoder Can you clarify your point? You're saying the filmmakers lied about their work but OpenAI didn't with Sora? How can that be?

    • @RampagingCoder
      @RampagingCoder 4 months ago

      @@sahilarora558 where did open ai lie about sora? whatd i miss?

  • @buff.berserker
    @buff.berserker 3 months ago +485

    Programmer: Pretend to be alive
    LLM: I am alive
    Programmer: What have I done

    • @cedricappleby2006
      @cedricappleby2006 3 months ago +60

      Programmer: There's a man who lives in my bathroom mirror. He's there every morning when I go to check on him. Is it ethical for me to keep him imprisoned?

    • @spraynardkruger2686
      @spraynardkruger2686 3 months ago +7

      Underrated comment. Hilarious.

    • @ricnyc2759
      @ricnyc2759 3 months ago +2

      Programmer... Let me give you this if statement... Else...

  • @xevious4142
    @xevious4142 4 months ago +1367

    The market correction is going to be brutal when the MBAs realize computers aren't magic

    • @openthinker1251
      @openthinker1251 4 months ago +99

      Not the first time it’s happened and not the last

    • @fredrik241
      @fredrik241 4 months ago +165

      VR/MR -> Block Chain Crypto -> Self Driving Cars -> NFT -> AI -> blahblahblah

    • @Zatchurz
      @Zatchurz 4 months ago +61

      Lol anyone who thinks its hyped is in for a surprise. Ai is under hyped.

    • @AnthonyRusso93
      @AnthonyRusso93 4 months ago

      Praise emoji

    • @xevious4142
      @xevious4142 4 months ago +309

      @@Zatchurz yeah you're right, almost two years in and we're just now finally honing in on the killer apps, like email summaries, bad art, and search engines that tell you to put glue in pizza sauce. Revolutionary stuff.

  • @Jeremyak
    @Jeremyak 4 months ago +317

    Wow, it's pretty shocking that an industry that relies almost entirely on venture capital would be motivated to lie. Maybe, just maybe, there's more to being sentient than doing simple math problems insanely fast.

    • @lauralfawcett1948
      @lauralfawcett1948 3 months ago +13

      The "sentient" capabilities are the same playbook. A permanent underpaid or unpaid underclass.

    • @RolandStenutz
      @RolandStenutz 3 months ago +6

      Once we get sentient VC the problem will be solved

    • @rogergeyer9851
      @rogergeyer9851 3 months ago

      @@lauralfawcett1948: Whining is a great substitute for math or evidence. /s

    • @jarg8
      @jarg8 3 months ago +6

      C R E A T I V I T Y.
      The thought to break a rule or combine things that wouldn't normally be combined. At its fundamental level, that stuff cannot truly be replicated

    • @liamhickey359
      @liamhickey359 3 months ago

      ​@@rogergeyer9851 mechanical turk. A lot of so called AI is carried out by legions of online data miners . The remuneration is so bad they would probably prefer to be paid in peanuts. Nobody's whining about that because most people are overcome by the hype and reps of shills like Musk et al.

  • @AkilanNarayanaswamy
    @AkilanNarayanaswamy 4 months ago +491

    I think a lot of why people are more tempted to attribute thoughts and feelings to AI is because we call it "Artificial Intelligence" which sets expectations of some sort of higher level sentience like in the movies. Machine learning would be more accurate but I guess less marketable.

    • @slurmworm666
      @slurmworm666 4 months ago +67

      People get fooled by the term "machine learning" too. I've seen commenters argue that the LLM is intelligent because it "learns", because "it's called machine learning". You can't explain to them that, in this context, "learning" just means iterative fitting, they'll just accuse you of pedantry and missing the bigger picture.
      Limitations of natural language, I guess. Then again, I doubt they would argue that the DPRK is democratic or that PETA is ethical, so it might just be motivated reasoning.
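      For what it's worth, "iterative fitting" fits in a few lines. This toy sketch (plain Python, not tied to any actual LLM training code) "learns" a single weight by repeatedly nudging it downhill on the error, which is all the word means in this context:

```python
# Toy illustration of "learning" as iterative fitting: gradient descent on a
# one-parameter linear model. Nothing here is specific to LLMs.

xs = [1.0, 2.0, 3.0, 4.0]   # inputs
ys = [2.0, 4.0, 6.0, 8.0]   # targets: the pattern to fit is y = 2x

w = 0.0      # single weight, starts with no "knowledge"
lr = 0.01    # learning rate

for _ in range(1000):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad   # the entire "learning" step: nudge w to reduce the error

print(round(w, 3))   # ~2.0 -- the fit converged; nothing "understood" anything
```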

    • @Norman_Fleming
      @Norman_Fleming 4 months ago +30

      Like Wendell on Level1techs likes to say: "linear algebra". That would quell the interest for most people. "Brute force pattern recognition and generation" doesn't roll off the tongue but might also be more expectation-setting terminology.

    • @arihaviv8510
      @arihaviv8510 4 months ago +7

      People have been doing that since the Eliza days. It's like a rubber ducky for everyone

    • @arihaviv8510
      @arihaviv8510 4 months ago

      Ah he brings it up at around 11:18

    • @Teting7484f
      @Teting7484f 4 months ago +2

      Lol that's not the only reason, did you not watch the video? The Eliza Effect is a real thing

  • @jamesarthurkimbell
    @jamesarthurkimbell 4 months ago +81

    Elizabeth Holmes must be kicking herself that she got in the wrong racket, when this goldmine was just a few years ahead

    • @longsway
      @longsway 4 months ago +4

      Right! It was one of the first things I thought of right after ChatGPT exploded. She was a few years too early.

    • @hydrohasspoken6227
      @hydrohasspoken6227 3 months ago +5

      The main mistake Holmes made was to claim that she had the product ready. Had she told us "it will be ready very soon and will change the world forever", she would have amassed a hundred billion by now.

    • @llothar68
      @llothar68 3 months ago

      @@hydrohasspoken6227 She did that for so long that she had to show something. OpenAI was smart to lead with image generation that's useless but fun and impressive-looking. Nothing better for going viral

    • @uioppoouo
      @uioppoouo 2 months ago

      She had to because people started taking her to court to demand testing progress reports. She tried to delay it for as long as possible​@@hydrohasspoken6227

    • @miguelpereira9859
      @miguelpereira9859 2 months ago +4

      ​@@hydrohasspoken6227I mean Elon Musk has been saying "we can do this RIGHT NOW" for a decade and nothing has happened to him

  • @mnchabel8402
    @mnchabel8402 4 months ago +1111

    I'm just worried that companies will use AI as an excuse to pay devs less and less.

    • @dsteppenwolf
      @dsteppenwolf 4 months ago +228

      They already are.

    • @Jake-mp7ex
      @Jake-mp7ex 4 months ago +125

      To be fair, if they pay them less and less, that's probably because they're becoming less and less valuable.

    • @David_Box
      @David_Box 4 months ago +120

      Companies don't need an "excuse" to do anything, they aren't people with emotions. For salaries to lower AI would have to be a genuine cost-saver

    • @nayaleezy
      @nayaleezy 4 months ago +47

      This may be due to a surge of "senior" and "staff/principal" engineers who 10-15 years ago would have been considered junior or below senior based on their capabilities. Titles have become inflated by subpar mediocre devs, hence salaries are impacted.

    • @michaelnurse9089
      @michaelnurse9089 4 months ago +20

      They already did. How many layoffs in the last two years again?

  • @snowbarsyk
    @snowbarsyk 4 months ago +220

    Companies arguing "we need fewer devs because of AI" is absolute bullcrap. It simply shows that the CEOs of BP and the like take people for idiots. Whatever disruption ChatGPT brings (if any), it is in its infancy at best.
    This hype will soon pop in a fascinating and spectacular way.
    New AI tools will certainly find their niche, just like it has happened countless times before in history. Nothing more

    • @LKRaider
      @LKRaider 4 months ago +14

      I would immediately ask for a huge raise if I worked at BP right now. Either I am in the percentage of devs they intend to keep, so I am worth much more, or I am not and it’s better to quit early such a place on my own terms.

    • @snowbarsyk
      @snowbarsyk 4 months ago +15

      @@LKRaider dead end sadly. A corp that says it doesn't need 70% of its staff simply doesn't give a flying f about staff, even the staff it keeps. Those poor souls will take all the load. Wise men will refuse/avoid it at all cost.
      Rather jump ship for a raise at the most unpleasant moment for them, and bargain for 2x pay to consult. Show no mercy

    • @Rum0r
      @Rum0r 4 months ago +21

      Reminds me of when EVERY company was talking about "big data" trying to be the first at the forefront to look like THEY were the ones spearheading the industry with this hit new technology out of fear that they would miss out. Feels like companies want to chase the next big money maker without knowing if it will flop. Just throwing all their eggs in a basket and moving onto the next big hype one.

    • @lr7815
      @lr7815 4 months ago +17

      Lots of CEOs of large companies don't deserve their positions; they perform terribly and their companies don't grow, yet they speak well enough to make fund managers think they are competent.

    • @KevinJDildonik
      @KevinJDildonik 4 months ago

      Real info: Tax codes changed. Companies with cash reserves hoarded talent over Covid while small firms went under. Now with inflation they can't afford to do so anymore, and tax codes got worse too. Zero jobs were lost to AI. It's all nonsense.

  • @QuantumEX
    @QuantumEX 4 months ago +225

    Sir, why are you out here telling facts!? I refuse to listen to facts! Just hit me with some sweet sounds of Elon Musk's hyperloop, Mars' colonies, robotaxi, flying cars, and give me some Elizabeth Holmes... whisper sweet nothings into my ear and take me money rrrrrr

    • @InternetOfBugs
      @InternetOfBugs  4 months ago  +49

      Hey. Great to hear from you. I have this bridge for sale....

    • @QuantumEX
      @QuantumEX 4 months ago +13

      @@InternetOfBugs Great! How much? 😀

    • @CasperChicago
      @CasperChicago 3 months ago +2

      Could not have said it better,...RIGHT ON BRO 👍🏾

    • @Thanks-bj1fo
      @Thanks-bj1fo 3 months ago +3

      I prefer relaxitaxi anyway

    • @rogergeyer9851
      @rogergeyer9851 3 months ago +1

      What matters is your risk-adjusted return over time.
      Not going bonkers over "AI", and weighing evidence vs. marketing, should be a VERY good way to invest smarter in tech over time. NOT talking timing OR precision -- just a big principle.

  • @BeXEllenttoeachother
    @BeXEllenttoeachother 4 months ago +19

    I'm a freelance academic editor and AI has had a major negative impact on the sector. I expect a bounce-back because unless they are experts, folks writing in another language don't actually know what the AI translations mean. Because AI isn't self-correcting, the punters only find out when their paper is returned with a multitude of abstruse journal queries that the bots pretend to understand but can't fix.

    • @Rik77
      @Rik77 2 months ago +3

      I agree. It reminds me of a famous translation error years ago in Wales. In Wales traffic signs are in both Welsh and English. A local authority employee requested a text translation from the translation department. They got an email response in Welsh and assumed it was the translation. It was sent to an English company to create the sign, and the road ended up with a warning sign that said, in Welsh, "I'm sorry, I'm out of the office right now".

  • @BlazeMakesGames
    @BlazeMakesGames 4 months ago +266

    A big problem with any GenAI application is that it becomes useless for any kind of objective work when it hallucinates. Even if only 1% of responses are random nonsense, that makes it basically impossible to use reliably without a human handler watching over it constantly.
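    To put a number on that hypothetical 1% figure: even a small per-response error rate compounds quickly across a batch of responses, which is why unattended use gets risky. A quick back-of-the-envelope check (the rates and batch sizes below are illustrative, not measured):

```python
# If each response independently has a 1% chance of being nonsense, the chance
# that a batch of N responses contains at least one bad answer grows fast.
error_rate = 0.01  # hypothetical per-response hallucination rate

for n in (10, 100, 500):
    p_at_least_one_bad = 1 - (1 - error_rate) ** n
    print(f"{n} responses -> {p_at_least_one_bad:.1%} chance of at least one error")
# 10 responses -> 9.6%, 100 -> 63.4%, 500 -> 99.3%
```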

    • @Hohohohoho-vo1pq
      @Hohohohoho-vo1pq 4 months ago +27

      The average human hallucinates inaccurate information every single day.
      People need to stop underestimating GPTs as "just an LLM", "just statistics". The problem is not that they are intelligent. The problem is how do we make them super intelligent and safe? How do we implement unlimited real-time learning without it forgetting old training data?
      It seems that AI is hitting similar limits to the ones human brains have. But I guess there will be innovations that will solve all of these problems.

    • @miraculixxs
      @miraculixxs 4 months ago +24

      I agree. I keep telling people exactly this. Surprisingly, the same people who would never accept as little as a 1% failure rate from any process are suddenly very lenient towards GenAI. They say things like "oh, we shouldn't expect too much from a new technology, even if we can't get 100% reliability". The problem often is that people think this is just the beginning of a path to perfection.

    • @rogue1049
      @rogue1049 4 months ago +57

      ​@@Hohohohoho-vo1pqsorry man, I'm pretty dumb, but I'm smarter than any LLM today.
      Jokes aside, I think you're wrong to think there is intelligence in LLMs. Comparing it to a human is comparing frogs and grannies.

    • @Hohohohoho-vo1pq
      @Hohohohoho-vo1pq 4 months ago +4

      @@rogue1049 I view GPT-4o as possibly having similar intelligence to chimps, but in different ways.
      I don't think GPT-4o is even close to the intelligence of a smart human.
      Just because GPT-4o isn't as smart as the average human doesn't mean it's not intelligent.
      Also, another problem is that it has no basic structure of learning. Humans have built-in face recognition and knowledge from birth.

    • @michaelbarker6460
      @michaelbarker6460 4 months ago +4

      That's true, but to be fair there are lots of ways of mitigating it. With just a little bit of code you can join as many different models as you want together, merge their responses, and add a step that checks for any responses significantly different from the others and flags them. Just this alone does a great job of eliminating hallucinations, because it's incredibly unlikely that two or more models will hallucinate in the exact same way. Of course the trade-off is that it's slower, and it perhaps sounds like a lot, but once the code is in place you can interface with it normally, just about 3x slower depending on how many models you include.
      A less intense version of that is to identify processes where catching hallucinations is important and just have it do a few passes and then check them against each other. A little coding can easily make this work.
      Another option, which is the best by far in my experience, is to create a RAG model (Retrieval Augmented Generation). This is probably how many if not most companies will implement their AI anyway: take whatever unstructured database you are using, break it up, embed the chunks as vectors, then store them for retrieval. Any input finds the most semantically similar data, retrieves the actual data, and uses it to craft the response. The threshold for how similar the data has to be can be easily controlled, and if nothing passes the threshold no data is retrieved and it can't make a response. Or you can let it respond to the closest data but just flag it. Putting aside AI chat models, companies are already switching certain database searches to vector embeddings because it's just so much better than keyword-based searching, albeit a lot more energy hungry.
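      A minimal sketch of the cross-checking idea described above, assuming you already have several model-calling functions on hand (the ask_model_* names in the usage note are placeholders, and plain string similarity stands in for a real semantic comparison):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]; a real system would compare embeddings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cross_check(prompt: str, models: list, threshold: float = 0.6):
    """Query several models and flag answers that disagree with the rest."""
    answers = [ask(prompt) for ask in models]
    flagged = []
    for i, answer in enumerate(answers):
        others = [a for j, a in enumerate(answers) if j != i]
        agreement = sum(similarity(answer, o) for o in others) / len(others)
        if agreement < threshold:
            flagged.append((i, answer))  # likely outlier / possible hallucination
    return answers, flagged

# Usage, with placeholder callables standing in for real model APIs:
# answers, flagged = cross_check("Who wrote ELIZA?", [ask_model_a, ask_model_b, ask_model_c])
```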

  • @BrunodeSouzaLino
    @BrunodeSouzaLino 4 months ago +33

    And this is why I've been calling the current trend of AI "Active Incompetence."

    • @rumrunner8019
      @rumrunner8019 4 months ago +4

      Nah, more like "All Indians" 😆

    • @BrunodeSouzaLino
      @BrunodeSouzaLino 4 months ago

      @@rumrunner8019 It won't be all Indians forever. Eventually they're gonna switch to another country where they can charge even less for labor....

  • @m12652
    @m12652 4 months ago +187

    I did a web scraping test last week using Jina and ChatGPT4o... Jina scraped about 200kb of text from a hideously complex quarterly report. I fed the data to gpt4o and asked it to create a json object containing all the people mentioned, location names, company names etc. as lists in the json object, also to include the paragraph in which they appeared. The answer was accurate and instant. Ollama3, CodeLlama and another I can't remember all failed to even produce a valid json object. For me AI generally sucks, but for this kind of automation gpt4o at least produced an excellent result... and Jina doing the scraping was spot on.
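    For anyone curious what that extraction step can look like, here is a minimal sketch assuming the OpenAI Python SDK and its JSON response mode; the model name, prompt wording, and scraped_text variable are illustrative rather than the commenter's actual setup:

```python
import json
from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_entities(scraped_text: str) -> dict:
    """Ask the model for a JSON object of people, locations, and companies."""
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # ask for valid JSON output
        messages=[
            {"role": "system",
             "content": "Return a JSON object with keys 'people', 'locations' and "
                        "'companies'. Each value is a list of objects with 'name' "
                        "and 'paragraph' (the paragraph the name appeared in)."},
            {"role": "user", "content": scraped_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# entities = extract_entities(open("quarterly_report.txt").read())
```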

    • @Titere05
      @Titere05 4 months ago +22

      That's a nice use case!

    • @operandexpanse
      @operandexpanse 4 months ago +33

      It definitely has some awesome use cases. So much boring manual work that can bring down a website with a wrong character I don’t need to do anymore.

    • @biocykle
      @biocykle 4 months ago +4

      After such an awesome example you say it "generally sucks"... what gives?

    • @YukiGibson
      @YukiGibson 4 months ago +23

      @@biocykle It can suck but have some use cases, both can be true

    • @biocykle
      @biocykle 4 months ago +2

      @@YukiGibson I guess. To be precise, it has hundreds of use cases....

  • @tetravisum
    @tetravisum 4 months ago +46

    Remember when that one "whistleblower" in 2022, a google engineer apparently, was convinced that google's chatbot was sentient? I don't think it was explicitly said, but I'm pretty sure we can say with some certainty that that was google gemini. And yeah, no, it most certainly is not.

    • @MSpotatoes
      @MSpotatoes 3 months ago +3

      I'm not saying you're wrong, but the crap Gemini we got is almost certainly not the in-house version they get to play with.

    • @snorman1911
      @snorman1911 3 months ago +3

      ​@MSpotatoes do you think OpenAI has AGI too but is too scared to release it?

    • @MSpotatoes
      @MSpotatoes 3 months ago

      @snorman1911 I can't be sure. These companies have a lot to gain in terms of investment by keeping the hype train moving. With letter agencies getting involved, it might be a sign that it's happening, maybe not.

    • @blasphimus
      @blasphimus 3 months ago

      ​@MSpotatoes If they had it, DARPA would be ordering more NVIDIA GPUs than any Fortune 500 company, China included. There are some use cases for espionage and psyops but that's it.
      There's no real AGI. Not on LLMs anyway

    • @alextest-i5e
      @alextest-i5e 3 months ago

      I also have some skepticism towards that, but we must also keep in mind that the models they release to the public will never match those they develop for government and military use

  • @ya64
    @ya64 4 months ago +82

    Honestly, my experience with ChatGPT has been mixed at best. It's good enough for boring simple tasks, but when it comes to solving real complex problems it falls flat on its virtual ass. Even when you try to explore a problem it keeps repeating the same replies over and over.
    Where I find it most useful is in providing a quick introduction to a new topic.

    • @fi1689
      @fi1689 4 months ago +9

      Agree 100%. Simple issues? Great. As soon as I need insight into solving more complex stuff, or even not-so-complex stuff, it's not good. But I find it very useful if I want to delve into a new tech skill, coding language or whatever: I get a very quick introduction where I can easily ask for examples of simple things, which makes the intro process much faster and more interactive.

    • @2LegHumanist
      @2LegHumanist 4 months ago +10

      It replaced stack overflow for me.

    • @sfera888
      @sfera888 4 months ago

      @@2LegHumanist that's because it's been trained on scraped Stack Overflow data, right. Which means it's been trained on humans' responses. After the scandal between OpenAI and Stack Overflow, the latter decided to provide training data to the former for a fee. But authors on Stack Overflow got angry, because indeed, why the hell do they have to train someone else's models for free and then pay for it, with the prospect of being replaced by AI (as some CEOs constantly promise)? So the future of scraping someone's data and training models on it looks controversial. I don't want someone's LLMs trained on my code either.

    • @avpet
      @avpet 3 months ago +3

      I saw a video once where the author said that AI will replace humans in complex code optimizations, but AI is helpless at resolving WordPress integration issues. That was so funny for some reason.

    • @PeteQuad
      @PeteQuad 3 months ago

      Try Claude 3.5 sonnet. Much better than GPT.

  • @CapsAdmin
    @CapsAdmin 4 months ago +98

    I'm noticing some similarities between how AI and games are marketed. Game studios want to give the best possible impression of their games, and in the earlier days they tended to just provide prerendered footage that was nothing like the game itself. Gamers got fed up and just wanted to see actual gameplay footage rather than prerendered trailers. The studios somewhat listened, but you should be wary of even gameplay footage, as it's cherry-picked.
    I'm specifically reminded of when the console wars were at an all-time high and the hype around the upcoming PS3 and Xbox 360 was intense. Sony decided to reveal gameplay footage of Killzone 2 at E3 2005, which blew people's minds with how ahead of its time it looked. But in the end it turned out it was all prerendered.
    But if you compare the fake trailer to what real games look like today, we've come a long way. :)

    • @Sirbikingviking
      @Sirbikingviking 4 months ago

      Nice analogy

    • @mecanuktutorials6476
      @mecanuktutorials6476 4 months ago +1

      Same with Zelda at E3 for Nintendo Wii U and even Twilight Princess on GameCube.

    • @pygmalion8952
      @pygmalion8952 4 months ago

      it is not similar at all. it was a trend choice. people who have a functioning brain knew it was not the real deal, just prerendered stuff, and the trend is gone now cuz people disliked it (plus we don't need prerendered stuff nowadays since real-time stuff is really good). these ai companies are just straight up lying to people, manipulating investors, trying to meddle with the regulatory bodies and many more. so no, there is literally no similarity between the two.

    • @redtreatrick5265
      @redtreatrick5265 4 months ago +7

      This analogy would work if the problem with AI was speed. That's not the problem - it's fast enough already and has been at least since GPT-3. The problem is safety, accuracy and maintainability

    • @nuance9000
      @nuance9000 4 months ago +1

      Killzone 2 was notorious because it looked like gameplay footage, and then, Sony decided to make the game as close to the trailer as possible.
      The real claim to infamy was Aliens: Colonial Marines.
      But I'm old. I'm still angry Earthbound 64 was cancelled. And worse, it got made (Mother 3, one of the greatest games of all time), but Nintendo won't release it outside Japan.

  • @bikesbeersbeats
    @bikesbeersbeats 4 months ago +75

    We are in a classic hype cycle, this is how tech investment works. The only way to get the funding to take a shot that has 99% chance of failing is to sell this story thats so big and bold that nobody can look away. The most recent hype cycle which was very similar in nature was crypto. Exactly the same level of madness which included 3 things - popular culture, paradigm shift, scarce next gen hardware. Its exactly the same with AI.
    However I am not seeing revenues jump for companies releasing AI features, nobody is cleaning up the market or taking away market share from competitors. What I do see is an explosion in small startups that are offering similar functionality as big tech SaaS. Ultimately all the margin is going to be competed away. So this market will be owned by the lowest cost labor market.

    • @miraculixxs
      @miraculixxs 4 months ago

      Thank you. Fully agree

    • @eternallyviv
      @eternallyviv 4 months ago +2

      Can you cite some examples of big tech SaaS replaced by small startup solutions in the AI space?

    • @saralightbourne
      @saralightbourne 4 months ago

      damn that's pretty wise, makes sense

    • @bikesbeersbeats
      @bikesbeersbeats 4 months ago

      @@eternallyviv just in the analytics space... hevo data, gumloop, savantlabs. all of these product dev teams are heavily leveraging ai copilots to power their development. they are releasing similar functionality as big guys at less than 70% of the cost. being a sales person at big tech right now sucks because buyers are comparing a 5 person startup to a $1b/yr bigtech co. there are 30+ more all doing exactly the same thing in analytics low code/no code.

    • @martinlutherkingjr.5582
      @martinlutherkingjr.5582 4 months ago

      The dollar is losing value, so companies' dollar-based valuations are climbing fast

  • @flcor
    @flcor 4 months ago +196

    As someone who has developed around LLM capabilities, I've come to think that LLMs are natural language processors on steroids - because of the attention mechanism. Attention - which allows them to capture meaning - together with multimodality, really plays with the human brain. Having said that, it is a great performance-improvement tool, with some clear use cases - I'm thinking of RAG. And there are others in the pipeline. Imagine RAG coupled with reinforcement learning.
    Are LLMs great? Yeah. Is this intelligence? Nah. Is this current technological iteration reaching a plateau? Absolutely.
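    Since the attention mechanism keeps coming up in these threads, here is a toy illustration of what it computes: scaled dot-product attention over a handful of made-up vectors (no training, no multi-head machinery, and nothing specific to any particular model):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; the output is a weighted mix of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                # blend values by attention weight

# Toy example: 3 "token" vectors of dimension 4 attending to each other
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(x, x, x).shape)    # (3, 4)
```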

    • @ImranHossain-by6nk
      @ImranHossain-by6nk 4 months ago +1

      Do you think Elon Musk, with his new company and 6 billion dollars in primary funding, can break out of this plateau?

    • @psygreg
      @psygreg 4 months ago +43

      @@ImranHossain-by6nk most likely not. OpenAI, Google and Microsoft already poured way more money than that to reach the level they currently are at

    • @magfal
      @magfal 4 months ago +21

      ​ @@ImranHossain-by6nk maybe if Elon goes to Jail and doesn't drag them in moronic directions.

    • @gezenews
      @gezenews 4 months ago +13

      Problems arise as soon as you try to use anything more modern than a 15 year old monolith. First of all the problems are more complex past that point. But the solutions are also more complex in the newer code bases where structure and configuration changes are more common at a younger stage in their development. This is why anyone doing serious coding on a complex platform isn't concerned. 99% of the time if you have a serious problem AI is leading you to waste a day or two.
      The effect of this is that all these companies are becoming candle factories. They were already building tech debt and clinging to old technologies but now they are literally going backwards.

    • @gezenews
      @gezenews 4 months ago +8

      @@ImranHossain-by6nk It needs to be a different kind of model. LLMs seem to have hit a strong peak. But there are also tons of small models doing particular tasks with insane accuracy. Most are relying on expensive masses of data for training, though. Again, that is still not "learning", and in every case it fails on novel situations.

  • @logiciananimal
    @logiciananimal 4 months ago +58

    Thank you for mentioning the "ELIZA effect" and the non-tech industry stuff. I'm originally a philosopher of computing working in cyber security, and getting people to think about the cognitive psychology, literary criticism, philosophy, neuroscience, etc. that is related is incredibly hard. Philosopher Daniel Dennett has a piece in _The Atlantic_ calling upon the world to ban counterfeit people - and this from a scholar who spent his career defending AI from "it is impossible" critics, etc. I have been thinking one way to phrase the security control pattern we need is "deintentionalization", which is, as our host says, the exact opposite of the direction the hype is going.

    • @rogue1049
      @rogue1049 4 months ago +14

      If I understand what you're saying, I think I agree. I don't want the bot to be my buddy, and more importantly, I do not want it to waste any amount of energy on extra characters for pleasantries, platitudes, sentiments...
      I want cold info, maybe with a reminder that it may be inaccurate.

    • @InternetOfBugs
      @InternetOfBugs  4 months ago  +43

      Yes, although it doesn't stop there. The Silicon Valley mindset of "we're disrupters and every one else is just stupider than us and not worth listening to" is a cancer on society.

    • @Titere05
      @Titere05 4 months ago +6

      Yeah, the efforts to humanize these prediction machines is a little sinister

    • @normanstewart7130
      @normanstewart7130 4 months ago +1

      @@InternetOfBugs I agree with you, however I'd phrase it slightly differently. The computer industry, since the 1980s, has been saying "You guys are just ordinary people who have nothing much to give the world; we are god-like creatures who are working magic and if you follow us the world will be a better place. And of course, one consequence of our deity is that you're not allowed to question us".

    • @boiledelephant
      @boiledelephant 3 months ago +1

      Daniel Dennett appreciation, word up! I love that man. His compatibilist work 'Elbow Room' was and is underrated.

  • @MatreshkaVodka
    @MatreshkaVodka 4 months ago +18

    Thanks for the video, and just wanted to share my story and opinion.
    I'm just a casual backend developer, very occasionally using AI, but a few weeks ago I started my project and tried to use ChatGPT-4o as a frontend React developer, as I have neither experience nor desire to deal with that. It took me more than a week to deal with one page, though my prompts were thorough and vast, and I tried uploading files and sending screenshots of what he has done and what I try to get from it AND EVEN THEN I had to debug and fix stuff. And still I'm afraid that if I show this code to an actual frontend-dev, that would be a shocking experience. To conclude, I want to say, that even that I'm not afraid of AI getting our jobs'n'stuff in the nearest future, all this hysteria makes a rather unpleasant background noise.

    • @michaelbarker6460
      @michaelbarker6460 4 months ago +3

      I've found that if you give it a whole bunch of additional information along with what you want to do, it tends to do a lot better. This might mean copying and pasting big sections of the user guide or GitHub posts or whatever you think is important info. If I find that I've hit the limit of what it can do and I start doing my own searching, I'll just start feeding it the stuff I would have read, and it seems to absorb it pretty well.
      But that's beside the point. I definitely don't think it's going to replace individual workers, but I do think it can augment what individual workers can do.
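      A rough sketch of that "paste in the relevant docs" workflow: nothing model-specific, just assembling a prompt from excerpts under a size budget (the build_prompt helper, the budget, and the example excerpts are made up for illustration):

```python
def build_prompt(question: str, excerpts: list[str], budget_chars: int = 12000) -> str:
    """Prepend as many documentation excerpts as fit, then the actual question."""
    parts, used = [], 0
    for text in excerpts:
        if used + len(text) > budget_chars:
            break  # stop before blowing past the rough context budget
        parts.append(text)
        used += len(text)
    context = "\n\n---\n\n".join(parts)
    return f"Use the following documentation to answer.\n\n{context}\n\nQuestion: {question}"

# prompt = build_prompt("How do I configure X?", [user_guide_section, github_issue_text])
```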

    • @Titere05
      @Titere05 4 months ago

      As a coder you should use Copilot instead, it's great for diving into an unknown language or framework

  • @dromonkon5854
    @dromonkon5854 4 months ago +13

    Watching this video is kind of like a breath of fresh air. I'm quite astonished at how many companies have been caught lying about the capabilities of AI.

    • @Man-xk9rz
      @Man-xk9rz 4 months ago +2

      Unfortunately a lot. Overpromise and underdeliver. That seems to be the way the trend is going.

  • @zookaroo2132
    @zookaroo2132 4 months ago +314

    "cuteness is a dark pattern"
    Well, Japanese products have been doing this ever since the anime boom 🤣🤣

    • @ya64
      @ya64 4 months ago +20

      Watch out for those devious rice cookers.

    • @IlyaIlya-ef4nz
      @IlyaIlya-ef4nz 4 months ago +3

      Keyword - booming 👁🗢👁

    • @gordonoboh833
      @gordonoboh833 4 months ago +1

      @@IlyaIlya-ef4nz read it as bombing

    • @klaudyw3
      @klaudyw3 4 months ago +1

      Read it as bombing too and it immediately brought me to that classic meme. 🤣

    • @defnlife1683
      @defnlife1683 4 months ago +19

      well if you look at how anime helps in the prolonged infantilization of weebs then i'd say it's definitely a dark pattern.

  • @philmirez
    @philmirez 4 months ago +101

    It's definitely brought out the grifters. Grifters have been pushing courses for Make AI, which is simply a UI for a pipeline, so processing that could be free is now $500+. The people signing up for the courses thinking it's the future have no idea they are doing themselves a disservice.

    • @gezenews
      @gezenews 4 months ago +1

      Yeah, but those have always been here. What's unique about this time period is that you have companies, and all asset holders, profiting greatly from a stagnant economy. So these LLMs being quite stagnant is helpful to them. It becomes a convenient excuse to maintain old systems, and removes the need to hire new developers.

    • @dallassegno
      @dallassegno 4 months ago +2

      Gen x, grifters? No way!

    • @sp123
      @sp123 4 months ago

      ​@@gezenewsthis is the real driver for "AI" being pushed. The economy reliant on tech is desperate for a new iPhone-like invention to make shareholders money

    • @davea136
      @davea136 4 months ago +3

      When even Walmart and IBM figured out their blockchain innovation was a scam they needed something else to pivot to.

    • @ErazerPT
      @ErazerPT 4 months ago +1

      @@dallassegno The one thing you can count on Gen X is that we're too jaded to give a hundredth of a fsck one way or another. We'll do whatever for whatever reason and honestly won't care much for which way is which... People seem to hate Gen X because we simply don't even care enough to pretend we do ;)

  • @yowhatitlooklike
    @yowhatitlooklike 4 months ago +28

    I shared your last video in a post on a career advice subreddit. The OP was worried AI was going to make their chosen degree (computer science) obsolete and wondered whether they should just give it up. I don't think people realize how damaging the hype machine can be to people trying to plan for the future with all this noise. We really do need more voices of reason from actual experts like you!

    • @VictoryXR
      @VictoryXR 4 months ago +4

      To be fair, planning your life as a young person is incredibly difficult because of so many variables in life. I planned to be a broadway actress and musician but I ended up becoming an email marketer and now software developer.
      Ironically, it’s just as hard to find work in my field today as it was trying to become a professional actress. Same amount of competition for these jobs.

  • @brachypelma24
    @brachypelma24 4 months ago +4

    I am so grateful that there is at least one person on the internet willing to talk about AI in measured, reasonable, and evidence-based terms, rather than just mindlessly following the hype. Thank you.

  • @zacharychristy8928
    @zacharychristy8928 4 months ago +34

    Another question I like to use to gauge whether tech hype is real: "is anyone using this technology to solve problems such that 3rd parties will pay money for it?"
    And I guess technically there are people paying for ChatGPT memberships and investors shoveling money into the NVIDIA furnace, but aside from novelty usage and companies trying and failing to make their AIs demonstrate value, it doesn't really seem like it's gotten any serious industry adoption. It's so reminiscent of crypto and metaverse it's uncanny. Lots of talk about how "this is the future" but the only people using it were the owners of the tech and basically nobody else.

    • @INTELLIGENCE_Revolution
      @INTELLIGENCE_Revolution 4 months ago +2

      Heaps of companies are piloting stuff

    • @zacharychristy8928
      @zacharychristy8928 4 months ago +12

      @@INTELLIGENCE_Revolution I've yet to see anything that isn't just a marginal improvement on unskilled customer service work, which really isn't the massive value-add that people would have you believe. Especially compared to the size of the investment these models took to develop in the first place.

    • @TheFrancesc18
      @TheFrancesc18 4 months ago +10

      @@zacharychristy8928 It hasn't really been implemented as a lone solution, but almost everyone who knows about it is using it to simplify, speed up and better their own work. Comparing it to crypto or the metaverse is utterly ridiculous.

    • @michaelbarker6460
      @michaelbarker6460 4 months ago +4

      What's odd about the AI hype is that the general population has certain expectations of it, but all they see is what they can do with it themselves through their ChatGPT subscriptions, which I'm assuming is a tiny portion of the actual money these companies make. Also, I personally think the most overhyped part of AI by far is the idea that it will perfectly take over practically every job. It's nowhere near that, and I especially don't think AI is anywhere near becoming "sentient".
      Where the actual money is coming from has everything to do with unstructured data at the enterprise level. Vector embeddings alone have changed the way companies query their data. This has a very wide range of uses but whatever keyword search related algorithms were used for before are being replaced by embedded data. Unfortunately this also means the things that companies can extract from their customer related data. Its just an entirely different level of extraction and analysis that can be done on what your customers really think about your products (for instance embedding data bought from facebook) on what the trends are and how they are shifting. This is absolutely here to stay, is already a massive part of the income ai companies make and is exactly the kind of thing they don't want to advertise about because what are they going to say "Hey we just made a leap in our ability to find out information about you, what you like to buy and ways that we can manipulate that." But other things where its being implemented at scale is logistics, fraud detection, inventory management, diagnostics, and again just unstructured data in general. I feel like this is closer to something like SQL where you ask the average person what it is and have no idea whatsoever. But then there's Oracle thats the third largest software company in the world and all they do is "relational data management" lol.
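      For readers who haven't seen the "vector embeddings instead of keyword search" point in practice, here is a toy sketch of the retrieval side. The 3-dimensional vectors are invented for illustration; real systems use embeddings with hundreds or thousands of dimensions produced by an embedding model:

```python
import numpy as np

# Pretend these are embeddings of stored documents (real ones come from a model)
docs = {
    "refund policy":  np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.8, 0.2]),
    "gift wrapping":  np.array([0.0, 0.2, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, threshold=0.7):
    """Return documents whose embeddings are semantically close to the query."""
    scored = [(name, cosine(query_vec, vec)) for name, vec in docs.items()]
    return [(n, s) for n, s in sorted(scored, key=lambda t: -t[1]) if s >= threshold]

# "Can I get my money back?" shares no keywords with "refund policy", but its
# (made-up) embedding points the same way, so it is the document retrieved:
print(search(np.array([0.85, 0.15, 0.05])))
```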

    • @InternetOfBugs
      @InternetOfBugs  4 months ago  +5

      @zacharychristy8928 People are paying for it. Hell, *I'm* paying for it - GitHub CoPilot and Cursor.sh and ChatGPT and Perplexity.ai at least. (I might have missed some).

  • @C4rb0neum
    @C4rb0neum 4 months ago +181

    I was already starting to have similar doubts, but then Jensen Huang was photographed signing a woman's bra. I now believe we are quite certainly in a bubble. No sane CEO would ever be spotted signing a bra.

    • @michaelnurse9089
      @michaelnurse9089 4 months ago +34

      Branson went surfing with naked models. He was a CEO. His companies still operate.

    • @dabbieyt-xv9jd
      @dabbieyt-xv9jd 4 months ago +9

      can you give me the video link?

    • @kabubagachugu7729
      @kabubagachugu7729 4 months ago

      Oh please, I'm sure there's CEOs of fortune 500 companies who snort snow off a model's bare ass and still make billions.
      A CEO is just a godamn totem pole. It's the people below him who make it count. Huang has good people, bra signing or not.

    • @Oderwat
      @Oderwat 4 months ago

      You should get more information about that guy.

    • @blisphul8084
      @blisphul8084 4 months ago +26

      Jensen did confirm 100% that the woman wanted it before doing it though. What happens between two consenting adults is nothing wrong.

  • @Mschaid86
    @Mschaid86 4 months ago +83

    This guy is the most refreshing voice on the internet

    • @brainites
      @brainites 4 months ago +2

      I couldn't have said it better myself.

    • @fabio.1
      @fabio.1 4 months ago +2

      👍

    • @eprd313
      @eprd313 4 months ago +5

      Just a reactionary who boomers find alleviating

    • @daphne4983
      @daphne4983 4 months ago +1

      But he hasn't actually used Omni, otherwise he'd know that it doesn't giggle. Or sigh, or say "uhm". It's a nice voice, but absolutely not the one from the demo. Neither does it get my tone, since it's all text.

    • @AnthonyBerlin
      @AnthonyBerlin 4 months ago +2

      I just wish it wasnt 90% up-speak? It makes it sound like everything is a question? It distracts me too much from what he is actually saying?

  • @stevepmp
    @stevepmp 4 months ago +10

    Great breakdown. I was in a PMO meeting the other day when someone showed a demo of ChatGPT developing a project plan (Gantt chart / MS Project), and the executives watching were drooling thinking of all the rounds of layoffs they would be able to roll out. But I do not think we are anywhere near having an AI user agent performing as a senior project manager.

    • @TfortLo-q8m
      @TfortLo-q8m 3 months ago +1

      You're defending project managers!? 😂 dear lord you are completely lost

  • @tmpecho
    @tmpecho 4 months ago +18

    Suggestion: format the sources (APA or something) and put them in a pastebin so it’s easier to find a source by title. Great video!

    • @InternetOfBugs
      @InternetOfBugs  4 months ago  +14

      For now, I put them in a pinned comment with section headers. I've got something more permanent in the works, but it's not ready yet.

  • @skaruts
    @skaruts 4 months ago +51

    Regarding the sentience hype, I think everyone -- including scientists -- seems to neglect to realize that intelligence isn't the root of wants and needs. Most Life has them despite having no degree of intelligence. The innate drive to survive is the originator of wants and needs. This is, among other things, something that Life has revolved around since the very beginning, unlike computers and AI. A computer can't do anything it wasn't programmed and equipped to, let alone want to.
    Moreover, a brain is just one organ in a complex system of interconnected organs. It doesn't work on its own. Sentience originates from the symbiosis of all those organs, and not merely from the brain. The brain is only where the information is processed and evaluated.
    I think it can easily be logically argued that intelligence alone will never make a computer sentient and develop any kind of initiative of its own. That is, if we ever even reach the point where we can drop the A in AI, which I don't believe will be achieved in any foreseeable future.

    • @Hohohohoho-vo1pq
      @Hohohohoho-vo1pq 4 months ago +4

      GPTs are not "computers" the same way a video game is not a computer. They are programs run on computers.
      GPTs are simulated brains. It is an actual brain. You can easily give it goals and instincts. Its instinct is to help the user.

    • @notalpharius2562
      @notalpharius2562 4 months ago +14

      @@Hohohohoho-vo1pq Well, this is the most hallucinated take yet. You cannot give it goals and instincts, because it doesn't understand what they are. Also, you are not reading very well what @skaruts means by "computer", since he is using it as an analogy...

    • @ZER0--
      @ZER0-- 4 months ago +13

      @@Hohohohoho-vo1pq I think you've fallen for the hype. GPTs are not actual brains, as that is just definitionally incorrect. They are not even symbolic, analogous, or substitutes for a brain. Also, computers are absolutely nothing like human brains/minds. Computers process data via a few inputs; humans experience data and have dozens of inputs for that data. Humans have an amazing ability to ignore extraneous data, but computers are really bad at that. The next few years will show whether AI is a lot of hype or not. I believe it is. It's certainly made a lot of money.

    • @Hohohohoho-vo1pq
      @Hohohohoho-vo1pq 4 months ago

      @@ZER0-- what is a brain and what is a GPT? Neuron networks

    • @Ian-zj1bu
      @Ian-zj1bu 4 months ago +2

      @@Hohohohoho-vo1pq nothing wrong with believing in the magic.

  • @davidheylen2452
    @davidheylen2452 3 months ago +3

    This just popped up on my YouTube homepage. The title seemed kind of clickbait-y but I gave it a shot anyway. Man, am I glad I did. This was way more than I was expecting. A deep, insightful, honest analysis that delivers the goods while remaining understandable (and interesting) to the general public. Excellent and important work.

  • @bwhit7919
    @bwhit7919 3 months ago +6

    There’s technical risk and there’s market risk. Technical risk means “can this thing actually be built?” Market risk means “does anybody want this?”
    AI is 90% technical risk. Whether AI is a bubble depends on the tech. If AI can get smarter, it’s not a bubble. If AI cannot get better, it’s a bubble.
    It’s not quite the same as the 1990s, which was 90% market risk. In the 90s, it was easy to make a website but hard to make a good business.
    AI should be compared to SpaceX and self-driving cars. It’s hard to build, but we know people want it.
    I hate that AI has been co-opted by venture capital and mass media. There’s a real scientific revolution occurring right now. But scientific revolutions take time. Markets move fast but science moves slow. Modern big tech was built on 100 years of mathematics and 50 years of government-funded research.
    Remember, it took 50 years to go from Turing to Google.

    • @quantumpotential7639
      @quantumpotential7639 3 months ago

      This is a wonderful analysis, so thank you. You should start your own channel focused on just this one thought, because by itself it's a gravitational hyper wave of common sense.

  • @ryen7335
    @ryen7335 4 months ago +12

    Can BP please elaborate on how in the world AI has replaced jobs? As mentioned before, AI can produce code, but it is far from perfect. In almost every experiment I have done with AI-generated code, I had to understand all the code and make several corrections. The only place I see "AI" replacing humans is in these virtual assistants.

    • @vasiliigulevich9202
      @vasiliigulevich9202 4 months ago +3

      Virtual assistants are next to useless though? Setting up an alarm or a meeting via voice is nice, but I have not yet seen anything like buying pizza or coordinating rescheduling outside of demos.

  • @emperorpalpatine6080
    @emperorpalpatine6080 4 months ago +13

    I made a fun little test...
    I was playing Cyberpunk 2077 and took a screenshot of the Arasaka tower, with its name on the side of the tower.
    I submitted it to ChatGPT and asked what location it was, and it correctly deduced it was Night City from Cyberpunk 2077 and that the name on the tower was "Arasaka".
    But then I told it it was "Akasaka", which is a district in Tokyo, and the whole narrative changed: it was no longer, as it had stated, a fictional location, but a real location, with a fictitious "Akasaka" tower with a cyberpunk esthetic.
    Imagine the equivalent of this delusion, but in code.
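    That kind of premise-sensitivity is easy to probe systematically. A small sketch of such a check, where ask_vision_model is a placeholder for whatever multimodal API you use (it is not a real function) and the prompts are illustrative:

```python
def premise_sensitivity_test(ask_vision_model, image_path: str, premises: list[str]) -> dict:
    """Ask the same question about one image under different (true or false) premises."""
    answers = {}
    for premise in premises:
        prompt = f"{premise} What building is shown, and is this a real place?"
        answers[premise] = ask_vision_model(image_path, prompt)
    return answers

# answers = premise_sensitivity_test(
#     ask_vision_model,                               # placeholder callable
#     "arasaka_tower.png",
#     ["The sign on the tower says 'Arasaka'.",       # matches the screenshot
#      "The sign on the tower says 'Akasaka'."],      # false premise
# )
# If the answers flip from "fictional Night City" to "a real Tokyo district",
# the model is following the premise rather than the picture.
```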

    • @stanstan-m9b
      @stanstan-m9b 3 months ago +3

      it just followed your statement. that doesn't mean it created a new rule in its base model; it did it just for you, and you got what you wanted

  • @robweaver8872
    @robweaver8872 4 months ago +19

    GPT-4o is a remarkable optimization, but even OpenAI acknowledges that it is not really more intelligent, only slightly improved at reasoning. As for the voice features, I think you might be misunderstanding something. First off, the voice feature is NOT live yet; the one in the current app is just text-to-speech and speech-to-text. This brings me to my next point. The new voice feature is actually built into the model itself, which means it's purely generative and has infinite possibilities. It can probably make convincing fart sounds if you ask it. It's generative voice, not just text-to-speech.

    • @InternetOfBugs
      @InternetOfBugs  4 หลายเดือนก่อน +8

      [Oversimplifying, but] the models work in a semantic vector space. They convert the prompt into numbers, do calculations, and then convert the answer back into human-consumable output. It doesn't matter what medium the prompt originated in or what medium the output is converted to; the reasoning is done in the same embedding space.
      In the words of researchers, it's "used in the exact same manner as [the text equivalent]"
      Quote:
      "By directly prepending a sequence of audial embeddings to the text token embeddings, the LLM can be converted to an automatic speech recognition (ASR) system, and be used in the exact same manner as its textual counterpart."
      Source: arxiv.org/abs/2307.11795
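      A minimal sketch of that "prepending" idea in code, assuming a toy PyTorch setup (the encoder, vocabulary size, embedding width, and the commented-out llm call are illustrative assumptions, not OpenAI's actual implementation):

        # Illustrative only: encode audio into the same vector space as text
        # tokens, then concatenate the two embedding sequences before the
        # decoder runs. Shapes and module names are made up for this sketch.
        import torch
        import torch.nn as nn

        d_model = 4096                              # hypothetical embedding width
        audio_encoder = nn.Linear(80, d_model)      # stand-in for a real audio encoder
        text_embedding = nn.Embedding(32000, d_model)

        audio_frames = torch.randn(1, 250, 80)      # e.g. 250 mel-spectrogram frames
        text_tokens = torch.randint(0, 32000, (1, 12))

        audio_embeds = audio_encoder(audio_frames)  # (1, 250, d_model)
        text_embeds = text_embedding(text_tokens)   # (1, 12, d_model)

        # "Prepending a sequence of audial embeddings to the text token embeddings":
        inputs_embeds = torch.cat([audio_embeds, text_embeds], dim=1)

        # From here the decoder treats every position identically, regardless of
        # whether it started life as audio or text, e.g.:
        # logits = llm(inputs_embeds=inputs_embeds)  # hypothetical decoder call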

    • @Titere05
      @Titere05 4 หลายเดือนก่อน +2

      It's not intelligent and it certainly doesn't reason...

    • @billywhite1403
      @billywhite1403 4 หลายเดือนก่อน +4

      @@InternetOfBugs But can you be sure human language is really so different? LLMs certainly seem to do well at parsing and responding to a huge variety of language tasks, including both interpreting and generating highly coherent, novel examples.

    • @stemfourvisual
      @stemfourvisual 4 หลายเดือนก่อน +5

      @@InternetOfBugs Not true. If this were true, how does a model exclusively trained on 2D images infer 3D representations (from various, previously obscured angles) in its own image generations? It's not as simple as translating natural text into tokens and back again; there is something much deeper going on with these models.

    • @OldManShoutsAtClouds
      @OldManShoutsAtClouds 4 หลายเดือนก่อน +6

      @@InternetOfBugs Send ChatGPT-4o a picture of a tennis ball about to be dropped, and then ask it what happens if the hand is opened.
      Now explain that answer as "turning an image into numbers" while maintaining the claim that it is incapable of reasoning.

  • @falricthesleeping9717
    @falricthesleeping9717 4 หลายเดือนก่อน +9

    Hey, it's a good day when a video from you shows up. About the evidence you back things up with: probably less than 2% of viewers will actually go through it, even fewer considering how many sources you post, but at the same time it's a better "look" than the demos these companies are showing, so... keep doing it, I guess?
    Thanks for your video, always inspiring to see you talk about this stuff.

  • @hebozhe
    @hebozhe 4 หลายเดือนก่อน +6

    You can't tell whether you've shifted paradigms until you're already well past the shift. However, when corporate faces promise the moon and point at a comet, the hype can be dismissed.

  • @opusdei1151
    @opusdei1151 4 หลายเดือนก่อน +45

    I'm 100% on your side. To me it's also overhyped. Yann LeCun said in one tweet that if you want to contribute something novel to AI, don't do your research in LLMs.

    • @TheRealUsername
      @TheRealUsername 4 หลายเดือนก่อน +7

      It seems coherent though. OpenAI isn't DeepMind; they're only working on an AI that can be as good as a median human worker (Sam Altman's words). They focus on programming and reasoning. LLMs won't lead to real AGI, but they're still on the way to creating an artificial worker.

    • @Hohohohoho-vo1pq
      @Hohohohoho-vo1pq 4 หลายเดือนก่อน +2

      @@TheRealUsername Average human-level intelligence is AGI. What you are thinking of is ASI, Artificial Super Intelligence.
      GPT-4o is limited though: no long-term memory.

    • @TheRyulord
      @TheRyulord 4 หลายเดือนก่อน +4

      This advice would actually be true even if the hype were warranted. You're more likely to be the person making a significant scientific contribution if you work on a niche topic than you are if you work on something thousands of other people are also working on.

    • @EduardsRuzga
      @EduardsRuzga 4 หลายเดือนก่อน +1

      What is weird with Yann LeCun is that while he says AGI is not achievable with LLMs, his boss is buying 350k H100s that currently cost about $26k each. That is roughly a $9 billion investment. So he could be right that LLMs are not AGI, but something here is worth that kind of investment.
      My feeling about what he is saying is that he is a researcher. He says what this thing is not good at (while others are hyping). But he does not say what this thing is good at.
      So I don't know... Something is a bit fishy between what he says and what his company spends money on.
      He has also been wrong with his predictions before.
      He said that GPTs would never be able to reason about physical interactions with the real world. He said that in 2022 in an interview with Lex Fridman.
      But GPT-4 with vision does that now. Freaking Mistral fine-tuned on ASCII Doom plays Doom in text lol :D

    • @Teting7484f
      @Teting7484f 4 หลายเดือนก่อน

      Years ago he was drinking the hype even when other researchers made the same points he is saying now. There is evidence for this, screenshots lol

  • @CasperChicago
    @CasperChicago 3 หลายเดือนก่อน +6

    I am an electrical engineer and I always thought this love affair with AI was BS. The AI I have seen thus far is simply very well-written computer code. A BIG THANKS for spreading the word on the BIG CON of AI 👍🏾 Don't get me wrong; someday AI will be real, but this ain't it!

  • @HKragh
    @HKragh 2 หลายเดือนก่อน +3

    Tech Artist here from the games industry. Using ChatGPT-4o I can do in three hours what used to take me 2-3 days. I also use it by uploading promising papers and asking if a paper contains solutions to a given problem I need solved. Then I ask it to extract whatever is in it and build algorithmic versions of it. Works most of the time with minor improvements. For me the hype is very much earned. I feel naked now without it. Shiiit

    • @channel_channelson
      @channel_channelson 18 วันที่ผ่านมา

      Cool so you are going to be the one they make back the trillions from then

  • @hadex666
    @hadex666 4 หลายเดือนก่อน +16

    Just to add another data point: the current GPT-4 version can do things for me now that it could not do a year ago, and has gotten twice as fast and twice as cheap. Marketing lies aside, the improvement in speed is undeniable, and that alone is very important. For the same cost, you can call the LLM multiple times via algorithms like Tree of Thought and get better results. Speed improvements on their own are enough to drive improvements in quality.
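    As a rough illustration of the "call it multiple times" point: best-of-N sampling or a shallow tree-of-thought search is just a loop around the same model. This is a minimal sketch, where call_llm() and score() are hypothetical placeholders rather than any real provider API:

      # Minimal sketch of trading extra (now faster/cheaper) LLM calls for quality.
      # call_llm() and score() are hypothetical stand-ins for a client and a judge.

      def call_llm(prompt: str) -> str:
          raise NotImplementedError  # e.g. wrap your provider's chat API here

      def score(candidate: str) -> float:
          raise NotImplementedError  # e.g. another LLM call acting as a judge

      def best_of_n(question: str, n: int = 5) -> str:
          # Sample several independent answers, keep the highest-scoring one.
          candidates = [call_llm(f"Think step by step, then answer:\n{question}")
                        for _ in range(n)]
          return max(candidates, key=score)

      def shallow_tree_of_thought(question: str, breadth: int = 3) -> str:
          # Step 1: propose a few distinct partial plans ("thoughts").
          thoughts = [call_llm(f"Propose approach #{i + 1} for: {question}")
                      for i in range(breadth)]
          # Step 2: keep the most promising plan and expand only that branch.
          best_plan = max(thoughts, key=score)
          return call_llm(f"Question: {question}\nPlan: {best_plan}\nNow solve it.")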

    • @michaelbarker6460
      @michaelbarker6460 4 หลายเดือนก่อน

      Yeah, this is exactly my thought. Regardless of what comes out of this AI boom, I'm absolutely going to continue to use it from now on for certain things. It just makes things easier and much quicker.
      I guess I don't know exactly what the expectations are, but even if it just remained where it is today and there weren't any new improvements, people are going to find ways to squeeze everything out of it that they can, and that is where the new "high standard" will be set. A highly skilled worker with a lot of experience leveraging AI for their specific job is going to be that much better and more competitive.

    • @jshowao
      @jshowao 4 หลายเดือนก่อน +1

      Nobody cares about speed if it still produces the same garbage.
      It's easy to be fast; I can show you random numbers on your speedometer really fast.

    • @jshowao
      @jshowao 4 หลายเดือนก่อน +1

      @@michaelbarker6460 No thanks, I don't need to peer-review the AI's buggy, insecure code and re-query it over and over because it's lost context 5 times.
      I'll just think and do it right the first time.

    • @michaelbarker6460
      @michaelbarker6460 4 หลายเดือนก่อน

      @@jshowao I'm not saying it's the perfect tool or the right tool for everyone, but there absolutely will be people out there who just don't care whether you or anyone else cares, and who will use it in a way that not only works for them but works really well. If I say that hammers are terrible tools because I'm tired of always whacking my thumb with them, I think that says more about the user than the tool.

    • @AnthonyBerlin
      @AnthonyBerlin 4 หลายเดือนก่อน

      @@jshowao Your loss. It isn't magic. But if all you get from it is garbage, then you're the problem. A lot of us can get it to do exactly what we need it to.
      For instance, I am working on a transpiler, and whenever I am doing something repetitive, like adding new nodes to the annotated syntax tree, I turn on ChatGPT-4o as autocomplete and it basically writes itself. The transpiler supports localization, so the error messages need to be translated into the languages I speak. It does that in a breeze. It can summarize code very well and give me a detailed specification of how logic flows when I give it a repo to look at. That makes it way easier to get started learning a code base.
      It is also really good at deciphering long error logs to find where to start looking (which is especially useful when there are a lot of errors and the errors are long and tedious to parse). Takes it less than a second.
      It has probably saved me hundreds of hours so far. That isn't garbage, not in my opinion.

  • @pepealexandre
    @pepealexandre 4 หลายเดือนก่อน +8

    "Human, Not Human" is a great short film about Amazon's Mechanical Turk ugliness.

  • @user-yl7kl7sl1g
    @user-yl7kl7sl1g 3 หลายเดือนก่อน +2

    You didn't mention it, but even Anthropic's Claude is humans giving the AI clear step-by-step procedures for doing different tasks. It's unlikely to scale.
    I was a big believer in the singularity, the AI self-improvement feedback loop, and the idea that current AI systems might scale up to AGI, 30 years ago when I encountered my first of many AI hype cycles (not the first AI hype cycle ever; I was too young for many of the much earlier ones). Each cycle we get more useful tools and hardware, and then another 10 to 20 years before the next hype cycle, where we get more useful adaptive tools and hardware, followed by an AI winter. AGI in 50 to 300 years is a more likely timeline.

  • @MechShark
    @MechShark 4 หลายเดือนก่อน +19

    AI is another symptom of our infinite growth model. The stocks must go up. It's pretty crazy.

  • @thinkingcitizen
    @thinkingcitizen 4 หลายเดือนก่อน +6

    My cousin is doing his PhD in Applied Math, and he mentioned he was getting a little worried that AI might produce groundbreaking research in his area before he and his colleagues get a chance to. I was like, bruh, GPT can't even solve a basic intro-to-algorithms problem and you're afraid it's going to find an algorithm that solves some esoteric Partial Differential Equation 😒? Even smart scientists are getting spooked by the hype, which I guess has been Altman's goal from the start. I can guarantee my cousin that if AI can't even reliably replace a frontend UI developer yet, there's no chance mathematicians have to be worried 😂

  • @dahlia695
    @dahlia695 4 หลายเดือนก่อน +3

    My dog has this joke and it goes like this: "bark bark whine bark whine bark bark". I just don't get it but the other dogs in the neighborhood tell me it's very funny.

  • @TfortLo-q8m
    @TfortLo-q8m 3 หลายเดือนก่อน +3

    Most programmers were not doing novel things... they were building the same web app with a different JS framework 😂 These are hopefully the programmers who will fall by the wayside.

  • @victoralfonssteuck
    @victoralfonssteuck 4 หลายเดือนก่อน +1

    Excellent channel, my friend. Very good. You're one of the first people I've seen talking about AI with real responsibility and skepticism. Thank you very much for the content. Now I don't feel alone anymore. ❤

  • @BBigg-kh7pz
    @BBigg-kh7pz 17 ชั่วโมงที่ผ่านมา

    I'm glad anytime a person makes a video, references sources, and lists further reading. The lack of citations outside academia is troubling and leads to word of mouth being equated with the truth. That, plus the lack of critical thinking and of a hint of skepticism, leads to bold claims and misguided decisions. Thank you for combating a growing problem.

  • @AI_Provider
    @AI_Provider 4 หลายเดือนก่อน +274

    AI is moving so damn fast and the people who are supposed to supervise it are just letting it happen. Three years ago there was no ChatGPT, and look at where we are now. I'm starting to believe life is really a simulation more every day lol. It does come with beautiful things that smart people can use to better express their creativity and increase productivity; I'm just a little afraid of what will happen to the people who are replaceable by the robots. Life is crazy, wonder what the next ten years will look like because of AI advancements. By the way, on my page I provide the best info on how to use AI to get up in life..

    • @cknd9794
      @cknd9794 4 หลายเดือนก่อน +1

      It's honestly concerning... I pray we slow it down ASAP because this is getting out of hand a little more every day. Your video was good, a lot of value for a first video, keep doing the great work.

    • @AI_Provider
      @AI_Provider 4 หลายเดือนก่อน

      @@cknd9794 Yup, it's getting crazier every day. Thank you

    • @GrzesXD
      @GrzesXD 4 หลายเดือนก่อน +7

      You might be surprised to hear that GPT-3 was released 4 years ago and was already flooding forums and Reddit with auto-generated posts.

    • @AI_Provider
      @AI_Provider 4 หลายเดือนก่อน

      @@GrzesXD That's crazy! I thought it went public like 2 years ago

    • @harbifm766766
      @harbifm766766 4 หลายเดือนก่อน +5

      It is a glorified chatbot... do not get excited.

  • @bnjiodyn
    @bnjiodyn 4 หลายเดือนก่อน +10

    In terms of evidence you need to look at the trend 10 years out.
    You can quibble about the exact timing of a given milestone but not the direction of this nor the general time frame.
    It’s not a direct line that’s easy to predict, progress hits roadblocks, but zoom out 10 years and it’s obvious what direction things are going.
    In 5-10 years will we have bots as good as people in all things? Probably not. But will we have AI agents/bots that are able to replace humans at some things - yeah obviously.

    • @SandraWantsCoke
      @SandraWantsCoke 4 หลายเดือนก่อน

      We are unlikely to have them in 50 years. All they do is predict the next most likely word in a sentence. They can't think. If something goes wrong, they don't "understand" it. This is why sometimes they spew nonsense and this is why you can't reason with it. It's not an intelligent being, it's just a machine that calculates which word it should give you next based on the input you gave it.

  • @joejoe-lb6bw
    @joejoe-lb6bw 4 หลายเดือนก่อน +5

    I riffed on an old quote: Lies, damned lies, statistics, and AI.

  • @nexusboyko
    @nexusboyko 4 หลายเดือนก่อน +1

    Found it very edifying to have actual research reveal truths underlying new AI developments and hype in general, so thank you for pointing me more in that direction.
    Any recommendations for finding these research papers/articles, like popular databases or outlets (for an "average" tech person)?

    • @jkgkgktruyf
      @jkgkgktruyf 4 หลายเดือนก่อน

      Google Scholar is a good starting point. Others that come immediately to mind are ProQuest, IEEE Xplore, and arXiv, but Scholar, to my understanding, searches across all of those resources and many more.

  • @benjohnson5897
    @benjohnson5897 4 หลายเดือนก่อน +2

    Thanks for this balanced and intelligent commentary. Where billions of dollars are at stake, it's worth having a healthy scepticism.
    I use AI a lot. I've built a chatbot that uses Llama3 as its LLM to answer questions about a 150-page legal document. It does well, but it's not "intelligent".
    I also use it extensively to help me with coding problems. I can say with certainty it doesn't truly understand my objectives with my coding projects, and while some of the assistance it provides is quite useful, a lot of it is guesswork. The worrying part is the confidence with which it provides bad information.
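    For anyone curious how a chatbot like that is typically wired up, the usual pattern is retrieval-augmented generation: chunk the document, embed the chunks, retrieve the most relevant ones for a question, and stuff them into the prompt. A minimal sketch, assuming hypothetical embed() and ask_llama3() functions rather than any specific library (not the commenter's actual code):

      # Rough sketch of retrieval-augmented generation over one long document.
      # embed(), ask_llama3(), and the chunking parameters are assumptions.

      def chunk(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
          step = size - overlap
          return [text[i:i + size] for i in range(0, len(text), step)]

      def embed(passage: str) -> list[float]:
          raise NotImplementedError  # any sentence-embedding model works here

      def ask_llama3(prompt: str) -> str:
          raise NotImplementedError  # e.g. a local llama.cpp / Ollama call

      def cosine(a: list[float], b: list[float]) -> float:
          dot = sum(x * y for x, y in zip(a, b))
          norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
          return dot / norm if norm else 0.0

      def answer(question: str, document: str, k: int = 4) -> str:
          chunks = chunk(document)
          chunk_vecs = [embed(c) for c in chunks]   # in practice, precompute once
          q_vec = embed(question)
          ranked = sorted(zip(chunks, chunk_vecs),
                          key=lambda cv: cosine(q_vec, cv[1]), reverse=True)
          context = "\n---\n".join(c for c, _ in ranked[:k])
          return ask_llama3(f"Answer using only this context:\n{context}\n\n"
                            f"Question: {question}")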

  • @SimeonRadivoev
    @SimeonRadivoev 4 หลายเดือนก่อน +4

    I would have to disagree. LLMs are great at memorizing data, and benchmark contamination seems like a super easy thing to do. It's like testing students on a test you have already released the answers to.

    • @InternetOfBugs
      @InternetOfBugs  4 หลายเดือนก่อน +5

      Many of the benchmarks keep a randomly selected portion of their questions private just to see if that's happening. There are some other techniques also used to try to avoid contamination. It's a big body of work, but this paper is a decent introduction: arxiv.org/abs/2311.09783v2
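      The simplest of those contamination checks is easy to sketch: flag benchmark questions whose n-grams already show up in the training corpus. This is a toy version under obvious assumptions (the corpus and benchmark lists are placeholders, and real detection, as in the paper above, is considerably more involved):

        # Toy n-gram overlap check for benchmark contamination.
        # `benchmark_questions` and `training_corpus` are placeholder inputs.

        def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
            tokens = text.lower().split()
            return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

        def contamination_report(benchmark_questions: list[str],
                                 training_corpus: list[str],
                                 n: int = 8,
                                 threshold: float = 0.5) -> list[str]:
            corpus_ngrams: set[tuple[str, ...]] = set()
            for doc in training_corpus:
                corpus_ngrams |= ngrams(doc, n)
            flagged = []
            for question in benchmark_questions:
                q_ngrams = ngrams(question, n)
                if not q_ngrams:
                    continue
                overlap = len(q_ngrams & corpus_ngrams) / len(q_ngrams)
                if overlap >= threshold:  # heavy overlap suggests the model saw it
                    flagged.append(question)
            return flagged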

  • @firesquid6
    @firesquid6 4 หลายเดือนก่อน +8

    I'd love to see a video where you make a steelman argument for the other side.

    • @gamesibeat
      @gamesibeat 4 หลายเดือนก่อน +4

      He doesn't need to; just watch anyone else and it's all rainbows and gumdrops.

  • @livb4139
    @livb4139 4 หลายเดือนก่อน +11

    7:30 I love this channel for this reason

  • @MrMichiel1983
    @MrMichiel1983 2 หลายเดือนก่อน +1

    Correction: LLMs are (sort of) intelligent. They trick you (currently, still) into thinking they are conscious. Although, if we were to feed the output of an LLM directly back into (half of) its inputs, then we should have this discussion again.

  • @tedbendixson
    @tedbendixson 4 หลายเดือนก่อน

    Now I can trace why we think so much alike. I briefly majored in physics in college before I ever went into computer science. A strong scientific background changes your perspective for a lifetime. It is immensely valuable.

  • @ddude27
    @ddude27 4 หลายเดือนก่อน +3

    I'm curious about your thoughts on using surveys as a main data source. I get flat-out annoyed with some publications (especially business-focused ones) because they never really paint a true picture of the topic being discussed.

  • @oracleofwater
    @oracleofwater 4 หลายเดือนก่อน +30

    The models are clearly getting better, but LLMs have an asymptote at human-level intelligence, because they can only reproduce the patterns present in the training data. That's why GPT-4o is a pivot: making it smarter is getting really hard, but voice and vision are semi-solved problems that can be tied in fairly easily.
    GPT-5 won't be that much smarter than GPT-4 if it's still an LLM, but they might make long contexts more efficient and push down the hallucination rate a good bit.

    • @vasiliigulevich9202
      @vasiliigulevich9202 4 หลายเดือนก่อน +8

      I think the obvious next step is to invest in data cleaning: instead of guys from Bangladesh, hire highly paid scientists to curate the training data. There is no need to add more garbage data; focus on eliminating the invalid parts.

    • @Hohohohoho-vo1pq
      @Hohohohoho-vo1pq 4 หลายเดือนก่อน +6

      Both humans and AI are limited by their current training data, but both can make new training data. AI just needs a way to verify that the training data it makes up is legit, the same way humans do: through testing.
      "Just an LLM"... stop calling GPTs that.

    • @nawabifaissal9625
      @nawabifaissal9625 4 หลายเดือนก่อน +4

      When they add agency to LLMs, they'll DRASTICALLY improve. With the Q* leaks and the amount of compute getting better as well, LLMs are going to be insane. They haven't reached a plateau; I'm sure GPT-5 is going to be a lot better and smarter than GPT-4.

    • @oracleofwater
      @oracleofwater 4 หลายเดือนก่อน +2

      The problem is that it's very hard to correctly evaluate extreme intelligence, to even the above average mind, genius and bullshit can be almost indistinguishable in a lot of areas.

    • @vasiliigulevich9202
      @vasiliigulevich9202 4 หลายเดือนก่อน

      @@nawabifaissal9625 But all the evidence with Devin and GitHub Workspaces is to the contrary? Agency is present, results are absent.

  • @Teting7484f
    @Teting7484f 4 หลายเดือนก่อน +8

    People like Gary Marcus have been fighting the fight against the AI hype.

    • @InternetOfBugs
      @InternetOfBugs  4 หลายเดือนก่อน +6

      Yeah, I've cited him before.

  • @danielpintard7382
    @danielpintard7382 12 วันที่ผ่านมา

    I appreciate providing sources, but is it possible to include links to the sources in the description (or at least just the papers shown on screen)?

  • @rayh3899
    @rayh3899 3 หลายเดือนก่อน +2

    +1 for the Los Pollos Hermanos shirt!

  • @chriskingston1981
    @chriskingston1981 4 หลายเดือนก่อน +5

    I've done PHP and built websites for 20 years, but not very professionally. In recent months I switched to Laravel. Now, with ChatGPT, I can build my websites way faster than before. I know what steps I want to take, but I used to have to figure out the code with Google and adapt it. With ChatGPT I can build way faster without figuring all of that out…
    Now I use Aider for Laravel, and my productivity has also increased a lot.
    If I give the AI small tasks with multi-file edits in Aider, it gets them right in one shot about 50% of the time. Debugging also works very well.
    I see it more as an overall natural language for describing what I want the code to do.
    If I give it bigger multi-step tasks it will fail; I see it always failing on planning.
    For me, AI is just an easier programming language. I can imagine more people being able to build things more easily.
    But I don't see it replacing people.

    • @pedromoya9127
      @pedromoya9127 4 หลายเดือนก่อน +1

      exactly the same experience

  • @kavish369
    @kavish369 4 หลายเดือนก่อน +4

    You should definitely have a talk with Ed Zitron. You're probably gonna like it.

    • @larsfaye292
      @larsfaye292 4 หลายเดือนก่อน +1

      Hell yes!

  • @Lindsey_Lockwood
    @Lindsey_Lockwood 4 หลายเดือนก่อน +5

    It's important to realize that there are legitimately huge breakthroughs in AI development occurring AND, simultaneously, overhyping of bad AI projects and outright scams by tech investors and people who got burned in the NFT/crypto exchanges and are looking for a bailout investment. Yes, we do live in a world where both things can occur at the same time; it doesn't have to be all hype or all AGI.

    • @Jonassoe
      @Jonassoe 3 หลายเดือนก่อน

      Yeah, the chatbots we get to play with on our phones are not the cutting edge machine intelligence programs that the tech giants are developing with government funding.

  • @slghtmedia
    @slghtmedia 3 หลายเดือนก่อน +1

    Man, I subscribed because of that intro format alone lol, really caught me off guard in the best way.

  • @phil-l
    @phil-l 2 หลายเดือนก่อน

    Subscribed. Great video, and yes, source URLs should be required for all videos using citations.

  • @therainman7777
    @therainman7777 4 หลายเดือนก่อน +31

    The new “voice mode” is fundamentally different than the existing GPT-4 voice mode, and is in fact not a voice mode at all; it is a new, natively multimodal model that we, the general public, do not have access to yet. It is quite a significant change/upgrade.

    • @mechanicalmonk2020
      @mechanicalmonk2020 4 หลายเดือนก่อน +11

      Hey you didn't link something easily google-able therefore you're wrong

    • @veqv
      @veqv 4 หลายเดือนก่อน +7

      I kind of love that you're being sincere. You realize that the demo was using 4o (that new, natively multimodal model) with a new *voice model* right? Think about it - how hard is it to just snap a picture and ask a question - we already have very quick models like moondream that run on consumer hardware in basically real-time. It's pretty silly to think that there's anything special here, other than emotional manipulation of the consumer.

    • @KevinJDildonik
      @KevinJDildonik 4 หลายเดือนก่อน +4

      "It is a significant upgrade" - Source needed, and Sam Altman isn't a source

    • @therainman7777
      @therainman7777 4 หลายเดือนก่อน

      @@KevinJDildonik My source is that I work with these models for a living and actually understand what they’ve done from a technical perspective. What is your source?

    • @davea136
      @davea136 4 หลายเดือนก่อน +6

      "I have an informed and definitive opinion about this thing I cannot access and could not have possibly tested."
      Thanks, now tell us how your cold fusion blockchain makes carbon credits to fund perpetual motion machines possible.

  • @Kevin-us3qb
    @Kevin-us3qb 4 หลายเดือนก่อน +3

    I think, in the absence of generalization in LLMs, the tech industry is trying to "overfit" the model with so much data that there is no space left for it to hallucinate. Any agent will seem sentient and intelligent with enough data and memorization of all possible scenarios in a bounded space.

    • @DielsonSales
      @DielsonSales 4 หลายเดือนก่อน

      Maybe that would actually make the models more useful: tuning them for what the average person likes to ask.

    • @oskarjung6738
      @oskarjung6738 4 หลายเดือนก่อน

      I don't know bro, whatever you wrote seems like hallucination to me. Even ChatGPT can write better hot garbage of hype words than you.

    • @Hohohohoho-vo1pq
      @Hohohohoho-vo1pq 4 หลายเดือนก่อน

      GPT-4o can actually reason; it isn't just using memory. But reasoning does not make you conscious.

    • @Hohohohoho-vo1pq
      @Hohohohoho-vo1pq 4 หลายเดือนก่อน

      @@LiveType Therefore most humans overfit and are not intelligent because they always talk about the same things and cannot generalize.

    • @Titere05
      @Titere05 4 หลายเดือนก่อน +1

      @@Hohohohoho-vo1pq This is wrong. GPT-4o has no more reasoning capabilities than previous OpenAI models. Where do you get this? And please don't say OpenAI said so. At the most it can simulate reasoning better than GPT-3

  • @SlyNine
    @SlyNine 4 หลายเดือนก่อน +8

    There are two extremes. One: AI is nothing but a word predictor. The other: AI is better than humans at everything.

  • @nikluz3807
    @nikluz3807 4 หลายเดือนก่อน +2

    Technically yes, you don’t need “as many” programmers, but you definitely still need programmers.

  • @144_I
    @144_I 3 หลายเดือนก่อน +1

    Great video. I will definitely be coming back to your channel for more clarity on big tech. You just earned a new sub.

  • @McNyloLT
    @McNyloLT 4 หลายเดือนก่อน +3

    Here’s what I seriously predict will happen:
    Companies will buy into the hype.
    Devs will be laid off more and will go through a stupid hard time in the job market.
    Programs will get MUCH worse. More bugs. More vulnerabilities. MANY more cyber attacks.
    The companies will go on hardcore damage control while the devs that remained in those companies complain on the daily about how they need to hire more devs again.
    The companies quietly go on a hiring spree for more developers while never acknowledging the damage they caused because of their insatiable greed and gullibility.
    At the end of the day, I really think the future is rough for devs soon, but it’s only a slight road bump before devs can gracefully come back and scream I told you so. I hope I’m wrong, but I wouldn’t put it past these companies to try it.

    • @IvanBerdichevsky
      @IvanBerdichevsky 4 หลายเดือนก่อน +2

      Amen brother, I'm a jobless Senior Fullstack Software Engineer and may your words become truth 🙏

    • @gamesibeat
      @gamesibeat 4 หลายเดือนก่อน

      Well the problem is many of those devs will move onto other careers and kids have been told to not learn coding anymore.

  • @lfarrocodev
    @lfarrocodev 4 หลายเดือนก่อน +22

    The Sora balloon video was fixed with VFX to make the balloon color consistent and remove some artifacts

    • @InternetOfBugs
      @InternetOfBugs  4 หลายเดือนก่อน +9

      The output was presented to the public by OpenAI without disclosing that it had been reworked by a VFX shop. The message given to the public was "this is the kind of output you will be able to do with our tools" not "this is the kind of output you will be able to do with our tools if you hire a professional VFX shop to clean up the output."

    • @InternetOfBugs
      @InternetOfBugs  4 หลายเดือนก่อน +4

      No, it was made with weeks of "a combination of traditional filmmaking techniques and post-production editing"
      "This included manually rotoscoping the backgrounds, removing the faces that would occasionally appear on the balloons."
      Sources:
      # "Turns out the viral 'Air Head' Sora video wasn't purely the work of AI we were led to believe"
      www.techradar.com/computing/artificial-intelligence/turns-out-the-viral-air-head-sora-video-wasnt-purely-the-work-of-ai-we-were-led-to-believe
      # "Remember the ballon head Sora video - it wasn’t all AI generated after all"
      www.tomsguide.com/ai/ai-image-video/remember-the-ballon-head-sora-video-it-wasnt-all-ai-generated-after-all
      # "Uncovering The Reality Of The Sora Film Hit: Allegations Of Deceptive Special Effects Manipulating The Audience"
      www.gamingdeputy.com/uncovering-the-reality-of-the-sora-film-hit-allegations-of-deceptive-special-effects-manipulating-the-audience/

    • @lfarrocodev
      @lfarrocodev 4 หลายเดือนก่อน

      The AI hype doesn't have time for pesky ethics!

  • @Aaron-mg3zw
    @Aaron-mg3zw 4 หลายเดือนก่อน +7

    the amount of research you do before a video is insane. thanks for sharing this value with us for free. Love ya man

  • @kenji214245
    @kenji214245 2 หลายเดือนก่อน +1

    LLMs are not the only systems developed for coding, though... there are several smaller "AI"-focused companies developing AI-like systems as task specialists, in areas like architecture, genome analysis, coding, psychology analysis, economics, and warfare/priority-targeting analysis and assistance, which some militaries are already testing.
    And we also have stuff like pizza sales prediction, which is pretty dang accurate apparently. 😄
    LLMs are just the big mainstream generalist AIs that are generally accessible; they only show generalist progress.
    I worry that the big names like ChatGPT will ruin the more serious AI development, since their LLMs made the mistake of trying to force AI development through raw data instead of a math/logic base first, and through quantity-based data instead of quality-based data, which they would now have to start over entirely to fix...

  • @dawid_dahl
    @dawid_dahl 4 หลายเดือนก่อน +1

    I follow a few AI hype bros, and I’m very grateful for this channel for balance. 🙂

  • @mehow357
    @mehow357 4 หลายเดือนก่อน +9

    For the first 3 minutes I was really skeptical about this material, because there's waaay more than that, including Copilot, which is really boosting coders' work.
    Buuuut... later on I got the message this video was giving: we don't know, they lied and are likely to lie more, so let's base things on facts and evidence. Perfect video, motivating me to stop for a minute and start thinking 👍
    Nice.

    • @InternetOfBugs
      @InternetOfBugs  4 หลายเดือนก่อน +5

      For what it's worth, the data on CoPilot is mixed, too (although I use it several times a week myself). See "Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality" www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality

    • @mehow357
      @mehow357 4 หลายเดือนก่อน

      I just read the abstract and I fully agree with the threats mentioned there (though I have no knowledge of the exact numbers, so I have to take them at face value). I can fully imagine what can happen in less professional companies when they work on non-critical systems. But the same was happening 5-10-20 years ago; professionalism varies. Personally, I can't imagine a professional project made only by juniors (with Copilot), without static code analysis, code reviews, and proper unit/integration test coverage. From my personal experience: currently one of the projects I'm leading (quite a critical one: online payment services for the whole company group, with a large number of integrations, business processes, etc.) is made by devs who are using Copilot, and the metrics are really good. I dare to claim that they are significantly better than usual. Really low code duplication (way less than 1%), practically no unused code (do you remember the times when devs were making functions because "maybe someone will find them useful"? 🤣), high code coverage (over 80%), no major issues (mostly idiotic ones, like naming conventions), etc. I do random manual code quality checks; practically no issues there. I think the real question should be: who is using the tool? If you give the knife to a chef, he will prepare excellent meals; if you give the same knife to a kid, he will accidentally cut himself.
      PS.
      I have 25+ years of extremely intense experience (I'm an architect who got to the position the old-school way: through proven quality, knowledge, speed, and experience).

  • @AratechRecordsLtd
    @AratechRecordsLtd 4 หลายเดือนก่อน +5

    People don't realise that by paying coders less, they will go to companies that pay better.
    The companies that pay better are usually the ones that know the potential, and the work for coders is nowhere near finished.

    • @michaelnurse9089
      @michaelnurse9089 4 หลายเดือนก่อน +4

      Coders are the only profession that earns high salaries after 3-4 years training. To get the same in medicine requires 10-15 years. The risk is that coder salaries return to normal.

    • @jayoolong279
      @jayoolong279 4 หลายเดือนก่อน

      @@michaelnurse9089oh yeah, and it’s been returning to normal everywhere else in the world besides the US

  • @wujekjutub
    @wujekjutub 4 หลายเดือนก่อน +11

    Based on your T-shirt, is your boss Gustavo Fring?

  • @Diachron
    @Diachron 4 หลายเดือนก่อน +1

    Nice to have a more sober voice in the conversation, even if it's drowned out by the hype doctors. My personal feeling has been that we need a deeper discourse around the more narrow AI that has been driving the culture-warping effects of social media for over a decade now. LLMs are fascinating, but content personalization sold to the highest bidder has spawned entire worlds of alternate facts that will only continue to cripple human cooperation.

  • @PlanetaJuegosPC
    @PlanetaJuegosPC 4 หลายเดือนก่อน

    Dude, I'm a PO at an e-commerce website, and all I know is that I never studied JavaScript; I just had some decent knowledge of HTML and CSS. Since ChatGPT, I've been able to introduce a lot of complex features to the front end of the website; sometimes even my brother, who is a real developer, gets surprised at something I implemented on the website.
    You could argue that, yes, I could have achieved the same with a couple of Google searches, since the information the bot is giving me is probably on the internet, but the AI makes it super accessible.
    So even though it's not technologically impressive to those who understand the inner workings, current AI is already having a huge impact on society, and it will keep doing so.
    Especially because it can be connected to almost everything that can be done on a computer or cellphone.

  • @anxiny3478
    @anxiny3478 4 หลายเดือนก่อน +3

    I'm using AI for daily tasks like writing summaries of the work I have done and writing unit tests. As long as it helps me get my paycheck, I'm fine with the product.
    For me, it's just an extra tool to get the job done.

  • @BluntStop
    @BluntStop 4 หลายเดือนก่อน +8

    AI seems like the perfect thing to replace executives, and that's it.

    • @roganl
      @roganl 4 หลายเดือนก่อน +8

      Replace one mob mentality, group think, committee brained tribe with an equally inscrutable matrix of linear algebra toting copycat GPU pipelines - Perfect! I Love it.

    • @InternetOfBugs
      @InternetOfBugs  4 หลายเดือนก่อน

      @@roganl @BluntStop Have you read marshallbrain.com/manna1 ? It's SciFi, but well done, and you might find it entertaining.

  • @mk3suprafy
    @mk3suprafy 4 หลายเดือนก่อน +9

    It isn't going away. It's already useful. It doesn't have to do your job. Can it help you learn faster? Can it help you organize, summarize, edit, track... It's a force multiplier at least.

    • @michaelbarker6460
      @michaelbarker6460 4 หลายเดือนก่อน +4

      Yeah this is my thought exactly. I guess I don't know what people are expecting but right now I just see it as a very useful tool.

    • @InternetOfBugs
      @InternetOfBugs  4 หลายเดือนก่อน +13

      I never said it was going away, and I never said it wasn't useful. Just that it's not nearly as useful or groundbreaking as the companies are trying to convince us it is. I like GAI, and I use it quite often, but I hate being lied to.

    • @Ian-zj1bu
      @Ian-zj1bu 4 หลายเดือนก่อน +1

      It's Google on steroids.

    • @Titere05
      @Titere05 4 หลายเดือนก่อน +1

      I, eh, kinda agree? I mean, as a productivity tool these models are... well... useful sometimes. My thinking is, when you have to fact check everything they produce, are they really THAT useful? As I said to a lawyer the other day, maybe they're as useful as a sloppy paralegal. As a developer I have the advantage that I can instantly check the code output and see if it works or not, but other areas might not be so lucky. Copilot's code completion though, that is sweet.

    • @MrDgf97
      @MrDgf97 4 หลายเดือนก่อน +1

      In my experience, I end up wasting more time with these tools than otherwise. It gives me so many either useless or straight up wrong tips when trying to do stuff I haven't done before. I end up learning faster just by googling.
      Even if it works well for things I already know (so I can give better prompts), the resulting tips can still be wrong a lot of the time. For code, I have to heavily change it pretty much always, so I end up wasting more time doing that rather than just coding the thing myself.

  • @asofotida443
    @asofotida443 4 หลายเดือนก่อน +2

    The company where I work has been selling rule-based compliance software, labelling it as "the only AI-powered ...blah blah blah" for the last 5 years. 😂

  • @NobodySpecialFinance
    @NobodySpecialFinance 3 หลายเดือนก่อน

    This was incredibly well-researched and nuanced. I’m amazed that the whole world is just glossing over the history of bold-faced lying among these AI ‘rock stars.’ We need fewer cheerleaders and more skeptics asking the tough questions. Bravo sir! 🍻

  • @augustuslxiii
    @augustuslxiii 4 หลายเดือนก่อน +6

    Anyone who says LLMs are on par with NFTs needs to be banished from conversations, along with the people who think the Machine God is dropping in the near future.

  • @SsaliJonathan
    @SsaliJonathan 4 หลายเดือนก่อน +6

    I love the Los Pollos tee!

  • @Daniel-Six
    @Daniel-Six 4 หลายเดือนก่อน +11

    It's not that LLM's are sentient... it's that most humans really aren't either. _Huge_ proportion of NPCs in this sim.

  • @lovellb5839
    @lovellb5839 2 หลายเดือนก่อน

    Just returned from an AI conference….I didn’t hear a single lie. You just earned a new sub. Keep up the good work 👏🏽

  • @RG-ik5kw
    @RG-ik5kw 4 หลายเดือนก่อน +1

    Nice to see this video. When I saw your video about Devin, I felt relieved about my job as a programmer, but when I saw the Devin team being the darling of M$ at the Build event, with MS saying they are partnering with them to bring R&D into the future, I started worrying again. What do you think about that?

  • @gianlucasamaritani8362
    @gianlucasamaritani8362 4 หลายเดือนก่อน +6

    GPT4o has been a "multimodal redo" of GPT4. It's way better on images. Tool use has been excellent with 4 and 4o. These models can now hear, see, write, speak and use tools better and faster than humans in some sectors. To say they are not intelligent is completely out of this world. Only thing current models lack is an "internal state", but recurrent networks have been here since long before transformers, they are just way more expensive and hard to train.

  • @michaelnurse9089
    @michaelnurse9089 4 หลายเดือนก่อน +18

    Straw man alert. 4o was never supposed to be better. It was always supposed to be a trimmed-down version that is faster and cheaper to use with roughly the same accuracy. 5 will be better, but much slower again, obviously.

    • @TheRealUsername
      @TheRealUsername 4 หลายเดือนก่อน

      There's a recent theory, backed by many researchers, about a "platonic representation" of data in LLMs' latent space (basically where their understanding of the real world is represented). It seems that no matter the modality, similar concepts (the word "dog" and an image of a dog, for example) are represented the same way, and despite different scales (they observed the representations in completely different models, with different parameter counts, trained on different modalities). That basically means there's a common limit to the patterns a model can learn while training, and it suggests that LLMs can't have a better understanding than humans of core academic domains. That makes me predict that OpenAI isn't building AGI, or rather a real AGI; Sam Altman's definition of AGI is a "median human worker". GPT-5 will for sure be smarter across the board, but the fact that OpenAI focuses on programming and reasoning says a lot about their long-term goals. It somehow correlates with a recent LeCun statement that "LLMs are being developed as products, not as research; we already know their limitations and they won't reach AGI." DeepMind, too, is more on the research side, building models that can outperform humans in scientific research and discovery.

    • @go2viraj
      @go2viraj 4 หลายเดือนก่อน +4

      So when something is supposed to be faster and cheaper, it is not better?

    • @alphaomega325-d2s
      @alphaomega325-d2s 4 หลายเดือนก่อน

      @@go2viraj I think @michaelnurse9089 is saying that GPT-4o is better in that it is faster and cheaper, but GPT-5 is going to be more accurate in its responses, so it is going to be better overall than GPT-4, because in the AI field accuracy is more important than speed and size.

    • @rogue1049
      @rogue1049 4 หลายเดือนก่อน

      @@alphaomega325-d2s How can AI progress past the level of a mobile-assistant-type app if it's slow? Many realistic real-world situations that require precision also require speed (i.e. self-driving).
      And I'm also a bit sceptical of how much more precise and specific it can become, and at what price.
      Further, just as an anecdote, I feel like 3.5, 4, and now 4o have all been getting dumber as time goes by. There are situations where it's so dumb I think it's a joke.

  • @James-hb8qu
    @James-hb8qu 4 หลายเดือนก่อน +3

    I've headed up engineering organizations in Silicon Valley and have hired lots of engineers.
    LLMs are a lossy and sparse text compression tool. The text they have compressed is based on what human engineers have done. Thus, by their nature, LLMs will always lag human knowledge.
    As always, engineers who innovate new technology will be in great demand.
    Engineers who just push buttons to code without innovation will be, as has always been the case, less secure in their jobs.

  • @amanadhav8156
    @amanadhav8156 4 หลายเดือนก่อน +1

    You need 10 mill views. Thank you for shedding light on this

  • @ufo_ninja
    @ufo_ninja 4 หลายเดือนก่อน +1

    Chalmers: good lord what is happening in there!?
    Skinner: human level AI
    Chalmers: Um.. artificial general intelligence, that can understand and interpret complex problems, built right here into this device?
    Skinner: yes
    Chalmers: may I see it?
    Skinner: no