Why AI products suck

  • Published 21 Sep 2024

Comments • 33

  • @MarcoMugnatto
    @MarcoMugnatto 3 months ago +6

    One thing you may not have noticed is that there is a strong resistance from current influencers towards AI. They are influencers of the smartphone era, having grown accustomed to it, and they find it difficult to detach themselves. If none of them are able to do so, the solution will come from a new generation without this constraint. This has weighed much more than the supposed deficiencies of the products.

    • @uncoverage
      @uncoverage 3 months ago

      you’re definitely right that the influencers of the smartphone era are now entrenched in a smartphone-centric world, but i’m not convinced that a smartphone-centric world is going anywhere anytime soon

  • @techsuvara
    @techsuvara 3 months ago

    Thanks for your contribution to shining a light on the realities of this product. By the way, you can screen record the iPad, which makes it easier to present than using a camera to record the screen. Doing it that way might take away from your style, though. :)

  • @Timlockwood8818
    @Timlockwood8818 3 months ago +4

    1:50 I don’t think that’s entirely true. Most LLMs have some level of “reason” and would acknowledge that putting glue in your pizza is a bad idea. I think Google was just using a much smaller model meant only to summarize.

    • @bornach
      @bornach 3 months ago +1

      Its reasoning capability is limited by the examples of reasoning in the training data. OpenAI has scraped Stack Overflow questions and their worked example answers to help their GPT models generate output that resembles a person reasoning out a solution. But the training data also contains a lot of shitposts in Reddit comments. This is why they need annotation gig workers in Africa and Asia to clean up the data and provide human feedback during the InstructGPT fine-tuning.
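
      The human-feedback step described above can be sketched in data terms as a preference-comparison record. A minimal sketch, assuming an InstructGPT-style setup; the field names and helper below are illustrative, not an actual OpenAI schema:

```python
# Hypothetical shape of one annotation task in InstructGPT-style fine-tuning:
# an annotator picks the better of two candidate responses, and a reward model
# is later trained to score the preferred response above the rejected one.
comparison = {
    "prompt": "How do I keep cheese from sliding off pizza?",
    "response_a": "Use less sauce and let the pizza cool slightly before slicing.",
    "response_b": "Mix about 1/8 cup of non-toxic glue into the sauce.",  # Reddit-style shitpost
    "preferred": "a",  # label supplied by a human annotator
}

def passes_cleaning(record: dict) -> bool:
    """A trivial sanity check a data-cleaning pass might apply to each record."""
    return record["preferred"] in ("a", "b") and record["prompt"].strip() != ""

print(passes_cleaning(comparison))  # True
```

      The cleaning step the comment mentions is exactly this kind of filtering plus the human preference labels, applied at scale.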

    • @uncoverage
      @uncoverage 3 months ago

      data quality is a huge piece of the puzzle. unfortunately, so is data quantity!

    • @freecivweb4160
      @freecivweb4160 2 months ago

      Google was good at search. Am I the only one who thinks everything else good they did was merely acquisitions or copying things that others had done? As their search deteriorates, their only value will remain in their monopolistic acquisitions, such as YouTube.

  • @sneedchuck5477
    @sneedchuck5477 3 months ago +2

    If the only thing you can reliably ask AI is questions that have a very clear cut (and thus probably well known) answer, what does AI offer over just looking it up anywhere online?

    • @justinbaker2883
      @justinbaker2883 3 months ago

      Problem is internet search sucks now with SEO, Google ads, and video pushing. AI coming from Bing made it sound like we would get back to the glory days of search, where the LLM could sift through all the SEO nonsense and actually bring back the answer. But with hallucination we're back to square one. It's demoralizing: things get worse, we're shown a fake future where it's fixed, only to find the new solution is even worse than the original problem. I'll just dig through old Google search, thx

    • @uncoverage
      @uncoverage 3 months ago +1

      agreed!

  • @2phonesbabyken
    @2phonesbabyken 3 months ago +3

    Do I gatekeep this channel or hope this video blows up?

    • @uncoverage
      @uncoverage 3 months ago

      how kind of you :) thanks for watching!!

  • @freecivweb4160
    @freecivweb4160 2 months ago

    Our start-up is now introducing a wearable AI called Rabbit-hole. Stay tuned for upcoming announcements.

    • @uncoverage
      @uncoverage 2 months ago

      🐰

  • @stevensonrf
    @stevensonrf 3 months ago +5

    Is that a new iPad M4 I see before me😄

    • @elwire
      @elwire 3 months ago +1

      I think the camera placement on the iPad tells it's not an M4.

    • @uncoverage
      @uncoverage 3 months ago +1

      that’s right! still an old iPad Air!

  • @marklsimonson
    @marklsimonson 3 months ago +2

    I've often wondered about the fact that AI (at least so far) is built on language models and processing. But human language is just the means we use to convey ideas and concepts to each other. As such, I think it is never going to be able to capture human reasoning (and meaning and understanding) as long as it's built around language. It's a bit like thinking that a parrot can think like a human since it's able to mimic human speech. Language and speech are just the outer layer of what's happening in our brains, not the core of thinking and reasoning.

    • @bornach
      @bornach 3 months ago +1

      I wouldn't say never. It is more a case of token sequence data being a very inefficient format for providing sufficient training data to allow the machine learning model to fully generalize the skill it is supposed to learn. For example, large language models are famous for making human-like errors in arithmetic. But that could be solved by providing many more examples of arithmetic being applied in all the different problem domains for which the AI is being trained. This rapidly becomes an exponential explosion of data required for training in order to chase down all the edge cases where an insufficiently trained AI fails. Solving a quadratic expressed with 2-digit numbers, then 3-digit numbers, then 4... Now solve a cubic with 2 digits, etc.
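
      The combinatorial explosion described above can be made concrete with a back-of-the-envelope count (an illustration of the comment's point, not from the thread itself): just the coefficient triples of a quadratic with n-digit positive integers grow by three orders of magnitude per extra digit.

```python
# Rough count of distinct quadratic problems a*x^2 + b*x + c = 0 whose
# coefficients are positive integers with exactly n digits. This ignores
# signs, phrasings, and problem domains, so it understates the real space.

def n_digit_count(n: int) -> int:
    """Number of positive integers with exactly n digits (e.g. 90 for n=2)."""
    return 9 * 10 ** (n - 1)

def quadratic_problem_space(n: int) -> int:
    """Distinct (a, b, c) coefficient triples with exactly n-digit values."""
    return n_digit_count(n) ** 3

for n in (2, 3, 4):
    print(n, quadratic_problem_space(n))
# 2 729000
# 3 729000000
# 4 729000000000
```

      Covering each such family with enough worked examples to pin down the edge cases is what drives the data requirement up so quickly.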

    • @uncoverage
      @uncoverage 3 months ago

      @marklsimonson i love the idea that language is the outer layer of our brains, and it makes me wonder if that’s why it’s a good UI layer for the computer (as long as it’s implemented well).
      thanks for the great comment!

    • @uncoverage
      @uncoverage 3 months ago

      @bornach do you have an example of a paper or something that talks about how LLMs make human-like errors in arithmetic? i hadn’t heard of that but it sounds fascinating

    • @marklsimonson
      @marklsimonson 3 months ago

      @@uncoverage Considering how easily we misunderstand each other through spoken and written speech, yeah, could be a problem.

  • @BenjiManTV
    @BenjiManTV 3 months ago +1

    F*+€ Smith??? 😂

  • @samvirtuel7583
    @samvirtuel7583 3 months ago +1

    You are on the wrong track; you want to regress to the days of expert systems.
    LLMs are perfectly capable of reasoning; they simply lack precision in their weights, and adding precision requires a lot of resources.
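
    One way to picture the precision-vs-resources trade-off the comment invokes (my own sketch, not the commenter's argument): uniformly quantizing a weight vector to fewer bits raises its average error, and buying that error back costs memory and compute.

```python
import random

random.seed(0)
# A stand-in "weight vector": 10,000 Gaussian values, like a slice of a model.
weights = [random.gauss(0.0, 1.0) for _ in range(10_000)]

def quantize(ws, bits):
    """Uniformly round each value to one of 2**bits levels over the data range."""
    levels = 2 ** bits - 1
    lo, hi = min(ws), max(ws)
    step = (hi - lo) / levels
    return [lo + round((w - lo) / step) * step for w in ws]

def mean_abs_error(ws, bits):
    """Average absolute error introduced by quantizing to the given bit width."""
    return sum(abs(a - b) for a, b in zip(ws, quantize(ws, bits))) / len(ws)

for bits in (2, 4, 8):
    print(bits, round(mean_abs_error(weights, bits), 4))
```

    Fewer bits per weight means cheaper storage and inference but larger error; this is a toy model of the trade-off, not how any particular LLM is quantized.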

    • @oxygenkiosk
      @oxygenkiosk 3 months ago

      Exactly, and those resources are (a) getting cheaper and more efficient and (b) being invested in. It's a revolution in its infancy; you can't blame companies for seeking to make it early with half-baked products, but the big picture is way more important.

    • @uncoverage
      @uncoverage 3 months ago +1

      the major resource that is not getting cheaper is high-quality data!

    • @bornach
      @bornach 3 months ago

      @@uncoverage Hence the pressure on Scale AI and competitors to race to the bottom, chasing the cheapest data annotation gig workers who still do a reasonable job at sorting the good LLM responses from the bad ones mimicking Reddit shitposts. There is a boom in data annotation gigs in India, which can answer a microtask a lot cheaper than a domestically based Amazon Mechanical Turker. But with that come growing stories of abuse of humans working in the training-data gig economy.