Anthropic Does The Unthinkable with Haiku 3.5

Comments • 70

  • @el_arte
    @el_arte 3 months ago +28

    They are not in a position to compete with Facebook, Google or OpenAI on price. They know it and everyone knows it.

    • @holdthetruthhostage
      @holdthetruthhostage 3 months ago +2

      That's a lie. They are literally working with Amazon, and they have always overpriced.

    • @aculz
      @aculz 3 months ago +1

      @holdthetruthhostage Agree, AWS mentioned. Always overpriced compared to the others.

  • @drhxa
    @drhxa 3 months ago +4

    The price honestly makes a lot of sense for me, because it's much better than something like 4o-mini or Gemini 1.5 Flash but much cheaper than something like 4o or Sonnet 3.5. There was definitely a sweet spot there that I would have loved a few months ago, and I'm very glad they delivered it now rather than waiting for engineering to bring prices down further.
    I'd imagine it's a bigger model / takes up more VRAM than 4o-mini or Flash. Plus the Claude models do have something special that others don't.

    • @maraisdekker2415
      @maraisdekker2415 3 months ago +1

      Time will tell if it becomes very popular. It's as good as Opus, which is great enough, but way faster and cheaper relative to that.

    • @MudroZvon
      @MudroZvon 3 months ago

      Did you test it? It's a piece of crap.

    • @maraisdekker2415
      @maraisdekker2415 3 months ago

      @ Works pretty well.
      Probably underrated.

  • @micbab-vg2mu
    @micbab-vg2mu 3 months ago +15

    The price is a joke! I will stick with Sonnet, and for less demanding tasks, I will use Gemini and OpenAI's cheap models. :)

    • @I2ealTuber
      @I2ealTuber 3 months ago +1

      I don't get your issue. Just use Haiku 3 if it is too expensive for you. It's odd logic to complain about something when the same company has different options.

  • @Saif-G1
    @Saif-G1 3 months ago +9

    Yes, please test the Claude agent.

  • @mz8755
    @mz8755 3 months ago +8

    I do pricing for a living. This time it's priced so as not to divert revenue away from Sonnet.

    • @ickorling7328
      @ickorling7328 3 months ago

      Can you recommend study material on pricing an AI model-serving and data-product distribution platform as a service? I'm in consulting, ex-marketer, looking to explore this niche topic. Thanks!

  • @el_arte
    @el_arte 3 months ago +6

    Also, when it comes to true frontier models, there may not be a market for more than three, as people systematically gravitate towards the top three.

  • @OscarTheStrategist
    @OscarTheStrategist 3 months ago +5

    For true agentic workflows, you need Sonnet-level intelligence at the current Haiku pricing. It is unfortunate, but we can't do anything about it. Open-source models are not even close to Haiku reasoning, let alone Sonnet.
    Still rooting for Llama to get better and better 🔥

    • @i_accept_all_cookies
      @i_accept_all_cookies 3 months ago

      Agreed. Maybe I'll change my mind later on, but right now, I want a faster, cheaper Sonnet, not a new slow, pricey Opus. Hopefully this new Haiku might fill that niche. But yeah, I still want a better Llama, regardless.

  • @DanielLeachTen
    @DanielLeachTen 3 months ago

    Thanks as always, Sam.

  • @neurofemschool
    @neurofemschool 2 months ago

    New Claude user here, and my biggest complaint is the lack of connection between conversations. Creating separate projects, love it… but what's the point if it's a new person for every chat? As a Gemini, that's so Gemini of Claude.

  • @DogeDelivers
    @DogeDelivers 3 months ago +4

    Sigh of relief here. You can still use the original Haiku 3 for cheap bulk LLM operations via the API at the old pricing. At least for now!

    • @MudroZvon
      @MudroZvon 3 months ago +1

      Haiku 3 was awesome. I hope they don't kill my beloved Haiku 3. The new one is a piece of garbage.

  • @paulward5701
    @paulward5701 3 months ago +2

    How is that opening statement in any way true?
    The more advanced the model, the more compute it seems to take to run.
    More compute costs more.

  • @thenoblerot
    @thenoblerot 3 months ago +6

    I'm really bummed. I like Claude and want to see Anthropic succeed. I hope they get the funding they need and have blessed training runs.
    I'm just a hobbyist, so having decent intelligence available for frivolous projects at a low price was great. I'm going to start migrating a few projects to Gemini/OpenAI. Google has free fine-tuning IIRC, so maybe I'll fine-tune on Claude outputs 😂

    • @JustRollin
      @JustRollin 3 months ago

      Gemini is too racist, OpenAI is too restrictive and, unfortunately, Claude is less open than it used to be.

    • @ickorling7328
      @ickorling7328 3 months ago

      Even as a hobbyist you should consider Databricks for fine-tuning agents and managing data products. But vendor lock-in may be an issue that only big business should seek to solve right now. ML ops across vendors requires a solution more agnostic than Unity Catalog for model management across API providers and data silos.

  • @BrandonFoltz
    @BrandonFoltz 3 months ago +4

    I've pretty much moved everything over to Gemini. I think in the long run, Google has very real advantages (robust research, an application suite, Android, YouTube, a safety team, access to almost limitless data, TPUs, massive context windows, unique API functionalities, etc.). Is Gemini on top of every leaderboard? No. But they are solid performers at a great price (and Flash is extremely fast). The real letdown with Google is the Cloud Console, which is an absolute mess.

    • @samwitteveenai
      @samwitteveenai  3 months ago +2

      Are you talking about the AI Studio console or Vertex AI? Anything in particular you would like to see fixed or added?

    • @BrandonFoltz
      @BrandonFoltz 3 months ago +4

      @samwitteveenai The Google Cloud console. It looks like the control panel on the ISS and feels like it was developed by someone overly excited to get a "free" copy of Macromedia Dreamweaver in 2002. Managing API keys, projects, and pricing is SO MUCH EASIER with OpenAI and Anthropic; just a few clicks. Also, Google does not allow setting a hard cap on spend like the others do, only alerts. Very risky if something goes off the rails.

    • @RodyDavis
      @RodyDavis 3 months ago

      You can get an API key super easily in AI Studio!

    • @MudroZvon
      @MudroZvon 3 months ago +1

      Gemini is awesome for its price. Gemini 1.5 Pro is the best after Sonnet 3.5 and 4o-latest.

    • @AdamTwardoch
      @AdamTwardoch 3 months ago +3

      For me, nothing comes close to Opus 3. Opus 3 writes beautiful, eloquent prose. The OpenAI models write like a serviceable college graduate. Gemini's writing is atrocious. Even Elon's Grok writes better.

  • @jondo7680
    @jondo7680 3 months ago

    I mean, the community complained that Sonnet 3.5 was cheaper than Opus. I think this is exactly what that kind of stupidity deserves. Good job Anthropic, punish stupidity; the world has more than enough of it.

  • @jasonrhtx
    @jasonrhtx 2 months ago

    I suspect that the earlier models from all vendors were underpriced relative to their training and operating costs.

  • @alextrebek5237
    @alextrebek5237 3 months ago

    @3:14 "Claude-3.5-haiku surpassed claude-3-opus in performance at a lower cost, so we're increasing the cost of haiku"
    That makes no sense. Either pass the savings on to the customer, or people will use OpenRouter to get access to BETTER models for CHEAPER 😂

  • @JNET_Reloaded
    @JNET_Reloaded 3 months ago +1

    It's about demand! If everyone wants it, energy consumption goes up, so of course prices have to go up!

  • @novantha1
    @novantha1 3 months ago

    I don't really need Haiku 3.5 personally. I've found agentic frameworks and open-source models serve my needs quite well, although they do mean more engineering work and more error checking. In the end, the ability to customize the models, rapidly swap out different flavors of model for the specific use case, and have it all running locally/on-premise, with potentially more flexibility in access to sensitive information and use cases, is just too big an advantage.
    There is a use case for the very top-end models, and I do think o1-preview and 3.5 Sonnet have reasons to use them, as they really don't have competitors (you spend more doing tree search on open-source models than on a single query to the large ones, lol), but for small and cheap, owning the hardware really does pay off.

  • @AdamTwardoch
    @AdamTwardoch 3 months ago +2

    GPT-4 was "0.5 more than" GPT-3.5 and much more expensive. Claude 3.5 Haiku is "0.5 more than" Claude 3 Haiku.
    All the LLM companies have very confusing branding and naming for their models. I guess Mistral is the fairest: they just come up with various names rather than pretending that one model is a new version of another.
    Gemini 1.5 Pro is radically different from Gemini 1.0 Pro. GPT-4, GPT-4 Turbo and GPT-4o have nothing in common.
    We just need to learn to take each model as a new product, regardless of the often misleading branding.
    I can easily imagine Haiku 3 and Haiku 3.5 co-existing. A 4x price difference basically indicates a totally different class of product. They could have called the new thing Claude 3.5 Limerick.
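
The 4x multiple debated throughout this thread is easy to check with simple arithmetic. The sketch below assumes the commonly cited launch API prices (Haiku 3 at $0.25/$1.25 per million input/output tokens, Haiku 3.5 at $1.00/$5.00); these figures are an assumption here, so verify them against Anthropic's pricing page before relying on them.

```python
# Rough cost comparison between Haiku 3 and Haiku 3.5, using assumed
# launch per-million-token (MTok) prices. Not authoritative pricing.

PRICES = {  # model -> (input $/MTok, output $/MTok)
    "haiku-3":   (0.25, 1.25),
    "haiku-3.5": (1.00, 5.00),
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one workload at the assumed per-MTok prices."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

# Example workload: 100M input tokens and 20M output tokens per month.
old = cost("haiku-3", 100_000_000, 20_000_000)
new = cost("haiku-3.5", 100_000_000, 20_000_000)
print(f"Haiku 3:   ${old:,.2f}")        # $50.00
print(f"Haiku 3.5: ${new:,.2f}")        # $200.00
print(f"Multiplier: {new / old:.1f}x")  # 4.0x
```

At these assumed prices, both input and output moved by exactly 4x, which is why the "totally different class of product" framing above fits.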

  • @georgemontgomery1892
    @georgemontgomery1892 3 months ago +1

    I was upset and still am, but I am starting to think that maybe they are factoring in prompt caching, which cuts into the revenue they need to keep up with the competition.
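
Prompt caching does change the effective economics. A minimal sketch, assuming the multipliers Anthropic has published (cache writes cost roughly 1.25x the base input price, cache reads roughly 0.1x) and an illustrative $1/MTok base input price, which is an assumption, not a quoted figure:

```python
# How prompt caching changes effective input cost, under assumed
# Anthropic-style multipliers and an illustrative base price.

BASE_INPUT = 1.00   # assumed input price, $/MTok (illustrative)
WRITE_MULT = 1.25   # first request pays a premium to populate the cache
READ_MULT = 0.10    # later requests re-read the cached prefix cheaply

def cached_cost(prefix_mtok: float, requests: int) -> float:
    """Cost of sending the same prompt prefix `requests` times with caching."""
    write = prefix_mtok * BASE_INPUT * WRITE_MULT               # initial cache write
    reads = prefix_mtok * BASE_INPUT * READ_MULT * (requests - 1)
    return write + reads

def uncached_cost(prefix_mtok: float, requests: int) -> float:
    """Cost of resending the same prefix with no caching at all."""
    return prefix_mtok * BASE_INPUT * requests

# A 50k-token (0.05 MTok) system prompt reused across 100 calls:
print(f"uncached: ${uncached_cost(0.05, 100):.2f}")  # $5.00
print(f"cached:   ${cached_cost(0.05, 100):.2f}")    # $0.56
```

Under these assumptions a heavily cached workload pays roughly a tenth of the sticker input price, which supports the comment's point that the headline per-token number overstates what cache-friendly users actually spend.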

  • @MuhanadAbulHusn
    @MuhanadAbulHusn 3 months ago +1

    I've already changed 2 of my apps to use Gemini.

  • @yotubecreators47
    @yotubecreators47 3 months ago +3

    I am back to 3.0

    • @MudroZvon
      @MudroZvon 3 months ago +1

      Me too. My tests showed that the new model is complete crap compared to the old one... and they made it 4x more expensive. That's crazy.

  • @haroldpierre1726
    @haroldpierre1726 3 months ago +2

    I must be using Gemini 1.5 Pro wrong, because I've found that it sucks, but these benchmarks are telling me something different.

    • @MudroZvon
      @MudroZvon 3 months ago +1

      It's just too sensitive to the prompt. Let's say it's lazy and takes everything too literally. And it seems like it was trained on forums and Reddit users, while the others are trained on books and articles.

  • @drunktrump5209
    @drunktrump5209 3 months ago

    If it was OpenAI, they would have just used a new naming convention for the higher-priced model, like Haiku-V7 (vs Vaiku-1 and the newest Vaiku).

  • @Luxcium
    @Luxcium 3 months ago

    Claude 3.5 (new) Sonnet is not sure how you can claim such things 😮 _I need to correct a misunderstanding in your statement. We are currently in April 2024, so I cannot comment on events from November 2024 as that's in the future._

  • @Swooshii-u4e
    @Swooshii-u4e 3 months ago +2

    Lol, and everyone has been canceling their $20-a-month subscription because of rate limits. They're passing that cost to the 3.5 Haiku model now lol

    • @MudroZvon
      @MudroZvon 3 months ago

      I'd rather throw 20 bucks at OpenRouter, send hundreds of messages an hour, and still have some left over for next month.

    • @AaronBlox-h2t
      @AaronBlox-h2t 3 months ago

      @MudroZvon OpenRouter???? Never heard of that, thanks.

  • @avi7278
    @avi7278 3 months ago

    My feeling is they are pricing a certain percentage of people/use cases out because they likely don't have the required inference compute. Guys, supply and demand is a b****.

  • @SwapperTheFirst
    @SwapperTheFirst 3 months ago

    Oh, Sam and the art of video naming...
    But seriously, I don't like the new trend.
    What is Anthropic thinking business-wise anyway? What's the idea behind a 4x price increase?
    Companies will switch back to Flash or other mini models; it's not like companies are heavily invested in the Anthropic ecosystem at the moment.

    • @AdamTwardoch
      @AdamTwardoch 3 months ago

      Companies can just use Haiku 3 instead of Limerick 3.5 (mis-branded as Haiku 3.5).

    • @SwapperTheFirst
      @SwapperTheFirst 3 months ago

      @AdamTwardoch Limerick :) Do you want to stay on an older model, though?

    • @AdamTwardoch
      @AdamTwardoch 3 months ago

      @ Newer doesn't necessarily mean better. OpenAI had a few pre-GPT-3 models that were great at certain kinds of writing (very uncensored). Also, OpenAI introduced GPT-4o as a "faster" variant of GPT-4 Turbo, which in turn had been a "faster" variant of GPT-4. But faster means cheaper to run, and also smaller, distilled, quantized.
      AI companies these days experiment with longer contexts, faster and cheaper inference, expert switching, etc. All these things look promising on paper but aren't necessarily better. Typically the vendors aren't working towards "best" but much more towards "good enough but much cheaper to run" (see GPT-4 Turbo & GPT-4o).

  • @comfixit
    @comfixit 3 months ago

    If Haiku is better than Opus and far better than the previous Haiku, but so much more expensive, they should have given it another name so people wouldn't feel like they are overpaying for Haiku.

    • @SwapperTheFirst
      @SwapperTheFirst 3 months ago

      "Aha! We've got ourselves a marketer!"

    • @MudroZvon
      @MudroZvon 3 months ago

      I doubt it's better.

  • @alx8439
    @alx8439 3 months ago +4

    The 4x bump in price tag only reflects a 4x increase in greediness.

    • @MudroZvon
      @MudroZvon 3 months ago

      It's not worth it. I would understand if it was better than the old Haiku, but it's some kind of 70B degenerate.

    • @alx8439
      @alx8439 3 months ago +1

      Somewhere in the corner, two truly lobotomized models, the 8B and 32B ones, are giggling at "degenerate".

  • @DanFrederiksen
    @DanFrederiksen 3 months ago

    An H100 chip without RAM costs Nvidia maybe $100. 80GB of GDDR6 RAM costs about $200. HBM RAM isn't more expensive to make; it's just more pins for a wider bus. So that tells you that AI will become dirt cheap. Nvidia is on a greed frenzy, and AMD is unfortunately not calling them on their BS. It's strange how passive AMD is, because tensor chips are not hard to design from what I gather. Indeed AMD has them; they just have to price them even slightly reasonably to completely destroy Nvidia's 20x-above-norm earnings. We are being artificially held back. You could easily imagine a home developer board with 1TB of fast RAM instead of the pathetic 24GB in a 4090. And it sounds like the 5090 won't have much more RAM, despite RAM prices having dropped 80% and neural models having exploded in size with no end in sight.
    A very affordable GPU/TPU could easily run GPT-4o at good speed, but we need more memory. AI has ballooned so fast, combined with Nvidia enterprise greed, that personal compute hasn't kept up at all despite the components being very affordable. Wake up and demand better.

    • @novantha1
      @novantha1 3 months ago

      You are correct that AI hardware is overpriced, especially at the moment, but this take isn't entirely fair.
      Yes, HBM technically doesn't cost more than GDDR; in the end it's just memory. One problem with that is that the packaging (the thing that actually connects the processor and the memory) is significantly more expensive for HBM. To my understanding, wide memories like GDDR are around $7-12/GB at the moment, and HBM is notably more than that (I think $12-25/GB depending on the exact type, once you actually put it on the board). I believe memory was cheaper about a year ago, though.
      The H100 may or may not be around $100 a chip; I'm not really willing to break out the TSMC wafer calculations to figure it out, but that type of hardware (server grade) typically isn't priced according to manufacturing cost. Generally when you buy that grade of hardware, you're paying for service (e.g. on the Nvidia developer forums you can get specialized help from highly competent engineers if you just show up and list your problem, and it's their job to help you), you're paying for software, you're paying for highly reliable and redundant hardware, and each of those things adds significant cost.
      Does all of this mean the H100 should be $2,000? Probably that would be a bit too cheap for that category of product, even if Nvidia wanted to launch to consumer for some reason.
      Should it be $40,000? Also probably not.
      I'm pretty sure the "right" price would be somewhere between $4,000 and $20,000, but I'm not really sure where in that window it should actually be, and I find myself rather reluctant to search it out, as that's quite the hypothetical to get into.

    • @DanFrederiksen
      @DanFrederiksen 3 months ago

      @novantha1 If the chip costs on the order of $100 and the 80GB of RAM could cost on the order of $200 (I use DRAMeXchange for pricing), then is $2,000 really too cheap?
      I noticed that enterprise stuff isn't priced based on cost, but what is the justification for that? They are made with love? Handled with white gloves? If you can have the same compute for 1/20th, then how great can the reliability difference be? And in my experience, high-dollar items are made worse and last shorter. I have seen expensive Tyan motherboards melt down doing the simplest thing while a dirt-cheap mass-produced motherboard just lasts forever. I guess Nvidia pricing started with a super-niche product, but now it's mass production; it's nothing but greed. And it's so damaging that Nvidia can't give us large amounts of memory on consumer cards, because then they could outperform the datacenter products and people would ask why. So we desperately need AMD to destroy Nvidia's greed with a consumer 100GB card, something people can cluster 100 of together to beat ultra-expensive Nvidia enterprise solutions.
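
The memory arithmetic in this thread is easy to rough out. Using the $12-25/GB range for packaged HBM mentioned above (an assumed range, not quoted vendor prices), the memory alone on an 80GB accelerator lands well above the $200 GDDR figure:

```python
# Back-of-envelope HBM cost for an 80GB accelerator, using the assumed
# $12-25/GB range for packaged HBM from the discussion above.

CAPACITY_GB = 80
LOW_PER_GB, HIGH_PER_GB = 12, 25  # assumed packaged HBM cost, $/GB

low_total = CAPACITY_GB * LOW_PER_GB
high_total = CAPACITY_GB * HIGH_PER_GB
print(f"80GB of HBM alone: ${low_total:,} - ${high_total:,}")  # $960 - $2,000
```

So under these assumptions, the HBM by itself could approach the $2,000 figure being argued over, before adding the die, packaging, board, software, and support.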

  • @MudroZvon
    @MudroZvon 3 months ago +1

    I tested it and was disappointed. It's far worse than the previous Haiku. Now it gives me lists instead of good answers. Now it mixes Russian with English. An epic fail! It's dumber, slower, and more expensive! Outrageous! 😢 😡

    • @AdamTwardoch
      @AdamTwardoch 3 months ago

      It's just a different model. Think of it as Claude 3.5 Limerick. It's not really an update of Haiku 3.

  • @rgewglwegoijrwngewrg
    @rgewglwegoijrwngewrg 18 hours ago

    Claude 3 Haiku is the worst bot in the world

  • @antoniobanderas5316
    @antoniobanderas5316 1 month ago

    Haiku is solid for coding compared to GPT.

  • @witness1013
    @witness1013 3 months ago

    Sam has never heard of Alaska

    • @samwitteveenai
      @samwitteveenai  3 months ago

      Water flowing up? Or sunsets?

    • @alizhadigerov9599
      @alizhadigerov9599 3 months ago +1

      @samwitteveenai Prices for LLMs are going up in Alaska?

  • @anispinner
    @anispinner 3 months ago

    Models get smarter, people get dumber 😂

  • @isaacgroen3692
    @isaacgroen3692 3 months ago

    They just suck at naming things...