What Makes Large Language Models Expensive?

  • Published May 8, 2024
  • Explore watsonx.ai → ibm.biz/IBM_watsonx_ai
    Amidst the buzz surrounding the promising capabilities of large language models in business, it's crucial not to overlook a practical concern: cost. In this video, Jessica Ridella, IBM's Global Sales Leader for the watsonx.ai generative AI platform, delves into seven key factors for understanding the cost of generative AI in the enterprise. She explores elements influencing cost, such as model size and deployment options, while also shedding light on potential cost-saving strategies like harnessing pre-trained models. By the video's conclusion, you'll have a comprehensive understanding of the factors influencing costs and the best strategies for the efficient use of large language models in your enterprise.
    Get started for free on IBM Cloud → ibm.biz/sign-up-now
    Subscribe to see more videos like this in the future → ibm.biz/subscribe-now

Comments • 70

  • @aqynbc
    @aqynbc 4 months ago +4

    Another excellent video that makes you understand the fundamentals of an otherwise complicated subject.

  • @emil8367
    @emil8367 4 months ago +1

    Very interesting and useful. Thanks for explaining so many topics!

  • @benthiele
    @benthiele 4 months ago

    Incredibly helpful video. Please make more!

  • @saikatnextd
    @saikatnextd 3 months ago

    Thanks Jessica for this video, really eye-opening and introspective at the same time...

  • @attainconsult
    @attainconsult 4 months ago

    This is a great start to costing running models. I think you need to think/explain more along business lines, i.e. adding in all business files (Google/365 docs), business emails, and other business data: sales, cash flow, stock usage, forecasting usage of consumables (lettuces, coffee...), all the things a business works off.

  • @seanlee2002
    @seanlee2002 4 months ago

    Excellent explanation. A great understanding of how AI works

  • @shaniquedasilva1856
    @shaniquedasilva1856 4 months ago +3

    Great video Jessica, and so informative!! I'm working on a project now implementing gen AI (gen fallback, generators). Identifying proper use cases is so important to yield the best results while thinking about the # of LLM calls.

    • @unclenine9x9
      @unclenine9x9 2 months ago

      Yes, we need to select suitable LLMs to pick up requests in a cost-effective way; that way the cost of operation is lowered.
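The point about counting LLM calls can be made concrete with a quick back-of-the-envelope estimate. A minimal sketch in Python, where the per-token prices and call volumes are illustrative assumptions, not any vendor's actual rates:

```python
# Back-of-the-envelope LLM cost estimate.
# All prices and volumes below are illustrative assumptions.
PRICE_PER_1K_INPUT_TOKENS = 0.0015   # USD per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT_TOKENS = 0.002   # USD per 1,000 output tokens (assumed)

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single LLM API call."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS + \
           (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS

# A chatbot handling 10,000 calls/day, ~500 input / ~200 output tokens each:
daily_cost = 10_000 * call_cost(500, 200)
print(f"~${daily_cost:.2f} per day")  # → ~$11.50 per day
```

Since the total scales linearly with both token counts and call volume, trimming prompts or routing simple requests to a cheaper model reduces cost proportionally.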

  • @imanrezazadeh
    @imanrezazadeh 4 months ago +5

    Excellent explanation! A minor note: the curtain analogy makes sense, but then you mentioned fine-tuning makes structural changes to the parameters, which is not accurate. It just changes the values of the parameters.

    • @aymerico11
      @aymerico11 4 months ago

      How does it change the values? Is it a token change? Basically it means that once you've tuned your model, f(x) no longer equals y but actually z, right?
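The distinction drawn in this thread can be shown with a toy example: fine-tuning leaves the architecture (the shapes of the weight tensors) untouched and only nudges the stored values, so the same function f now maps x to a new output. A minimal NumPy sketch, illustrative only (a real LLM has billions of parameters and is tuned on text, not on a single vector):

```python
import numpy as np

# Toy "model": a single weight matrix W. Fine-tuning updates the
# VALUES stored in W, not the structure (shape) of the network.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # "pretrained" parameters
shape_before = W.shape

x = np.array([1.0, 0.5, -0.5, 1.0]) # a fixed input
y_target = np.ones(4)               # the desired new behavior

# Gradient descent on L = 0.5 * ||W @ x - y_target||^2
for _ in range(500):
    error = W @ x - y_target
    W -= 0.01 * np.outer(error, x)  # dL/dW = error · xᵀ

assert W.shape == shape_before      # same architecture...
# ...but the same function f(x) = W @ x now lands on the new target:
print(np.round(W @ x, 3))           # → [1. 1. 1. 1.]
```

This is exactly the "f(x) no longer equals y but z" picture: the mapping changed because the numbers inside W changed, while the structure stayed identical.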

  • @KP-sg9fm
    @KP-sg9fm 4 months ago +8

    Can you make a video talking about smaller, more efficient models (Orca, Phi-2, Gemini Nano, etc.)?
    Do they have a future, and if so, what does it look like?
    Will more SOTA models leverage the techniques used by smaller models to become more efficient?
    Or will they always remain separate?

    • @teleprint-me
      @teleprint-me 4 months ago +2

      There are pros and cons to each approach. Larger models are scaled in a way that makes their capabilities proportional to their parameter count, so larger models are smarter, and that will always be the case.
      Both techniques feed off of one another, so improvements in one will lead to improvements in the other.
      It's cheaper, easier, and faster to iterate on smaller models, and any gains made throughout the process are applied to larger models.
      Not sure if this helps. Anyone can feel free to correct me if I misrepresented any information.

  • @bastabey2652
    @bastabey2652 4 months ago

    I once attended a whole-day IBM sales presentation in Delhi for a telco CRM/billing system... it was an educational experience more than a sales pitch... IBM sales is really good.

  • @user-mt5zb7qs7q
    @user-mt5zb7qs7q 4 months ago

    Great explanation, Jessica

  • @gamingbeast710
    @gamingbeast710 4 months ago +1

    Awesome, 100% focused :D Thanks for the professionalism :D

  • @aymerico11
    @aymerico11 4 months ago

    Very good video, thanks a lot!

  • @markfitz8315
    @markfitz8315 4 months ago +1

    very good - thanks

  • @oieieio741
    @oieieio741 4 months ago +6

    Excellent explanation. A solid understanding of how AI works. Thanks IBM

  • @reazulislam8446
    @reazulislam8446 4 months ago

    So precise..

  • @jediTempleGuard
    @jediTempleGuard 4 months ago +8

    I think customized language models will become more important over time. Companies will want artificial intelligence applications specific to their fields of activity, and individuals will want artificial intelligence applications specific to their special interests. Not to sound like I'm telling fortunes, but with improvements in cost, customized smaller models may become more dominant in the market.

    • @Cahangir
      @Cahangir 4 months ago +2

      What types of AI apps would individuals want, apart from personal assistants, that would need customizing?

    • @Anurag_Hansda
      @Anurag_Hansda 3 months ago

      I very much agree with you... Google could be much more efficient by giving specific details.

  • @teresafarrer1252
    @teresafarrer1252 4 months ago

    Great video: really clear and professional (unlike a couple of the saddos commenting). Thanks!

  • @AdamSioud
    @AdamSioud 4 months ago

    Great video

  • @ChrisJSnook
    @ChrisJSnook 4 months ago

    What software solution powers this mirrored whiteboard in front of you? It's awesome and I want to use it!

  • @renanmonteirobarbosa8129
    @renanmonteirobarbosa8129 4 months ago +1

    There are mistakes in the information provided:
    PEFT and LoRA are separate things;
    model size is influenced mostly by the choice of numerical precision and how you compile the GPU kernels.
    ...

  • @Alice8000
    @Alice8000 1 month ago

    Daaaaamn woman. Good explanation.

  • @johnnyalam7301
    @johnnyalam7301 4 months ago

    Very nicely and intelligently explained. 3:49 pm (Christmas Day 2023)

  • @luciengrondin5802
    @luciengrondin5802 4 months ago

    Stumbled upon this and feel like asking: how did IBM miss the LLM train? Watson was very impressive, IMHO, and very much ahead of its time. How could IBM not capitalize on it? Why was it OpenAI that ended up with the language model breakthrough? Which innovation did OpenAI have that IBM could not think of? Was it RLHF?

    • @VoltLover00
      @VoltLover00 4 months ago

      You can easily google the answer to your question

  • @carkawalakhatulistiwa
    @carkawalakhatulistiwa 4 months ago +4

    And Phi-2, with 2.7 billion parameters, proves that we have spent a lot of time and money on compute that is wasted because of bad data.
    With better data, the Phi-2 LLM can be equivalent to GPT-3's 175 billion parameters, and there is still the possibility of reducing an LLM to 1 billion parameters with the same capabilities.

    • @akj3344
      @akj3344 4 months ago

      There are 1B models on Hugging Face made for RAG.

  • @team-m2
    @team-m2 4 months ago

    Great and concise, thanks! But... is she writing from right to left? 🤔

  • @gihan5812
    @gihan5812 4 months ago

    How can I speak to someone at IBM about working together?

  • @fasteddylove-muffin6415
    @fasteddylove-muffin6415 4 months ago +2

    You walk into a dealership and ask a salesperson how much a vehicle will cost.
    Answer: This vehicle will cost you whatever you're willing to pay.

  • @Murat-hh4hu
    @Murat-hh4hu 4 months ago +66

    For a moment I thought she was AI-generated :)

    • @CybermindForge
      @CybermindForge 4 months ago

      Truth

    • @Beny123
      @Beny123 4 months ago +1

      Don't blame you. Pretty

    • @MohitSharma-dv7mg
      @MohitSharma-dv7mg 4 months ago +1

      Yeah, and looked fine-tuned!

    • @Alice8000
      @Alice8000 1 month ago

      nope u didn't

  • @silberlinie
    @silberlinie 4 months ago +1

    They used an interesting technique to record the video.

  • @webgpu
    @webgpu 4 months ago +2

    Anyone notice she kept on talking *while* writing? Women are real multitaskers. I swear to God my brain is 100% monotask and I could never EVER write AND do anything else. The apex of my manly monotaskiness is being able to talk while I'm driving (but I can only talk about light subjects; if you talk about anything a little more involved, I will just not follow you).

  • @mrd6869
    @mrd6869 4 months ago

    Small and powerful models will win out. Phi-2 and Orca-2 are some good examples.

  • @potatodog7910
    @potatodog7910 4 months ago

    Nice

  • @mohsenghafari7652
    @mohsenghafari7652 2 months ago

    Hi, please help me: how do I create a custom model from many PDFs in the Persian language? Thank you.

  • @SiegelBantuBear
    @SiegelBantuBear 11 days ago

    🙏🏼

  • @cleansebob1
    @cleansebob1 4 months ago

    Looks like it all depends...

  • @wzqdhr
    @wzqdhr 4 months ago

    Does IBM have anything to do with this AI boom?

  • @potatodog7910
    @potatodog7910 4 months ago

    How much of this can be done with GPTs?

    • @scottt9382
      @scottt9382 4 months ago +1

      A GPT is just one type of LLM.

  • @rursus8354
    @rursus8354 4 months ago

    If you cannot find the best man, take the next best.

  • @joung-joonlee1037
    @joung-joonlee1037 4 months ago +1

    I think an LLM or generative AI looks like a spreadsheet, considering that this type of engine injects tokens by itself and spells out tokens. These tokens seem to be iterated by the LLM or generative AI, because they are also programs using computer iterations. The cost of using an LLM or generative AI can be estimated from time / number of tokens / weight of meaning, though I know this calculation is just a user's approximation. Thank you for the nice video! I'm Korean.

  • @markmaurer6370
    @markmaurer6370 4 months ago

    1:19 So IBM does not believe consumers need to have their data protected.

  • @jameshopkins3541
    @jameshopkins3541 3 months ago

    THEN A COMMON PERSON CAN'T BUILD AN LLM FROM SCRATCH???

  • @jameshopkins3541
    @jameshopkins3541 3 months ago

    She is 36 years old, isn't she?

  • @ciphore
    @ciphore 4 months ago

    Nancy Pi did it first 😤

  • @jameshopkins3541
    @jameshopkins3541 3 months ago +1

    LLM IS BLA BLA BLAAAAA??????

  • @NisseOhlsen
    @NisseOhlsen 4 months ago +1

    What makes them so expensive? Simple: their architecture is not right.

  • @bobanmilisavljevic7857
    @bobanmilisavljevic7857 4 months ago +1

    🦾🥳

  • @Canadainfo
    @Canadainfo 4 months ago

    Amazon Bedrock!!

  • @Free-pp8mr
    @Free-pp8mr 4 months ago

    It is not intelligent to pay for AI! It’s simply marketing!

  • @Drunrealer
    @Drunrealer 1 month ago

    Drink from de bottle

  • @thierry-le-frippon
    @thierry-le-frippon 3 months ago

    People will pay for that 😅😅😅 ???

  • @aprilmeowmeow
    @aprilmeowmeow 1 month ago

    So sad that people can't even write a speech anymore.

  • @ashishsehrawat_007
    @ashishsehrawat_007 4 months ago

    Kinda boring explanation.

  • @talatshahgmailcom
    @talatshahgmailcom 4 months ago

    Thank you, very informative and easily understandable.