What Makes Large Language Models Expensive?
- Published May 8, 2024
- Explore watsonx.ai → ibm.biz/IBM_watsonx_ai
Amidst the buzz surrounding the promising capabilities of large language models in business, it's crucial not to overlook a practical concern: cost. In this video, Jessica Ridella, IBM’s Global Sales Leader for the watsonx.ai generative AI platform, delves into seven pivotal factors for understanding generative AI in the enterprise. She explores elements influencing cost, such as model size and deployment options, while also shedding light on potential cost-saving strategies like harnessing pre-trained models. By the video's conclusion, you'll gain a comprehensive understanding of the factors influencing costs and discover optimal strategies for the efficient use of large language models in your enterprise.
Get started for free on IBM Cloud → ibm.biz/sign-up-now
Subscribe to see more videos like this in the future → ibm.biz/subscribe-now
Another excellent video that helps you understand the fundamentals of an otherwise complicated subject.
Very interesting and useful. Thanks for explaining so many topics!
Incredibly helpful video. Please make more!
Thanks Jessica for this video, really eye-opening and introspective at the same time.
This is a great start on costing model runs. I think you need to think/explain more along business lines, i.e. adding in all business files (Google/365 docs), business emails, and other business data: sales, cash flow, stock usage, forecasting usage of consumables (lettuces, coffee...), all the things a business works off.
Excellent explanation. A great understanding of how AI works
Great video Jessica and so informative!! I’m working on a project now implementing Gen AI (gen fallback, generators). Identifying proper use cases is so important to yield the best results while thinking about the # of LLM calls.
Yes, we need to select suitable LLMs for picking up requests in a cost-effective way. That way the cost of operation can be lowered.
Excellent explanation! A minor note: the curtain analogy makes sense, but then you mentioned fine-tuning makes structural changes to the parameters, which is not accurate. It just changes the values of the parameters.
How does it change the values? Is it a token change? Basically it means that once you've tuned your model, f(x) no longer equals y but actually z, right?
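The exchange above can be sketched in a few lines of toy Python (a deliberately simplified stand-in, not any real training framework): fine-tuning applies gradient updates to the weights that already exist, so the parameter *values* change while the parameter *structure* (count and shapes) stays the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "model": a single weight matrix standing in for the parameters.
W = rng.normal(size=(4, 3))
shape_before = W.shape
values_before = W.copy()

# One simplified fine-tuning step: nudge the values along a gradient.
grad = rng.normal(size=W.shape)  # stand-in for a real loss gradient
lr = 0.01
W = W - lr * grad                # the values change...

assert W.shape == shape_before              # ...but the structure does not
assert not np.allclose(W, values_before)    # same parameters, new values
```

So "f(x) no longer equals y but z" is roughly right: the same function shape, evaluated with updated weight values, now maps inputs to different outputs.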
Can you make a video talking about smaller, more efficient models (Orca, Phi-2, Gemini Nano, etc.)?
Do they have a future, and if so, what does it look like?
Will more SOTA models leverage the techniques used by smaller models to become more efficient?
Or will they always remain separate?
There are pros and cons to each approach. Larger models are scaled in a way that makes their capabilities proportional to their parameters. So, larger models are smarter and that will always be the case.
Both techniques feed off of one another, so improvements in one will lead to improvements in another.
It's cheaper and easier and faster to iterate over smaller models and any gains made throughout the process are applied to larger models.
Not sure if this helps. Anyone can feel free to correct me if I misrepresented any information.
I once attended a whole day IBM sales presentation in Delhi for telco CRM/Billing system.. it was an educational experience more than sales.. IBM sales is really good
Great explanation Jessica
Awesome, 100% focused :D thx for the professionalism :D
Very good video thanks a lot !
very good - thanks
Excellent explanation. A solid understanding of how AI works. Thanks IBM
So precise..
I think customized language models will become more important over time. Companies will want artificial intelligence applications specific to their fields of activity, and individuals will want artificial intelligence applications specific to their special interests. Not to sound like I'm telling fortunes, but with improvements in cost, customized smaller models may become more dominant in the market.
what types of AI apps would individuals want apart from personal assistants that would need customizing?
I very much agree with you... Google could be much more efficient by giving specific detail.
Great video: really clear and professional (unlike a couple of the saddos commenting). Thanks!
Great video
What software solution powers this mirrored whiteboard in front of you? It’s awesome and I want to use it!
There are mistakes in the information provided.
PEFT and LoRA are separate things.
Model size is influenced mostly by the choice of numerical precision and how the GPU kernels are compiled.
...
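The numerical-precision point in the comment above can be made concrete with a back-of-the-envelope sketch (my own illustration, not a formula from the video): the memory a model's weights occupy is roughly parameter count times bytes per parameter, so quantizing to a lower precision shrinks the footprint without changing the parameter count.

```python
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Rough weight-storage estimate: parameters * bytes each, in GB."""
    return num_params * bytes_per_param / 1e9

# A hypothetical 7-billion-parameter model at different precisions.
params = 7e9
print(weight_memory_gb(params, 4))    # fp32: 28.0 GB
print(weight_memory_gb(params, 2))    # fp16/bf16: 14.0 GB
print(weight_memory_gb(params, 1))    # int8: 7.0 GB
print(weight_memory_gb(params, 0.5))  # 4-bit quantized: 3.5 GB
```

Same model, same parameter count, a 8x difference in weight memory; that is why precision choice matters so much for serving cost.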
Daaaaamn woman. Good explanation.
Very nicely and intelligently explained 3:49 pm (Christmas Day 2023)
Stumbled upon this and feel like asking : how did IBM miss the LLM train? Watson was very impressive IMHO. Very much ahead of its time. How could IBM not capitalize on it? Why was it OpenAI that ended up with the language model breakthrough? Which innovation openAI had that IBM could not think of? Was it RLHF?
You can easily google the answer to your question
And Phi-2, with 2.7 billion parameters, proves that we have spent a lot of time and money on compute that is wasted because of bad data.
With better data, the Phi-2 LLM can be equivalent to GPT-3's 175 billion parameters, and there is still the possibility of reducing an LLM to 1 billion parameters with the same capabilities.
There are 1B models on Hugging Face made for RAG.
Great and concise, thanks! But ... is she writing from the right to the left? 🤔
How can I speak to someone at IBM about working together?
You walk into a dealership & ask a salesperson how much a vehicle will cost.
Answer: This vehicle will cost you whatever you're willing to pay.
For a moment I thought she was AI-generated :)
Truth
Don’t blame you . Pretty
Yeah and looked finely tuned!
nope u didn't
They used an interesting technique to record the video.
Anyone notice she kept on talking *while* writing? Women are real multitaskers. I swear to God my brain is 100% monotask and I could never, ever write AND do anything else. The apex of my manly monotasking is being able to talk while I'm driving (but I can only talk about light subjects; if you talk about anything a little more involved, I will just not follow you).
Small and powerful models will win out. Phi-2 and Orca 2 are some good examples.
Nice
Hi, please help me: how do I create a custom model from many PDFs in the Persian language? Thank you.
🙏🏼
Looks like it all depends...
Does IBM have anything to do with this AI booming?
How much of this can be done with GPTs?
A GPT is just one type of an LLM
If you cannot find the best man, take the next best.
I think an LLM or generative AI looks like a spreadsheet, considering that this type of engine ingests tokens and spells out tokens by itself, and those tokens look like they are iterated by the LLM, since these are also programs using computer iterations. The cost of using an LLM or generative AI can be worked out as a calculation over time, number of tokens, and weight of meaning, though I know this calculation is just an approximation by the user. Thank you for the nice video! I'm Korean.
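The commenter's "calculation over time and number of tokens" intuition roughly matches how hosted LLM APIs are typically priced: cost scales with token counts at some per-token rate. A minimal sketch with made-up, purely illustrative rates (not any vendor's actual pricing):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 usd_per_1k_in: float, usd_per_1k_out: float) -> float:
    """Cost of one LLM API call at hypothetical per-1k-token rates."""
    return (input_tokens / 1000 * usd_per_1k_in
            + output_tokens / 1000 * usd_per_1k_out)

# One call: 1,200 prompt tokens in, 300 generated tokens out.
cost = request_cost(1200, 300, usd_per_1k_in=0.50, usd_per_1k_out=1.50)
print(round(cost, 4))  # 1.2 * 0.50 + 0.3 * 1.50 = 1.05
```

Multiply by calls per day and the "number of LLM calls" concern raised elsewhere in these comments becomes the dominant cost driver.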
😗
1:19 So IBM does not believe consumers need to have their data protected.
Then a common person can't build an LLM from scratch???
She is 36 years old, isn't she?
Nancy Pi did it first 😤
LLM IS BLA BLA BLAAAAA??????
What makes them so expensive? Simple. Their architecture is not right.
🦾🥳
amazon bedrock!!
It is not intelligent to pay for AI! It’s simply marketing!
Drink from de bottle
People will pay for that 😅😅😅 ???
So sad that people can't even write a speech anymore.
Kinda boring explanation.
Thank you, very informative and easily understandable.