🏗 What are you building with generative AI models?
Let us know and subscribe for more AI tips and tricks → goo.gle/GoogleCloudTech
We are trying to build an intuitive AI companion with various conversational parameters on Vertex AI. We tried to tune it in Google AI Studio, but it is buggy (stuck in queue). For our fine-tuning we have four parameters expressed as percentages in four different columns, plus an input prompt and a desired response. How can we pass this structure in JSON format the way you describe it?
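One way to encode that structure, shown as a minimal sketch: it assumes the Vertex AI supervised-tuning JSONL format with input_text and output_text fields, and the parameter names (warmth, humor, formality, verbosity) plus the single example row are made-up placeholders for your four percentage columns.

```python
import json

# Made-up example row: four percentage-valued conversational parameters,
# the input prompt, and the desired response.
rows = [
    {
        "warmth": 80, "humor": 20, "formality": 10, "verbosity": 40,  # placeholder names
        "prompt": "My flight got cancelled and I'm stressed.",
        "response": "I'm sorry, that sounds frustrating. Let's look at rebooking options together.",
    },
]

# The supervised-tuning format has only input_text and output_text per line,
# so the extra columns are folded into the input text rather than passed as
# separate JSON fields.
with open("tuning_data.jsonl", "w") as f:
    for r in rows:
        input_text = (
            f"[warmth={r['warmth']}% humor={r['humor']}% "
            f"formality={r['formality']}% verbosity={r['verbosity']}%]\n"
            f"{r['prompt']}"
        )
        f.write(json.dumps({"input_text": input_text, "output_text": r["response"]}) + "\n")
```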
Thanks Nikita, this was really interesting and easy to understand as someone who doesn't do too much with AI/ML or data.
I'm taking all of my company's documentation and experimenting with LlamaIndex, LangChain, and the OpenAI API. It would be cool to run the same experiment with Vertex AI and see what the results are with the same tuning dataset.
Are you concerned about your IP or proprietary data being in the public domain?
Have you found any notable result?
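For the same-dataset comparison mentioned above, here is a minimal conversion sketch. It assumes OpenAI-style chat fine-tuning records ({"messages": [...]}) on one side and Vertex AI supervised-tuning records (input_text / output_text) on the other; the file names are placeholders.

```python
import json

def openai_to_vertex(record: dict) -> dict:
    # Treat the last assistant message as the target and everything before
    # it as the prompt; join the earlier messages into a single input_text.
    *context, target = record["messages"]
    assert target["role"] == "assistant"
    input_text = "\n".join(f"{m['role']}: {m['content']}" for m in context)
    return {"input_text": input_text, "output_text": target["content"]}

# Placeholder file names: convert one JSONL file to the other line by line.
with open("openai_dataset.jsonl") as src, open("vertex_dataset.jsonl", "w") as dst:
    for line in src:
        dst.write(json.dumps(openai_to_vertex(json.loads(line))) + "\n")
```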
Very nice video. You took complex issues and made them understandable to a layman (me). Thank you.
Thank you for an awesome presentation… However, the background music was really distracting… Silence or something less noisy would be great.
I think APIs are one of the biggest ways to democratize technology.
Thank you
This video is worth seeing for my GEN_AI course. Thanks 👍
"GENERATIVE AI STUDIO > Language" doesn't seem to be an option anymore in the Vertex AI lefthand nav. There is a "VERTEX AI STUDIO > Tuning" option, which looks similar but seems to have been split into two steps since this video was made.
This is a very interesting presentation!
Great explainer video. It really dumbed down the process of training quite a bit. Would like to see similar videos on topics like RAG.
Thanks for sharing 👍
When I try to follow your instructions, I am not able to see the model run in the pipeline. I did everything as instructed, and whenever I click on the Start Tuning button, nothing happens. Somewhere I read that Tuning (Preview) happens in the europe-west4 region, and I verified that the generated code looks good. Any idea what I am missing? Also, your screenshot doesn't show the Model Evaluation section in the UI; maybe it got added to this page recently.
Were you able to solve the issue? I am having the same problem.
And did you try the europe-west4 region?
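If the Start Tuning button fails silently, one workaround is to launch the job from code instead of the UI. A minimal sketch with the vertexai Python SDK, assuming a text-bison base model and placeholder project, bucket, and step values; at the time, the preview tuning pipeline only ran in certain regions, so the tuning job location is pinned explicitly. Check the SDK reference for the exact parameters in your version.

```python
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholder project and location values.
vertexai.init(project="my-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison@001")

# Kick off the supervised tuning pipeline, pinning the tuning job to
# europe-west4 (where the preview tuning pipeline ran) while serving the
# tuned model from us-central1.
model.tune_model(
    training_data="gs://my-bucket/tuning_data.jsonl",  # placeholder GCS path
    train_steps=100,
    tuning_job_location="europe-west4",
    tuned_model_location="us-central1",
)
```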
I’m redefining the meaning of Shelter with Gen AI!
Thanks for giving these kinds of courses.
Can sentiment analysis be done?
Where can I find the pricing plans to create a tuned model?
Which base model is she fine-tuning?
A pre-trained model on a generic dataset.
A very interesting video. I will have to do some experimentation, as I will try to create a model which is focused on solving learners' queries.
Great session. Just one quick question: so fine-tuning is like transfer learning?
Yes. Fine-tuning is a type of transfer learning.
2:53 summary paper link missing?
See the paper, thanks!
@@willian_z I was thinking the same thing, and I still don’t see the link. Where is it exactly?
@@econhelp583 It’s the second Google link, “a guide to…”
Thank you for the insights. Really nice.
I have built context prompts and input a few questions and sample responses, but I do not have an option to tune these context prompts to fine-tune further.
Any suggestions on how I could fine-tune my context prompt?
For example: say I talk to a customer care representative about my internet bill. I kept a few sample questions and responses, and now I expect the AI to answer with more context, which is why I need to fine-tune.
Thank you
Try Dialogflow.
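Beyond Dialogflow, one option is to bake the context prompt into every tuning example, since the supervised-tuning format has no separate context field. A minimal sketch assuming the input_text/output_text JSONL format; the context string and the Q/A pair are made-up placeholders for the internet-bill scenario.

```python
import json

# Placeholder context prompt and Q/A pairs for the internet-bill example.
context = (
    "You are a customer care representative for an internet provider. "
    "Answer questions about billing politely and concisely."
)

qa_pairs = [
    ("Why is my bill higher this month?",
     "Your promotional discount ended last month; I can check whether a new "
     "promotion applies to your account."),
]

# Repeating the context in the input_text of every example teaches the tuned
# model the persona without needing a separate context field.
with open("billing_tuning_data.jsonl", "w") as f:
    for question, answer in qa_pairs:
        f.write(json.dumps({
            "input_text": f"{context}\nCustomer: {question}",
            "output_text": answer,
        }) + "\n")
```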
Can you download a tuned model for dev/testing on a local machine?
Is your local machine a supercomputer, by chance?
@@pat-grady-analytics An Nvidia DGX, to run local models and train custom models.
@@metaleadership LaMDA had this backend for training: "Our two submissions were benchmarked on 2048-chip and 1024-chip TPU v4 Pod slices, respectively. We were able to achieve an end-to-end training time of ~55 hours for the 480B parameter model and ~40 hours for the 200B parameter model."
So I'm guessing your rig would take ~13-15 years to train the first model. You might be better off using the API.
@@pat-grady-analytics Got it, lol
Hey! After clicking the Fine Tuning button, there is a small loader, but nothing happens...
It was that infamous code 13 error. After changing (literally) nothing, it suddenly worked.
Now I get: "0 token must inside [1, 1024]".
@@camillorohe6996 I have the same issue. The fine-tuning doesn't start. Were you able to solve the problem?
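The exact cause of the code 13 and token-range errors isn't clear from the messages, but a quick sanity check of the tuning file sometimes catches empty or oversized examples before the job is submitted. A rough sketch assuming the input_text/output_text JSONL format; the 1024-token limit is taken from the error message and approximated here by character count.

```python
import json

MAX_CHARS = 4 * 1024  # rough proxy: about 4 characters per token for a 1024-token limit

# Check that every line parses, has non-empty input_text and output_text,
# and stays within a rough length bound. This won't diagnose server-side
# failures, but it flags obviously bad examples.
with open("tuning_data.jsonl") as f:
    for i, line in enumerate(f, start=1):
        record = json.loads(line)
        for field in ("input_text", "output_text"):
            value = record.get(field, "")
            if not value.strip():
                print(f"line {i}: {field} is empty")
            elif len(value) > MAX_CHARS:
                print(f"line {i}: {field} is {len(value)} chars, may exceed the token limit")
```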
Thanks, I'm very happy with Google.
This is the reason why it's called a Large Language Model (LLM) 🤪
Interested
You are❤
❤❤❤
Generative AI Studio
Teach me about the matter master!? What time is it? Now! Where are you? Here. Help me please!
Is Nikita an AI generated character? Seems like it to me. No shade if it is a real person
What is your other language?
HI, HOW ARE YOU, IS EVERYTHING GOOD...❤😂🎉
In my language I could understand it, but unfortunately that is not possible........😍
I did not understand it at all.