Camels are my favorite animal, thanks for this.
YES! You're so welcome @Kingromstar 🤗! We had a blast putting it together!
Great tutorial! Thanks. Will definitely try it out 👏
Fabulous overview and demonstrations.
Nice presentation, subscribed! Keep it up!
Awesome stuff. You guys are killing it!
It was another amazing event with great content - thank you so much! Could you please share the slides and the notebook used for the event?
Thanks and of course!
Slides: www.canva.com/design/DAFw0D8y038/5Yh9MA2XXd2Lfr2thcsuLA/edit?DAFw0D8y038&
Google Colab Notebook: colab.research.google.com/drive/1TDiWZtb6gsM9wVXCLQrR-7OEPaQ2n-JA?usp=sharing
Great content. I really learned something.
Nice!
I appreciate the effort and the information. But there is always a reason when they don't actually RUN the notebook in a demo. Add `cohere` to the package list before the `openai` package.
You don't actually *need* Cohere for this notebook; you can safely ignore the pip error!
Some retrievers use libraries in the background, like pypdf, Beautiful Soup, and so on.
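If the error bugs you anyway, a hypothetical install cell like the one below also silences it (the package list here is assumed for illustration, not the notebook's exact one):

```python
# Hypothetical Colab install cell: listing cohere ahead of openai
# satisfies the optional dependency so pip stops complaining.
!pip install -qU cohere openai
```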
Thanks for wonderful sharing! I would like to ask if I can use GPT-3.5 to evaluate the answer from GPT-3.5 (because I am unable to use GPT-4.0 right now). Does it induce any risks when I use GPT-3.5?
Great question @tramanhphamthi5218! You can absolutely use GPT 3.5 for everything that you saw here! However, whenever you can while doing evaluation or automatically generating code, you should opt for the most powerful model that you can get your hands on. If it's GPT 3.5 for now, then go with that!
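If it helps, here's a minimal sketch of what swapping the judge model looks like, assuming the `openai` Python client (v1+); the grading prompt and criteria are illustrative, not the notebook's exact ones:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def evaluate_answer(question: str, answer: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask a judge model to grade an answer; pass model="gpt-4" once you have access."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": "You are a strict grader. Score the answer from 1-5 and explain briefly.",
            },
            {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
        ],
    )
    return response.choices[0].message.content


print(evaluate_answer("What is RAG?", "Retrieval-augmented generation."))
```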
@AI-Makerspace Thanks a lot for your response!
Could you share the URL for the notebook in the description box? Super session! 🎉
Thank you, and of course!
Google Colab Notebook: colab.research.google.com/drive/1TDiWZtb6gsM9wVXCLQrR-7OEPaQ2n-JA?usp=sharing
Great session. I am able to take a lot of this and use it directly. However, a concern of mine is using GPT-4 as the evaluator, since we are not allowed to send our data outside the company. What would you suggest for a model if it has to run on-prem? Would that be a llama-2-70b model? Again, thanks for a really great and informative session...
Thanks for the question! The best evaluator out there is GPT-4, although if you're very keen on using an open-source model instead, we'd recommend one of the instruct-tuned Llama models today.
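For a rough idea of what that looks like on-prem, here's a sketch using Hugging Face `transformers` (the checkpoint and prompt are illustrative; Llama 2 weights are gated behind Meta's license, and the 70B variant needs multiple GPUs):

```python
from transformers import pipeline

# Illustrative: any instruct-tuned Llama checkpoint you can run locally.
judge = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-70b-chat-hf",  # assumes you've accepted Meta's license
    device_map="auto",  # shard across available GPUs
)

prompt = (
    "[INST] Rate the following answer from 1-5 and justify briefly.\n"
    "Question: What is RAG?\n"
    "Answer: Retrieval-augmented generation combines search with an LLM. [/INST]"
)
print(judge(prompt, max_new_tokens=128)[0]["generated_text"])
```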
Would it be worth renaming this RAFT, since it sounds very similar to retrieval-augmented fine-tuning, or is that different from what you've done here?
Compared to OpenAI's ada-002, how well does the fine-tuned embedding model work? My application is fully dependent on OpenAI and I prefer working with better models (Sentence Transformers are good but can't really stack up to Ada). Also, is it possible to apply this fine-tuning to Ada?
In the specific case we examined, the fine-tuned embeddings performed slightly better than ada-002 on the aggregate hit-rate metric.
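For reference, hit rate can be computed with something as simple as this sketch (toy data; the notebook's actual eval harness may differ):

```python
def hit_rate(retrieved: list[list[str]], expected: list[str], k: int = 5) -> float:
    """Fraction of queries whose expected doc id appears in the top-k retrieved ids."""
    hits = sum(exp in docs[:k] for docs, exp in zip(retrieved, expected))
    return hits / len(expected)


# Toy example: 2 of 3 queries retrieve the right chunk in the top-5.
retrieved = [["d1", "d7"], ["d4"], ["d9", "d2"]]
expected = ["d1", "d5", "d2"]
print(hit_rate(retrieved, expected))  # 0.666...
```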
solid