This is what I need, can't wait for your next video!
Thanks for your comment. I'm working on other material as well and hope to share it soon. Stay tuned.
@@APCMasteryPath Hey, can I get the source code to convert the Marker output into question-answer JSON?
@@anasrachmadi9603 Here you go: drive.google.com/file/d/16uHUVyn34UpM3eBGKoOB2WnlkJtHfUmq/view?usp=drive_link
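In case the link is unavailable, here is a minimal sketch of the general idea (this is an illustrative reconstruction, not the actual script behind the link: the heading-based question wording and the section layout of the Marker markdown are assumptions):

```python
import json
import re

def markdown_to_qa(md_text):
    """Split Marker-style markdown on level-2 headings and emit Q&A pairs.

    Each '## Heading' becomes a question and the text under it becomes the
    answer. The heading-to-question mapping is a simplifying assumption.
    """
    pairs = []
    # re.split with a capturing group yields [preamble, h1, body1, h2, body2, ...]
    chunks = re.split(r"^##\s+(.*)$", md_text, flags=re.MULTILINE)
    for heading, body in zip(chunks[1::2], chunks[2::2]):
        answer = body.strip()
        if answer:
            pairs.append({
                "question": f"What does the section '{heading.strip()}' cover?",
                "answer": answer,
            })
    return pairs

md = "## Scope\nThe report covers valuation.\n## Method\nComparable sales were used."
print(json.dumps(markdown_to_qa(md), indent=2))
```

From there, each pair can be serialized one-per-line (JSONL) to feed a fine-tuning pipeline.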
Apologies for the late reply.
Thank you. Very informative.
@@mufeedco Glad you liked it.
Hi @APCMasteryPath, Thank you for the great and clear detail. Can you share your code for the example? It would be much appreciated. Also, did you use fine-tuning with your own data as an example? Wouldn't implementing this example as RAG have been more flexible in the event of updates to the PDFs?
Many thanks for your comment and apologies for the late reply. You can find the link to my code here: drive.google.com/file/d/16uHUVyn34UpM3eBGKoOB2WnlkJtHfUmq/view?usp=drive_link
In my case, finetuning was the better option. RAG just picks up the closest text found in the PDF and does not draft a response using the persona found in the text. The PDFs that I have would not be updated, as they are final submissions. The finetuning process certainly takes more time, but the end result is more accurate than the RAG model.
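To make the finetuning route concrete, a training example is typically a multi-turn conversation stored as JSON Lines, one example per line (a common ShareGPT-style layout; the exact schema depends on the chat template you choose, and the file name and message contents here are illustrative assumptions):

```python
import json

# One training example in a conversational "role"/"content" layout, a common
# shape for chat-template fine-tuning. The persona lives in the assistant
# turns, which is what plain RAG retrieval does not capture.
example = {
    "conversations": [
        {"role": "user", "content": "Summarise the valuation approach in the report."},
        {"role": "assistant", "content": "The report adopts the comparable-sales method..."},
    ]
}

# Fine-tuning datasets are usually JSON Lines: one JSON object per line.
with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")

# Round-trip check: read the example back
with open("train.jsonl") as f:
    loaded = json.loads(f.readline())
print(loaded["conversations"][0]["role"])
```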
I released a number of videos about finetuning various LLMs using a wide variety of chat templates on my YouTube channel. I would suggest that you give them a watch in your free time.
Here you go:
📽Useful videos:
⚫Llama 3.1 Conversational Chat Template for Finetuning using Unsloth & Deployment to Open WebUI: th-cam.com/video/owfxFA_L5g4/w-d-xo.html
⚫ Unsloth FineTuning & Comparing LLMs: Mistral, Gemma 2, Llama 3.1 with Chatbot Deployment on OpenWebUI: th-cam.com/video/v2GniOB2D_U/w-d-xo.html&ab_channel=APCMasteryPath
⚫Finetune your LLMs on custom datasets using Unsloth: th-cam.com/video/Y3T4FNRSFlE/w-d-xo.html
⚫Deploy Open WebUI with Zero Coding Skills: th-cam.com/video/5uT1rL6DKV8/w-d-xo.html
👍
A million thanks for your tremendous support.
Awesome, but wow... move away from your mic!