Create Your Own AI Clone Using WhatsApp Chat History | Fine Tuning Llama3

  • Published Sep 18, 2024
  • Create an AI clone of yourself from your WhatsApp chat history by fine-tuning Llama3.
    _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-
    You can upload multiple chats at once... Follow the guide on the Google Colab notebook carefully. ⚠️
    _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-
    GitHub repository - github.com/Evi...
    Colab Notebook - colab.research...
    Make sure you follow all the steps carefully! (A rough sketch of the fine-tuning step is shown after the tag list below.)
    --
    #aiclone #aichatbot #nlp #unsloth #customchatbot #deeplearning #whatsappbot #whatsapppython #whatsapp_automation #llama3 #llm #finetuning #promptengineering #chatbot #langchain #aiproject
    llama3 finetuning
    finetuning llama on whatsapp data
    finetuning llm on whatsapp chat
    training llm on whatsapp chat
    whatsapp automation
    whatsapp chatbot
    whatsapp python automation
    whatsapp python chatbot
    unsloth tutorial
    finetuning llm on single gpu
    ai llm project
    #😎
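
For anyone who can't open the notebook, here is a minimal sketch of what the fine-tuning step typically looks like with Unsloth and TRL: load a 4-bit Llama3 base, attach a LoRA adapter, and run supervised fine-tuning on chat-formatted text. The model name, hyperparameters, and toy dataset below are illustrative assumptions, not taken from the linked Colab.

```python
# Minimal sketch (assumptions): fine-tune Llama3 on WhatsApp-style chat pairs
# with Unsloth + TRL. Model name, hyperparameters and dataset format are
# illustrative, not copied from the linked notebook.
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load a 4-bit quantised Llama3 base so it fits on a single Colab GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",  # assumed model id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these small matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Toy dataset: each example is one "friend says X -> you reply Y" turn,
# already rendered into the Llama3 chat template as plain text.
examples = [
    {"text": "<|start_header_id|>user<|end_header_id|>\n\nyou coming tonight?<|eot_id|>"
             "<|start_header_id|>assistant<|end_header_id|>\n\nyeah, 8ish<|eot_id|>"},
]
dataset = Dataset.from_list(examples)

# Exact SFTTrainer kwargs vary a little across trl versions.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="whatsapp-clone-lora",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()
model.save_pretrained("whatsapp-clone-lora")  # save the adapter for later use
```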

Comments • 7

  • @shahidben · months ago

    Amazing!

  • @marcc0183 · months ago

    Hey man, I have a content creator agency and a client has asked me to clone him. Do you recommend fine-tuning first with interviews, podcasts, videos, etc. and then with the WhatsApp chats, and finally using RAG on top? Give me your email and we can work together if you want.

    • @ml_024 · months ago

      If you fine-tune the model on podcasts and interviews, it will presumably generate longer responses to questions; if you fine-tune on WhatsApp chats, the responses will be much shorter, simply because WhatsApp messages are short. RAG is good for contextual awareness, but it doesn't copy someone's personal talking/messaging style; fine-tuning is best for that. Well, I'm not an expert, but if you want, contact me at electron0024@gmail.com
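
To make the point about response length concrete, here is a rough sketch (mine, not from the video) of turning a WhatsApp .txt export into (prompt, reply) pairs where every training target is one of your real messages. The export line format differs by phone locale, so the regex is an assumption you may need to adjust.

```python
# Sketch (assumptions): parse a WhatsApp .txt export into (prompt, reply) pairs.
# The line format "date, time - Sender: text" varies by locale, so adjust
# the regex for your own export.
import re

LINE = re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}, [\d:]+\s?(?:AM|PM)? - ([^:]+): (.*)$")

def to_pairs(path: str, my_name: str):
    pairs, last_other = [], None
    with open(path, encoding="utf-8") as f:
        for raw in f:
            m = LINE.match(raw.strip())
            if not m:
                continue  # multi-line messages and system notices are skipped here
            sender, text = m.groups()
            if sender == my_name and last_other:
                pairs.append({"prompt": last_other, "reply": text})
                last_other = None
            elif sender != my_name:
                last_other = text  # keep only the most recent message from the other side
    return pairs

# Every "reply" is a real WhatsApp message, so the fine-tuned model learns the
# same short, informal length distribution the comment above describes.
print(to_pairs("chat.txt", my_name="Me")[:3])
```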

    • @ml_024 · months ago

      & don't use RAG on top of a fine-tuned model; it won't give you the results you expect.

    • @marcc0183 · months ago

      @ml_024 That's why I suggested doing the fine-tuning a little more general first with the audio content and then with the chats, but I don't know if it would work. Can you think of a way for him to keep the same speaking style but respond briefly, as if he were chatting? It also occurred to me to run the interview dataset through an AI to generate synthetic data (question-answer pairs) in the desired tone.
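
The synthetic-data idea could look roughly like this: hand each interview question plus its long answer to a rewriter model and ask for a short reply in the speaker's tone. The OpenAI client, model name, and prompt wording below are illustrative assumptions, not a recommendation from the video.

```python
# Sketch (assumptions): rewrite long interview answers into short, chat-style
# synthetic question-answer pairs. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def make_synthetic_reply(question: str, long_answer: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed rewriter model
        messages=[
            {"role": "system",
             "content": "Rewrite the answer in the speaker's own tone, "
                        "but as a one- or two-sentence casual chat reply."},
            {"role": "user",
             "content": f"Question: {question}\nOriginal answer: {long_answer}"},
        ],
    )
    return resp.choices[0].message.content

interview = [
    ("What drives your content strategy?",
     "Honestly it comes down to three things: consistency, ... (long answer)"),
]
synthetic = [{"prompt": q, "reply": make_synthetic_reply(q, a)} for q, a in interview]
print(synthetic[0])
```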

    • @ml_024 · months ago

      You have to experiment with every possible option to make it work. First try a small dataset of interview questions, fine-tune a model on it, and check how it responds. After that, generate synthetic data from the same interview questions (probably short question-answer pairs), fine-tune another model, and check whether it's up to standard. Then fine-tune the previously fine-tuned model again on the WhatsApp chats and see if that works.
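
The "fine-tune the already fine-tuned model again" step could look roughly like this, assuming the stage-1 adapter (interviews / synthetic data) was saved locally and the WhatsApp pairs were exported to a JSONL file with a chat-templated `text` field; paths, learning rate and epoch count are placeholders.

```python
# Sketch (assumptions): stage-2 fine-tuning, resuming from the interview-tuned
# LoRA adapter and continuing on WhatsApp chat pairs.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Unsloth can reload a previously saved LoRA adapter directory (assumed path).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="interview-clone-lora",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Assumed file: one {"text": "<chat-templated example>"} object per line.
whatsapp = load_dataset("json", data_files="whatsapp_pairs.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=whatsapp,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="whatsapp-clone-lora-v2",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=1e-4,  # lower LR so stage 2 nudges style without erasing stage 1
    ),
)
trainer.train()
```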

    • @ml_024 · months ago

      You can try another approach: fine-tune two models, the first on interview questions with full context, the second on WhatsApp chats. Then use a third instruct model such as Llama3 Instruct (or the ChatGPT API) to form a sort of agent pipeline: when someone asks a question, first give it to the first model and take its long response, then take the second model's response, and pass both responses to the third, general model with a custom system prompt that generates the final reply. Obviously this wastes time and GPU resources, but it will give more accurate responses.
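
A rough sketch of that pipeline, assuming the two fine-tuned adapters were merged and saved as standalone checkpoints that `transformers` can load directly; every model path and prompt here is a placeholder, not something from the video.

```python
# Sketch (assumptions): two fine-tuned "draft" models plus a third instruct
# model that merges their outputs. Paths assume merged, standalone checkpoints.
from transformers import pipeline

long_style = pipeline("text-generation", model="interview-clone-merged")  # assumed path
chat_style = pipeline("text-generation", model="whatsapp-clone-merged")   # assumed path
merger = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct")

SYSTEM = ("You are merging two draft replies into one. Keep the facts of draft 1 "
          "but the length and tone of draft 2.")

def answer(question: str) -> str:
    draft_1 = long_style(question, max_new_tokens=200, return_full_text=False)[0]["generated_text"]
    draft_2 = chat_style(question, max_new_tokens=60, return_full_text=False)[0]["generated_text"]
    # In practice you would render this through the instruct model's chat template.
    prompt = (f"{SYSTEM}\n\nQuestion: {question}\n"
              f"Draft 1 (content): {draft_1}\nDraft 2 (style): {draft_2}\nFinal reply:")
    return merger(prompt, max_new_tokens=80, return_full_text=False)[0]["generated_text"]

print(answer("Are you free to record a podcast next week?"))
```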