Chain of Thought (CoT) meets Instruction Fine-Tuning

  • Published on Jan 16, 2025

Comments • 13

  • @danfox7356 • 1 year ago +6

    You have officially become my favorite channel. ❤

    • @Smytjf11 • 1 year ago +1

      It's a hidden gem. I love the energy.

  • @densonsmith2 • 1 year ago

    Thank goodness your website is finally up!

  • @tensiondriven • 1 year ago +1

    Really love it. Do you have a link to the code for having multiple instances of GPT4 talk to each other? I have been wanting to do something similar, probably with LocalAI. Any existing code would be super helpful, even if it’s rough!

    • @code4AI • 1 year ago +1

      I'll have some videos touching upon it.

    • @rafb145 • 1 year ago +1

      Does anyone have a fully working model?
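No link to the channel's multi-agent code appears in this thread, but the setup @tensiondriven describes is simple to sketch. Below is a minimal, hypothetical example using the OpenAI Python client (v1.x): two agents alternate turns, each keeping its own message history, with the other agent's replies arriving as "user" messages. The system prompts, model name, and turn count are illustrative assumptions, not the channel's actual code; LocalAI can stand in via its OpenAI-compatible endpoint.

```python
from openai import OpenAI

# Point at OpenAI, or at a LocalAI server via its OpenAI-compatible API,
# e.g. OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed").
client = OpenAI()

def ask(history):
    """Send one agent's conversation state and return the reply text."""
    response = client.chat.completions.create(model="gpt-4", messages=history)
    return response.choices[0].message.content

# Each agent keeps its own history; the other agent's turns arrive as "user".
agent_a = [{"role": "system", "content": "You reason step by step."}]
agent_b = [{"role": "system", "content": "You critique the other's reasoning."}]

message = "Let's solve this together: what is 17 * 24?"
for _ in range(4):  # number of exchanges is arbitrary
    agent_a.append({"role": "user", "content": message})
    reply = ask(agent_a)
    agent_a.append({"role": "assistant", "content": reply})
    print("A:", reply)

    agent_b.append({"role": "user", "content": reply})
    message = ask(agent_b)
    agent_b.append({"role": "assistant", "content": message})
    print("B:", message)
```

Keeping two separate histories, rather than one shared transcript, is what makes each model treat the other as an external interlocutor instead of continuing its own text.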

  • @henkhbit5748 • 1 year ago

    Thanks for your CoT 👍

    • @code4AI • 1 year ago

      Appreciated!

  • @ricardocosta9336 • 1 year ago +1

    Nice, my dude! As usual.

  • @nadavnesher8641 • 1 year ago

    love it

  • @blablabic2024 • 1 year ago

    Did you test QLoRA? The idea of fine-tuning a LLaMA model on an (almost) sub-$1,000 GPU (RX 7900 XTX) is rather tantalizing, and possibly worth a $3,000-4,000 workstation investment.
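For anyone curious what that would look like in practice, here is a hedged sketch of a QLoRA setup with Hugging Face transformers, peft, and bitsandbytes; the checkpoint name and hyperparameters are illustrative assumptions, not anything tested in the video. One caveat for the RX 7900 XTX specifically: bitsandbytes primarily targets CUDA, so running this on that card would likely require a ROCm-enabled build or an alternative quantization backend.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative; any causal LM checkpoint

# 4-bit NF4 quantization of the frozen base weights: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections; only these parameters train,
# which is what brings the memory footprint down to a single consumer GPU.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the weights
```

From here the model can be passed to a standard Trainer loop; the quantized base stays frozen while gradients flow only through the adapters.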

  • @pensiveintrovert4318 • 1 year ago

    How about confirming the speculation in whatever crazy paper the academic paper mills produce? Collect lots of examples, not a single cherry-picked one. These models have memorized a lot of word trajectories; some appear as reasoning to enthusiastic aiphiles.

    • @code4AI • 1 year ago

      I do not read crazy papers, therefore ....