You have officially become my favorite channel. ❤
It's a hidden gem. I love the energy.
Thank goodness your website is finally up!
Really love it. Do you have a link to the code for having multiple instances of GPT-4 talk to each other? I have been wanting to do something similar, probably with LocalAI. Any existing code would be super helpful, even if it's rough!
I'll have some videos touching upon it.
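Not the channel's code, but here is a minimal sketch of the idea: two chat "agents" pointed at one OpenAI-compatible endpoint (LocalAI exposes that API), each fed the other's last message. The base URL, model name, system prompts, and seed topic below are placeholder assumptions.

```python
# Minimal sketch: two chat agents conversing through one OpenAI-compatible endpoint.
# base_url assumes a local LocalAI server; MODEL is whatever name your server exposes.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
MODEL = "gpt-4"  # placeholder model name

def reply(system_prompt, history):
    """Ask one agent for its next message given the conversation so far."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system_prompt}] + history,
    )
    return resp.choices[0].message.content

agent_a = "You are Agent A. Debate the topic and keep replies short."
agent_b = "You are Agent B. Debate the topic and keep replies short."

last = "Let's discuss whether open-source LLMs will catch up to GPT-4."
history_a, history_b = [], []

for _ in range(4):  # a few turns each way
    history_a.append({"role": "user", "content": last})
    last = reply(agent_a, history_a)          # A answers B (or the seed topic)
    history_a.append({"role": "assistant", "content": last})

    history_b.append({"role": "user", "content": last})
    last = reply(agent_b, history_b)          # B answers A
    history_b.append({"role": "assistant", "content": last})

    print("A:", history_a[-1]["content"], "\nB:", last, "\n")
```

Each agent keeps its own history, with the other agent's messages stored as "user" turns, so the same loop works against any OpenAI-compatible backend.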
Does anyone have a fully working model?
Thanks for your content 👍
Appreciated!
Nice my dude! As usual
love it
Did you test QLoRA? The idea of fine-tuning a LLaMA model on an (almost) sub-$1,000 GPU card (RX 7900 XTX) is rather tantalizing and possibly worth a $3,000-4,000 US workstation investment.
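Haven't tried it on that card, but for reference, a QLoRA setup with Hugging Face transformers + peft + bitsandbytes usually looks roughly like the sketch below. Caveat: bitsandbytes is CUDA-centric, so the 4-bit path on an RX 7900 XTX (ROCm) may not work out of the box. The model name and LoRA hyperparameters are placeholders.

```python
# Rough QLoRA sketch: 4-bit quantized base model + trainable LoRA adapters.
# Caveat: bitsandbytes targets CUDA; ROCm support on an RX 7900 XTX is not guaranteed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; any LLaMA-style checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
# ...then train with Trainer / SFTTrainer on your dataset as usual.
```

The point of QLoRA is that the frozen base model sits in 4-bit memory while only the LoRA adapters train in higher precision, which is what makes a 24 GB consumer card plausible for 7B-13B models.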
How about confirming the speculation in whatever crazy papers the academic paper mills produce? Collect lots of examples, not a single cherry-picked one. These models have memorized a lot of word trajectories; some of them look like reasoning to enthusiastic AI-philes.
I do not read crazy papers, therefore ....