1-bit LLMs
- Published on 4 Nov 2024
- Two weeks ago, Microsoft released the implementation of its 1-bit LLM transformer.
This can potentially change the world by enabling LLM inference on CPUs and mobile devices, without any internet connection.
In this hands-on video, you learn to run 1-bit LLM inference on your own laptop!
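For context, the core idea behind these 1-bit (strictly ~1.58-bit) LLMs is constraining weights to the ternary set {-1, 0, +1}, which shrinks memory and turns most of the matrix multiply into additions and subtractions. Below is a minimal NumPy sketch of the absmean ternary quantization scheme described in the BitNet b1.58 paper; the function names are my own for illustration, and this is not Microsoft's actual bitnet.cpp kernel code.

```python
import numpy as np

def absmean_ternary_quantize(W: np.ndarray, eps: float = 1e-5):
    """Quantize a weight matrix to {-1, 0, +1} with one per-matrix scale.

    Absmean scheme: scale by the mean absolute value, then round and
    clip to [-1, 1]. Ternary weights need ~1.58 bits each vs 16 for fp16.
    """
    scale = np.mean(np.abs(W)) + eps                         # scaling factor
    W_ternary = np.clip(np.round(W / scale), -1, 1).astype(np.int8)
    return W_ternary, scale

def ternary_matmul(x: np.ndarray, W_ternary: np.ndarray, scale: float):
    """Apply the quantized layer: y = x @ (W_ternary * scale).

    With real 1-bit kernels the inner multiply disappears, since each
    weight is -1, 0, or +1; here we keep plain matmul for clarity.
    """
    return (x @ W_ternary) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(512, 512)).astype(np.float32)
    x = rng.normal(size=(1, 512)).astype(np.float32)

    W_q, s = absmean_ternary_quantize(W)
    y_full = x @ W
    y_tern = ternary_matmul(x, W_q, s)
    print("relative error:", np.linalg.norm(y_full - y_tern) / np.linalg.norm(y_full))
```

Because the quantized weights fit in a fraction of the memory and the dot products reduce to integer adds, inference becomes practical on an ordinary CPU, which is what the video demonstrates.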
THANK YOU!! I've watched a number of videos on 1-bit LLMs and BitNet; none beats the simplicity and yet richness of this video.
Complex subjects simplified. I love these sessions.
Thanks, sir ❤ You are revolutionizing this field in India.
Wow, again a superb explanation - making complex things so easy to understand.
Yes - a detailed video on the theory from scratch would be much appreciated.
Thank you for making this video; very interesting.
Great video as usual!
Loved This Video :D
Hi Rajat, thank you for your video. I was able to run the 8-billion-parameter model on my laptop. It is really fascinating to see an 8B model running on a laptop. Can you provide more information on how to train these 1-bit LLMs? Does it still require a high-end GPU, or can it be trained on a CPU? I am looking to train this 1-bit LLM with instruction-based fine-tuning.
Can you explain how to download it on Windows?
What is the difference? We are also able to run models like Llama 7B locally, and it works better than 1-bit.
Really? That's surprising. I think you need at least a 16 GB GPU to run Llama 7B locally. Most consumer GPUs are 6/8/12 GB.
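To put rough numbers on the memory gap being discussed here: a 7B-parameter model in fp16 needs about 14 GB just for the weights, while ternary (~1.58-bit) weights need under 2 GB, which is why a 1-bit model fits in ordinary laptop RAM. A back-of-the-envelope estimate (weights only, ignoring activations and the KV cache):

```python
# Rough weight-memory estimate for a 7B-parameter model.
params = 7e9
fp16_gb = params * 2 / 1e9            # 2 bytes per weight   -> ~14 GB
ternary_gb = params * 1.58 / 8 / 1e9  # ~1.58 bits per weight -> ~1.4 GB
print(f"fp16: {fp16_gb:.1f} GB, ternary: {ternary_gb:.1f} GB")
```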
We use a Mac.
Please explain it mathematically.