MultiModal Llama 3.2 has ARRIVED!!!
- Published 26 Sep 2024
- 🔗 Links 🔗
ai.meta.com/bl...
huggingface.co...
Llama Stack Examples - github.com/met...
❤️ If you want to support the channel ❤️
Support here:
Patreon - / 1littlecoder
Ko-Fi - ko-fi.com/1lit...
🧭 Follow me on 🧭
Twitter - / 1littlecoder
Linkedin - / amrrs
If you want to run the model locally - th-cam.com/video/luGI1NWEAPY/w-d-xo.html
Thank you! I was intrigued by the vision models, and I think I might be able to use the 11B model for things I've been experimenting on!
Never thought I would say this, but Meta has done some really awesome things recently.
i’m so excited to get my hands dirty! even if it’s 3am here 😂
haha good night :)
It's 1 am here and same
@@shivpawar135 🥹
This channel is my go to for GenAI related content❤
Thank you :)
Same!
I've been really struggling to run LLMs on the NPU of my X Elite laptop. I am hoping this will finally open the door for that.
th-cam.com/video/luGI1NWEAPY/w-d-xo.html has the tutorial with llama.cpp, hope it works fast!
Incredible. I wonder, though, why more of the model size variants don't target consumer graphics cards. With 24 GB of VRAM, a GPU can hold roughly 20B parameters (depending on quantization), yet we typically see 8B and 12B models that run easily, then a huge jump to 70B or 90B, which can't be run locally on a consumer GPU at all.
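The back-of-the-envelope math in that comment can be sketched out. This is my own rough estimate, not an official sizing formula; the 1.2 overhead factor for KV cache and activations is an assumption:

```python
def weight_vram_gb(params_billion: float, bits_per_weight: float,
                   overhead: float = 1.2) -> float:
    """Approximate GB of VRAM needed for model weights at a given quantization.

    overhead is a rough fudge factor for KV cache, activations, etc. (assumption).
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# Compare common sizes at FP16, INT8, and 4-bit quantization.
for params in (8, 20, 70, 90):
    for bits in (16, 8, 4):
        print(f"{params:>3}B @ {bits:>2}-bit ≈ {weight_vram_gb(params, bits):6.1f} GB")
```

At 8-bit, a 20B model lands right around 24 GB with this overhead assumption, which matches the comment's intuition; a 70B model needs roughly 42 GB even at 4-bit, hence the gap on consumer cards.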
I cannot wait to update my Tamagotchi with some brainzzz
Meta gave us React for free, they are now giving us AI. Big W
Thanks for the update👍 meta is the champ for opensource👏
The models aren't open source, they're open weights. They haven't released any of the model architecture source code publicly.
For the VLM, can you test whether the model can convert complex flow diagrams to structured JSON output?
That's usually been a challenge, given that a lot of these models aren't trained on such diagrams. But I guess Mermaid could have been part of the training data. Let's see. Thanks for the suggestion!
@@1littlecoder So hypothetically speaking, if a synthetic dataset were generated for this task and a Llama 3.2 model fine-tuned on it, could we expect it to generalize? What are your thoughts?
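For concreteness, here is a hypothetical sketch of the kind of structured JSON target such a synthetic dataset might use. The schema and field names are invented for illustration; they don't come from any model or paper:

```python
import json

# Hypothetical target schema for a flow diagram: nodes plus directed edges.
# A fine-tuned VLM would be prompted to emit exactly this shape.
diagram = {
    "nodes": [
        {"id": "start", "label": "Start", "type": "terminator"},
        {"id": "check", "label": "Is input valid?", "type": "decision"},
        {"id": "end", "label": "End", "type": "terminator"},
    ],
    "edges": [
        {"from": "start", "to": "check", "label": ""},
        {"from": "check", "to": "end", "label": "yes"},
    ],
}

# Round-trip through JSON to confirm the target is valid, parseable output,
# the same check you'd run on the model's generated text.
parsed = json.loads(json.dumps(diagram))
print(len(parsed["nodes"]), len(parsed["edges"]))
```

One appeal of a fixed schema like this is that each generated sample can be validated automatically (parse, then check node/edge IDs), which makes building a synthetic fine-tuning set more tractable.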
Awesome video! You explained it really well and you were quick getting this out 🎉
Got it, and it works decently.
CPU or GPU?
Yeah, I can start replacing Haiku for vision.
Another quick video from you.
Wow! So incredible! 😮
Hmm... wonder what I can get running on my old Linux computer with a GTX 970 (4 GB video RAM) and 16 GB normal RAM.. lol
Did you see this? It might work fine - th-cam.com/video/luGI1NWEAPY/w-d-xo.html
After trying it out, Llama 3.2 3B is the Gemma 2 2B competitor I was waiting for. It seems to be an excellent upgrade, while being only 19% bigger.
Such small models are heavily underrated. Easy to sell it to a lot of small and medium businesses as well!
Please make a video on running these small 1B/3B models on Android and iPhones if possible.
Got the local setup covered at this point - th-cam.com/video/luGI1NWEAPY/w-d-xo.html. Will try to get it running on a phone if possible; I don't have an iPhone to check, but will try!
you're quick bro
Ooooo exciting
3B looks good
Scoop 🎉❤
❤️
Bro, you're earning a lot by reviewing AI.
👀
You should do it too.
Zuckerberg is cool guy 🤟👽
Yeah who would have thought that they do finally something good with all the data they have collected.
❤🫡 Excited to test out the Llama 3.2 1B model on my M1 Mac, and it's pretty cool to have a vision model. I'm confused as of now about whether the 1B model is also a vision model or not.