The goddamn holiday hat I didn't expect... the combo of the avatar and the hat was very uncanny, terrifying on that weird-ass face. I'd get rid of that avatar and move to something a little less creepy (there's a LOT that could work). I know it got a 50/50 vote or whatever, so how about an alternative mascot competition???
I absolutely do give the team huge props for open source!
So it's one giant crew of agents with an actual mixture of models, that being the real part
Still can't draw a simple working floating OpenGL randomly rotating cube (like all the other LLMs out there). [It draws something, but not a working cube in C with, say, coloured faces on each side.]
Will there be a way to run this in LMStudio or ollama on regular pcs as an LLM?
There's an article discussing that, and demonstrating what's needed to do it, on day 2 of the "12 days of EXO". A stack of 8 Mac minis of the 64 GB RAM M4 variety did it. $16-20k isn't really an "at home" project, though.
The training-cost deltas will turn out to be the most important aspect of this, with effects on NVIDIA and the like down/upstream. Such a simple and direct architecture. Some great technology here. It's the first one that excelled in my coding/research/creativity/prediction "Uber prompt" combination testing. Haven't tried multi-modal yet in native form, but it cooked up the required methods (agentic) with no problems at all.
Looking great 👍
Dude, Christmas is over..
I’m getting the same reaction / general vibe as I did when the jailbreak community reacted to the Pangu tool being released by a Chinese dev team, one of the first major Chinese releases of such a tool . . .
I can’t get past the ill feeling that I get when I consider using a Chinese company’s LLM. What knowledge are they gaining from our work on their system?
chill bro, it's open source, read their paper. you can use it even locally
Huh? You do realize United States citizens use open source too?
4o?
Dude, o3 is out on the market
lol exactly-- this glorified agentic model to delegate tasks
Did you use it? It's just a gimmick.
@@sasa-tg4od I hope it's a gimmick so far.. cuz it sounds a little far ahead of the others
I can't find anything about what hardware is required to run it locally. Is it an online-only service, like ChatGPT? Or can it run on a 4090, for example?
EDIT: Okay, it clearly isn't a model that can run locally. Based on this video it sounded like it was, but I found that even a quantized INT4 model still requires 4×80 GB H100s or H200s. So it's cool that it's open source, but it's clearly going to be something only run online by services with that kind of hardware. I guess it'll allow for even more competition, which is good. I wonder what limits they have in place; I find a lot of the "free" AI sites have a limit, so you're working on something, then told, sorry, come back tomorrow.
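A rough back-of-the-envelope sketch of why multiple 80 GB GPUs come up. The ~671B parameter count below is an assumption (plug in the real one for the model in question), and the formula counts weights only:

```python
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate memory for the weights alone, in GB.

    Ignores KV cache, activations, and runtime overhead, which all add more.
    """
    return n_params * bits_per_weight / 8 / 1e9

# Hypothetical ~671B-parameter model quantized to 4-bit:
print(weight_memory_gb(671e9, 4))  # 335.5 GB of weights alone
```

Four 80 GB cards give 320 GB total, so even INT4 is tight before overhead; a 24 GB 4090 could hold only about a 48B-parameter model at 4-bit, nowhere near this class.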
SBGAI INTERNATIONAL STUDIO 🎙️
It's like an integrated agent system. 😁👍
YEEEEESSSS LETS GOOOOOO YEEEEHAWWWW NWONWONWONWO 🎉🎉🎉🎉
Enough with the creepy avatar.
I can't even take it seriously
Jeeeeeeeeezuz - Wow
love the content but that avatar is creepy. hat doesn't help ;-)
I suggest changing the avatar ;-)
Magnificent
🧞Avatar 🧞
🔹🥇🔹
🧞👀AI👀🧞
👍Thanks 👉
Zu Chongzhi
Chinese 🥇🇨🇳, Mathematician
Liu Hui 🥇🇨🇳,
Geometry.
💚 Thanks 💚
🌏🐉🇨🇳🐉🌏
💜☸️☯️☸️💜
Magnificent
🛸 Taikonaut.
Automated 👀
Logistics 🧞🥇.
🥇Harmony OS.
🥇DeepSeek AI.
🥇Qwen 2.5 AI.
🧞 Zuchongzhi
Quantum Chip.
🧞🥇🇨🇳🥇🧞