7:07 This is the best confetti explosion I've seen on your channel. It floats and flutters and flips. It's way better than the confetti from some of the "smarter" models on this channel in the last few weeks. Those were fine, of course, but this one is really good.
'Well it doesn't cost $200... so that's good' 😂
2. How to run locally
To utilize DeepSeek-V2.5 in BF16 format for inference, 80GB*8 GPUs are required.
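For anyone following that "How to run locally" note, here's a rough sketch of what BF16 inference with Hugging Face transformers could look like. The repo id and multi-GPU sharding shown here are assumptions for illustration, not the official instructions, and this assumes you actually have the 8×80GB of GPU memory mentioned above:

```python
# Minimal sketch: BF16 inference with transformers (repo id is an assumption).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-V2.5-1210"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,   # BF16 weights, per the note above
    device_map="auto",            # shard across all available GPUs
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Write a short haiku about confetti."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```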
@nahuelcosta8550 Right on point!
I think it'll be a trend but won't last long. I wonder if Devin or OpenAI came up with the "strategy".
Great breakdown of DeepSeek-V2.5 (1210)! Your style makes learning fun.
Great video! "DeepSeek-V2.5 (1210) & Search Agent: This GREAT Opensource LLM got MUCH BETTER!" was super informative!
Thank you for your consistently updated content. I appreciate it ❤
Love all the little shots at ChatGPT. 😂
You could also do a test of LG's Exaone models. They just got released 2 days ago. English/Korean trained.
Great video again today. 👍
King, can you please cook up a video on crafting a second brain that's analysed locally by an LLM?
I noticed that the 50-messages-per-day alert is gone. Did they remove the limit for R1?
I think.
Hi, I want to ask you about another topic: is there any open-source option for lip sync that is better than wav2lip? Or something free online? Regards
7:42 Divided by nations, united by thoughts 🤣
I love Qwen and DeepSeek models.
Ha ha… nice… you opened up with an AI version of Portland Head Light! A local lighthouse near me.
From all your tests so far, I haven't seen any model that got question 4 correct 😅
Nice, like always.
❤
Google is whooping a$$ at the moment.