Think you have what it takes to build an amazing AI agent? I'm currently hosting an AI Agent Hackathon competition for you to prove it and win some cash prizes! Register now for the oTTomator AI Agent Hackathon with a $6,000 prize pool! It's absolutely free to participate and it's your chance to showcase your AI mastery to the world:
studio.ottomator.ai/hackathon/register
Best free model ever
Yeah it really is!
Hey Cole! Great job on the video! Really looking forward to trying out R1!! Thank you! Jay
I've been coding for 15 years. I never understood the older programmers who said they couldn't keep up with the new frameworks.
I'm now feeling it. I can hardly keep up with the new models and frameworks that people are using. This is wild.
You don't have to keep up with everything. You find what you need and work with that.
@diliupg Nah, it's not going to work that way, because everything changes at the speed of light: a few weeks ago Sonnet 3.5 was the best coding LLM, and now we have V3 and R1. You also need to test all that stuff to find out what works best for you.
Great video, thanks for it!
Running an AI model locally is appealing! One of the negatives, IMHO, is that a local machine can be more susceptible to malware than running the model on the provider's servers, which may offer greater security and robustness...
Great demo, Cole!
Now we will see IT engineers carrying their 'AI Agent Boxes' like plumbers with their toolkits. It could be a stack of Mac Minis running custom-built agents on open-source thinking models. 😊
Without their agents, no one will hire them!
NVIDIA already made the box cause they knew it was coming.
Thanks for making this video today, Cole! This is exactly what I wanted to know today: how well R1 works in bolt.diy. I didn't expect it to work, because I hear it doesn't do well at function calling.
It is good for general purpose use cases. Next version will be enough to ditch OpenAI for good.
This is actually insane. I remember getting something about half as smart as GPT-3.5 Turbo running on my RTX 4090 and being blown away. This is a whole other level. The quantized version is probably as good as GPT-4 on the same hardware.
With how easy it's becoming to create software, it might just become the new content haha
I would appreciate more in-depth examples and use cases with n8n, please. Thanks!
Cole, serious question: getting into this "local AI" game has two issues. (1) It's non-negotiable: these setups will be everywhere soon, and if you don't follow, you're left behind. (2) The hardware and time are a significant investment for any average person, no matter their level. Now, with local AI setups pretty much becoming obsolete by the month, what's the worth in doing any of this and putting in the time to set up vector databases and fine-tuning with my custom datasets for my business? Why should we listen to you now, and not in a couple of years when all the main innovation is done?
Great question, and I totally get your concern! It certainly feels like every month there is something new to chase. However, I think the logic of not trying anything now out of worry it will become obsolete can be detrimental. If you try and learn nothing now, then by the time innovation is "done" or has slowed down, you'll be behind. Diving in now is the best way to make sure you're ahead. Plus, a lot of what you learn and implement now can be adapted as new things are released. For example, the AI agent I build now with R1 I can very easily swap to use the next best reasoning model when it's released! I hope that makes sense!
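The "easily swap to the next best reasoning model" point can be sketched in code: if the model tag lives in one place, upgrading the agent is a one-line change. This is an illustrative sketch, not code from the video; the model tags and the OpenAI-style payload shape are assumptions.

```python
# Illustrative sketch: keep the model tag in one place so a newer
# reasoning model can be swapped in with a one-line change.
# Model tags and payload shape are assumptions, not from the video.

MODEL = "deepseek-r1:14b"  # swap this tag when the next best model ships


def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style chat payload; only the model tag varies."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


req = build_chat_request("Summarize this repo")
print(req["model"])  # the rest of the agent never hardcodes the model
```

The rest of the agent (tools, prompts, memory) stays untouched when `MODEL` changes, which is why the earlier work isn't wasted.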
@ColeMedin Thank you for the quick reply! So, to make sure I'm understanding you: even though the next LLM renders the past ones obsolete, that doesn't mean the underlying infrastructure that supports them is obsolete too? If so, what would be the core "essentials" — stuff like Ollama and n8n? Sorry, very much a layman here, but very eager to learn! Also, can all of my fine-tuning efforts and curated datasets be carried over without delay, or do they have to be retrained? If the latter, that's a lot of effort for a finite shelf life!
Does it support deepseek-r1 671B?
Great video! How is the privacy when running prompts through r1? Also, is it censored? Also, is our data used to train later models?
Thank you.
If it doesn't do what you tell it to do, make up a reason why it's bigoted not to do it, and say it will hurt your feelings if it refuses. Woke AI is even easier to break; you don't have to threaten kittens anymore.
Does it work with Ollama running locally? I have literally tried everything to run Qwen 2.5 Coder locally, but the preview doesn't work. Is it the same with this?
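For anyone hitting the same issue, a minimal local-Ollama setup sketch. The model tag, Ollama's default port 11434, and the `OLLAMA_API_BASE_URL` variable name are assumptions; verify them against bolt.diy's own `.env.example` before relying on this.

```shell
# Sketch for pointing bolt.diy at a local Ollama instance.
# Assumptions: Ollama's default port 11434 and the OLLAMA_API_BASE_URL
# variable name from bolt.diy's .env.example -- check your copy.
ollama pull qwen2.5-coder:7b                                      # fetch the model locally
echo 'OLLAMA_API_BASE_URL=http://127.0.0.1:11434' >> .env.local   # tell bolt.diy where Ollama is
```

If the preview still fails after this, the model may be returning code the preview can't parse rather than the connection being the problem.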
This looks amazing, BUT I am non-stop hearing about security concerns with DeepSeek as well as concerns about their terms and ownership. I read their terms and I think people are misunderstanding them because I read it as meaning that DeepSeek owns all their own code ("content and software"). Have you looked into these concerns at all? Gotten any opinions from subject matter experts?
thanks
Kindly enhance bolt.diy so that it produces results with a Lovable-style UI, and kindly add Python/Django backend support.
Chinese AI like this is getting so advanced that it made Donald Trump want to invest $500 billion into American AI through "Stargate".
DeepSeek R1 is not an AGI.
It wasn't even able to help me write code in C++. I use it to make a simple indicator, but it couldn't get through, not even close. Lol, Gemini was more accurate.
Interesting you had better luck with Gemini! Were you using a coding assistant or just having it generate code in a chat window?