Thanks for going through it so fast us noobs can't keep up. Well done.
hey brother, thanks for the feedback, hopefully with experience i'll do better next time. i linked a doc i made in the comments, were you able to follow it? let me know if you run into any issues.
bro, your yt channel seems new but the content is great. I am hoping to see more awesome content in the future. keep cooking 😗👌 your description has enough information and guides, but the beautification is missing. you are working hard on everything about this channel, right? then don't spare any sides for haters to complain about. Light speed bro⚡⚡
hey man, thanks very much for the advice and good vibes. i’ll learn more and make things tidier, hope you have a good day bro.
you are such a hero for making this.
If you want to be a superhero, make a follow-up video that explains how to do all that but taps into a cloud GPU for the heavy lifting, or alternatively, a GPU on a local network machine.
Thanks! Liked and subscribed!
thanks mate! and interesting idea, we'll have to see if it would be less expensive than using the API of a cloud open-source model. i'll look into it. until then, you can rename .env.example to .env and put an API key there if your hardware doesn't support 32B. the github page of the project has decent instructions on how to do that. have a good day!
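for anyone wondering, on windows cmd that rename looks roughly like this (the variable name below is just an example i'm assuming, check your own .env.example for the exact names the project expects):

```shell
:: from inside the cloned project folder, in CMD
ren .env.example .env

:: open the file and paste your key next to the matching variable,
:: e.g. something like (exact name depends on your provider/version):
::   ANTHROPIC_API_KEY=your-key-here
notepad .env
```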
@@techronin7 maybe u could use kaggle for the gpu and if that works then maybe invest in an actual gpu
Keep it up! Fireship's style is good enough to be standard 😅👏
Subscribed to techronin!
thanks very much bro, i'm glad you liked it!
Such a to the point tutorial. Great job! 👍
thank you, glad you enjoyed it!
can I just launch the .bat without launching qwen in the CMD?
Or do I have to ollama run in the cmd before opening the .bat?
hey bro, you can open it directly from the .bat file if i'm not mistaken!
for a 4ghz cpu and 16gb ram i recommend using 7B, it fits 100%
but it takes so long, so for a basic ai experience use 4b or the lowest - fast, but not as good as 7b
in the end i recommend 32gb or 64gb ram for the best experience
definitely brother, the ram and gpu are very important for performance. hopefully in a few months this hardware will be less expensive, making the good models more accessible.
@@techronin7 sure
And that's why online ai is better
Why so fast bro….😭 I can’t see anything…great job btw
sorry mate! it was my first video ever so didn't know what i was doing tbh. hopefully you can follow along the google docs in the description!
error "There was an error processing your request: No details were returned". I have done every step correctly and I can even see the llms. I have tried what you suggested to others who got this same error, but the problem was not fixed.
I have an HP EliteBook 840 G5 with 16gb ram, a 512gb ssd and integrated graphics. i followed all the steps, but the problem i am facing is that it constantly shows a request error on the ollama model qwen-coder:7b. mistral is working, but not very smoothly.
i see brother, this is definitely a setup that requires a discrete gpu to work. integrated gpus are unfortunately not able to run the qwen model
@@techronin7 can you suggest me any lighter llm free for my setup
in your situation, I would recommend the bolt.new website. you get free context tokens daily, so it will just take a few days to work on a project. in your first prompt ask to "create separate components for better organization and maintainability" so when you ask for changes it doesn't rewrite the whole code and waste your tokens. good luck in your project brother!
What are your computer specs, and how was the performance when you were using this?
i have an 8gb 3070ti, with 32gb of ram and a 12th gen i9. i can run everything up to 32b, but speed-wise 7b is the best, and 14b is okay but i have to wait 5-ish min after each prompt
Hello, do you recommend a 3090 for AI and for using BOLT locally? I'd like to know which model you recommend for a 3090, the 32B or the 14B? Is there a big difference between the two? I want it to be a bit fast and not take 10 minutes to generate; it seemed very slow for you. What GPU do you have?
Because either I buy a custom 2080ti with 22GB for 550 euro, or I take a 3090 with 24GB for 550 euro. Could you advise me what to do, and which model I could use without problems (well, without wasting time)?
May i ask which model you'd recommend for 128GB DDR5 and a 3080Ti to use bolt for coding? i tried bolt with pinokio, but it does not accept ollama, only the GPT API - on Github many report the same error. And ... can YOU upload Pictures or Files on Bolt like on the Online-Version?
how much vram does your 3080ti have? and file uploads are a high-priority item on their roadmap, should come in the coming weeks!
hey mate! did some research and apparently they released file upload. search "Eduards Ruzga" on youtube he made a video about it.
@@techronin7 holy moly... i thought it would be a joke - but you are really giving me a good hint!!! Thank you very much! Subscribed. I will write you a PM if okay for you.
for sure mate! appreciate the sub :)
hello, i have the problem you got in your first chat attempt
"There was an error processing your request: No details were returned"
do you have any tips to make it work?
hey mate, this is a common issue without an established fix. what helped me solve it was closing and reopening the server a few times, switching through the listed llms (what i did in the video), and running 'ollama run modelname' in cmd before running the bat file. if you're at that step, you're really close to running it. let me know if that helps!
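to be concrete, the order i mean is roughly this (the model tag is just an example, swap in whichever one you downloaded):

```shell
:: load the model in ollama first so it's warm in memory
ollama run qwen2.5-coder:7b

:: in a second CMD window, sanity-check that ollama sees your models
ollama list

:: only then launch the .bat / dev server and pick the model in the ui
```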
There was an error processing your request: No details were returned
hey mate, this is a common issue without an established fix. what helped me solve it was closing and reopening the server a few times, switching through the listed llms (what i did in the video), and running 'ollama run modelname' in cmd before running the bat file. if you're at that step, you're really close to running it. let me know if that helps!
There shouldn't be a need to 'unlock' anything. I just use the Qwen 32 model at glhf chat without restrictions. No install, no cloning, no tweaking. Just start typing. Done.
it does not work. i did all the steps but i'm getting a stupid problem: Error: accepts 1 arg(s), received 2 lol
hmm haven't seen that error before, which step of the document does it appear at?
But will it be as good as the original? Because we are not using their original model though.
codellama should be just as good
they’re using claude 3.5 sonnet. the 32b model is comparable. if you want to use the same model as the original bolt, you can also link it to ottodev via your own claude api. it’s going to cost money, but a lot less than the subscription bolt charges.
senior dev with no human rights 😂😂!
poor guy just wants to eat vram and chill
Nice, keep it up
thank you brother! i'll do my best.
is this a fireship parody account?
haha sort of! he’s my favourite youtuber, a genius at making tech videos entertaining honestly. i learned how to edit recently, i promise i’ll get better and find my own style. take care brother.
@@techronin7 Hi dude, I don’t usually comment, but when I read this and went to check, I was really surprised at how small your channel is.
So I just wanna say great job on the video keep it up. 😊 and don’t worry about the style I only noticed it after rewatching.
Anyway I got this in my recommendations and given the size of your channel.. you must be doing something right. Keep up the good work man
@@kronos2266 thank you for the support man, your comment really motivates me to improve and post good stuff on here. hope you have a great day
it's saying Error: specified Modelfile wasn't found in the last step
hey mate, that means the file path used after "ollama create -f" wasn't the one where the modelfile actually is on your computer. the C:\ part is different for everyone. you can right-click on the file, go to properties, and see its location. then copy that path, add \modelfile to the end, and you should be good! let me know if it works.
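as a sketch, say the file ended up in your Downloads folder (the path and model name here are just examples, use your own):

```shell
:: the full path to the Modelfile goes after -f; quote it, because an
:: unquoted space in the path can cause "accepts 1 arg(s), received 2"
ollama create mymodel -f "C:\Users\you\Downloads\Modelfile"

:: confirm the new model shows up
ollama list
```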
in package.json, you must replace the full line 11 with this:
"dev": "remix vite:dev --host 0.0.0.0 --port 5137 --open",
@@techronin7 okay thanks
I'm lost. I got to the "quick pnpm install" step and I don't know what that means, or where I'm meant to get the files from that you have on your desktop
hey mate, here's the step from the google docs i shared in the description:
Step 4: Install Dependencies and Start the Application
1. Open CMD with admin permissions
2. Type the following command then press enter: cd path\to\OTTODEV
3. Type the following command then press enter: npm install -g pnpm
4. Type the following command then press enter: pnpm install
5. Type the following command then press enter: pnpm run dev
the path\to\OTTODEV is the path of the ottodev folder you have cloned, let me know which command doesn't work!
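all together it's just these four commands in an admin CMD window (the folder path below is an example, use wherever you cloned it):

```shell
:: go to the cloned ottodev folder
cd C:\Users\you\Desktop\OTTODEV

:: install the pnpm package manager globally through npm
npm install -g pnpm

:: install the project's dependencies
pnpm install

:: start the local dev server
pnpm run dev
```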
Hi bruh, i did all the necessary stuff but i'm getting an error in the last part saying "there was an error processing your request: no details were returned", can you help me fix it?
The ollama model download is 5gb, right?
hey mate! yes it's 5gb, and the llms you download will have various sizes.
@techronin7 My RAM is 4GB, is that possible?
it says on their website the minimum requirements are 8gb of ram, so performance might be slow brother
the UI 💀 I think otto dev is doing something wrong
it's getting updated constantly, hopefully they get a gradient going soon haha
So much editing man 😂, how much time did it take you to edit this video, and what software do you use for editing and animation?
yeah brother, took me about 8-10 hrs for the editing (if we include rounding the edge of all pictures and upscaling a few, and doing the gif faceswaps). i used capcut, definitely the simplest program for beginners like me! cheers bro
@ Great hardwork bro, keep it up
@ btw where do you find these meme gif and images, I also want to implement this in my gaming channel
i got the images mostly from google images, the gifs from giphy (to download them you have to paste the link into a giphy downloader), and i used the adobe background removal tool. some of the images i generated myself using flux. hope that helps!
How do i do this on macOs?
good question! i don’t have a mac so never tried the process for that platform. have you had any luck searching for “ottodev mac tutorial” on youtube?
Fireship 2.0
that’s the goal brother! but he’ll always be the best.
You should not clone onto the desktop when it's part of OneDrive. it will start uploading all the files to OneDrive.. always use a different drive.
thank you for the advice brother, i set up my windows using a microsoft account but then made the account local. i’ll fix that right away.
for those in the same situation, here’s what i found:
If your desktop files are being synced to OneDrive despite using a local account, it's possible that OneDrive was set up to back up your folders automatically. In Windows 11, even local accounts can have OneDrive access if not manually unlinked. To stop this, you can:
1. Unlink OneDrive: Go to OneDrive settings and select "Unlink this PC"
2. Disable Folder Backup: In OneDrive settings, disable backup for specific folders like Documents, Pictures, and Desktop
3. Uninstall OneDrive: If you don't use it, uninstall it via Settings > Apps
It's good but it can't be compared to the original at all.
cheers, mate, i agree, the fork doesn’t quite match up to the original at the moment. that said, their roadmap does look pretty promising. only time will tell if they can secure the funding to take it to the next level. i’m definitely rooting for its success, though, gotta love a good free and open-source project
i am not able to utilize gpu plz help!!
Will this work with AMD GPU ?
windows only afaik
@@GabrielM01 I mean, of course Windows - but the graphics card - must it be Nvidia, or can it be an AMD Radeon card?
how much vram does your gpu have? imo it should work, i’m running nvidia
@@techronin7 12GB. Radeon 6700XT . TY for your answers BTW ❤❤
@@cocosloan3748 in windows, my Radeon RX 6700XT works fine with ollama
no cost??? running that model with 32B parameters will cost you an arm and a leg
you’re right, hopefully they can improve the 7b parameter models even more in the coming months! are you able to run qwen 7b?
@ yeah but 7b wasn’t smart enough, so I switched to 14b and it’s a little better but not what you’d expect. I can’t run 32b on my MacBook. I have 48gb ram, not enough for 32b
@@darahk88 i have 64gb on my linux install, runs on the cpu tho, no rocm support
@@darahk88 same, the 14b model is the best one i can run smoothly currently. i’ll be on the lookout and upload an updated video when a better lightweight model gets released.
@@GabrielM01that’s great to hear mate, hopefully you cook great things with it.
is it possible to create something real with this "free-by-hand" setup? :)
hey mate, you can create nice looking websites for local businesses for sure. As for creating web apps, you can create a proof of concept for an application (prototype) that you can then hire devs to develop, which saves costs as they have less work to do.
Bro the video is too fast and you are not even trying to explain the steps
Like come on bro
Wtf I really had to watch in 0.25x speed
Wtf man
Like "break your neck "
"Do backflip "
Like what are you even trying at this point?
hey mate, i’ll improve in my next video, thanks for the feedback! please don’t break your neck though. you can access the google docs in the description and hopefully you’ll be able to follow along with that! let me know if you’re stuck at any steps.
@techronin7 dude I was about to go crazy because I was getting so many errors 😭
If it weren't for chatgpt I would be cooked
-----
The error: my onedrive wasn't synced
-----
It took me one and a half hours until I asked chatgpt
😭😭😭
@@Quran_short769 got it! that probably means your C: path wasn’t the same as mine. i’m glad you got it to work!