Thank you for sharing! It's been a pleasure building this with the community!
Doing amazing
Thank YOU, for starting this amazing project :)
Thank you!!
@@ColeMedin Yes, thank you so much! I honestly can't wait to just speak my idea and then, moments later, it's completed! Then again, I also thought we'd have flying cars by now.
Nicely done @ColeMedin - Thanks for your efforts
I could literally listen to this guy all day--even if I don't understand what he's teaching, I would still listen!
Lol! Thank you, thank you!
@@vincibits I found you a few weeks ago. I came for the tech, I stayed for the voice!
Bro is so calm; the way he explains everything is amazing, big ups🙌🙌
Thank you so much 😀
Bro, you are literally sitting on a gold mine! Clone your voice and use it for one of those AI apps that help people sleep! Your demeanor and everything is so chill! Amazing tutorial, btw.
Wow, thanks
I can literally sleep to this video, your voice is so calming it's almost ASMR
If my voice can put you to sleep, I should start charging for bedtime stories! 😂
@@vincibits Maybe create a separate coding ASMR channel? Trust me, you'll get a big audience; there are plenty of devs with ADHD who love ASMR, me for example.
You are an amazing teacher! Love your personality and calm talk! ❤️ Subscribed!
Thank you!!
Great teacher! Simple and straightforward, no complications.
In your opinion, what is the best API to use in media production?
Thank you for the kind words! Can you give me more context about the question, please?
Windsurf is pretty good too! But what you've covered here is excellent. Love your videos, dude; I learn a lot from you!
Glad you enjoy it!
Thank you so much; you are so analytical and patient with the explanations! I am a fan of Medin as well; advanced stuff! Can you explain how to deploy the apps created, please? Keep going, you are amazing!!!!
Thank you so much! I will have to make a video to show how to deploy these apps :)
This is an insane amount of value.
Glad you like it.
Content great, voice sensational. Thank you
Thank you! I really appreciate it!
Nicely explained! This is exactly what I wanted.
Wonderful!!
Is it possible to get a video done which shows EVERYTHING (like how to set up Docker on a local Ubuntu server) from start to finish: things like how to turn .env.example into .env, and how to set up the npm tools. START TO FINISH for the noobs. It would be awesome.
Okay, I’ll do a full video later, but I don’t have Ubuntu
Meanwhile, I would search online how to do that -- there are tons of tutorials and it's not that hard :)
@@vincibits Yes, I have been looking at many YouTube teachers on those things, but whenever one says RTFM, I realize it could take DAYS to look at all the options and implement it without fault. The YouTubers who actually go through the whole process as a live stream, together with their community, end up producing the best, most detailed tutorials that cover all the possible GOTCHAs.
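For anyone who wants the short version while a full video is pending, here is a rough sketch of the start-to-finish steps on Ubuntu. The repository URL is an assumption (check the video description for the real one), and the exact install commands may differ from the project's README:

```shell
# Sketch only: repo URL assumed, verify against the project's README
git clone https://github.com/coleam00/bolt.new-any-llm.git
cd bolt.new-any-llm

# Turn the example env file into the real one, then add your API keys to it
cp .env.example .env

# Install dependencies and start the dev server
npm install
npm run dev
```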
Thank you so much cole medin, i have really enjoyed using bolt❤
Glad you enjoy it!
Thanks for sharing, well explained :) Wondering which Ollama model suits a Mac Studio M2 Max with 32 GB RAM best?
Thanks
Thank you! I would start with qwen-coder first and see how it works for you.
@vincibits thanks, I will try
@@MIkeGazzaruso Also, remember that you'll need a pretty beefed-up computer for these local models to work well. If you really want to build something useful with this tool and you don't have a lot of memory or RAM, I recommend a DeepSeek model, or OpenAI or Claude Sonnet... these are not free, but they aren't too expensive either. Just a thought.
@@vincibits Yeah, with my Mac Studio M2 Max (32 GB of unified RAM, thanks to the Apple Silicon SoC) I can run qwen2.5-coder at 14b pretty decently, but I think the only model nowadays able to build something cool (and not a tic-tac-toe or todo app) is still Claude Sonnet 3.5.
@@MIkeGazzaruso OMG!! My thoughts exactly! I am actually putting together a video about this as we speak! Well, great minds.... :)
This is good 👍, real quality content
Thank you so much!!
Thank you... nice video, thanks for sharing 😊
Thank you
You need to go deeper, as in real projects: for instance, adding/removing themes, auth, fixing business logic... This is the real limit for now, as most solutions forget the context, focus only on one single task, and often implement unrelated attributes, features, and methods. Package versioning is also a big hurdle for such innovative coding assistants.
Is this a request? Because you are not paying him to make this video, you know. Presentation is king. Start with a thank-you for making the video, and ask courteously if he could CONSIDER going deeper on what YOU need done. Please and thanks!
Thanks for the feedback
@@FarisandFarida Some people still seem to live in the medieval age, where politeness was a privilege meant only for the nobility. Like a rare comet passing through the sky, their understanding of common courtesy feels outdated and fleeting.
@@FarisandFarida Sorry, I guess you did not read my full post. To be able to comment on the video, I first had to appreciate it and follow the author's channel, and I spent time listening and learning despite having only a few minutes. My comments were not a request; I shared my experience, mentioned how impressive those tools are, and suggested how we can drive their roadmap toward real projects. I did skip the "Thank you for the video," as I commented quickly, but that doesn't mean disrespect to the author or the bolt.new initiative.
Just what I was looking for recently. 😊
Enjoy!
Thanks for the inspiration Sir!
Thank YOU!
Hey, great video! I have a question about costs. Are there limits?
If you use Ollama models, locally, there are no costs :)
Great video, thanks! 💪 Which tool are you using for coding in it?
Which one?
@@vincibits The one with the explorer view of all files on the left-hand side (dark theme). Thanks
@@Eldorado66 docker
No, it's not Docker; you can't code inside Docker, what are you talking about? I think I figured it out: it's Visual Studio. Thanks anyway.
Wow, nice work, thanks! Can I deploy on this, or just run code?
I don’t think you can deploy, not yet.
Hello. Could you tell me the extension of the modelfiles you create? When I create a modelfile with the .md extension, the command gives an error: "specified Modelfile wasn't found". I'd appreciate your help with this. Thank you.
I believe I said in the video that the file must not have an extension. Remove the file extension and it will work.
@ thanks 🙏
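To make the "no extension" point concrete, here is a minimal sketch. The base model tag and the num_ctx value are assumptions for illustration; the important part is that the Modelfile has no .md or .txt suffix:

```shell
mkdir -p modelfiles

# Plain text file with NO extension (not modelfile.md, just a bare name)
cat > modelfiles/qwen-coder <<'EOF'
FROM qwen2.5-coder:7b
PARAMETER num_ctx 32768
EOF

# Build a named model from it (Ollama must be installed and running)
ollama create qwen-coder -f modelfiles/qwen-coder
```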
What hardware limitations do I need to think about beforehand?
You need to have at least 10GB of RAM, but even then you won’t be able to run large models comfortably.
@@vincibits my goodness :'
@ I know
Subbed, thank you
Thank you!
Great video. I am trying to use OpenAI. I have put my API key into the .env file, but it returns an error (There was an error processing your request: An error occurred.) when prompting. Any idea what the issue is and how to resolve it?
Thank you! Is Ollama free, and do we not need to add any key to .env?
Yes, correct
What are the specs of your PC? Because for me there is so much lag.
I have a MacBook Pro M4 Max and it was struggling a bit with the larger models :(
I have a question. If I want to make a web app and apps for Android and iOS, can I ask Bolt to do it, and how? Is it possible to take the code it makes and edit or improve it in the same tool?
Remember that these tools are great at giving you code, but you need to understand how to code (it doesn't have to be PhD level). You can use this tool to answer your question. You might need to install other tools for Android/iOS development of course.
those are mobile apps but yeah!
Glory be to God! For days I've been battling to set Bolt up locally, and today it finally worked. My issue now is how to add my token to the .env file on Ubuntu, because .env.example is not listed among the files.
You can create the file yourself and then add the api keys and all. Let me know if this makes sense, okay :)
@vincibits Where will I create the file? Please specify. I'm using Ubuntu.
You'll need to open the project in your code editor (make sure to navigate to "bolt.new-any-LLM") and then create the .env file there. The problem may be that you're not in the correct directory.
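One Ubuntu gotcha worth noting: files whose names start with a dot are hidden by default, so .env.example may exist even if a plain file listing doesn't show it. A sketch (the variable name follows this thread; your example file may contain others):

```shell
# Dotfiles are hidden by default; list them explicitly
ls -a

# If .env.example exists, copy it; otherwise start a fresh .env
cp .env.example .env 2>/dev/null || touch .env

# Append your settings (11434 is Ollama's default port)
echo 'OLLAMA_API_BASE_URL=http://127.0.0.1:11434' >> .env
```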
I get responseBody: '{"error":"model requires more system memory (16.0 GiB) than is available (2.5 GiB)"}'. How do I fix this memory problem?
Make sure you have enough memory locally to be able to run these models.
Thanks bro😍
Welcome 😊
Hi, I got Bolt to run locally and chose an Ollama model, but I didn't get any response.
It sounds like you're having a bit of trouble. Have you checked the configuration settings or the model compatibility? Sometimes a small tweak can make a big difference!
Hello sir, is it normal to have this on Bolt locally?
Ollama API Key: Not set (will still work if set in .env file)
There was an error processing your request: No details were returned
It should
At 11:10, how can I create this module on a Linux machine?
It should be the same process, no?
When I send a message to the model in local Bolt, after applying everything, this message appears and I get no results:
There was an error processing your request: No details were returned
Give it another try and see. Sometimes there are errors that come up.
There was an error processing your request: No details were returned
I got this error; how do I solve it?
This happens usually due to the model you’re using. If the model is small, these issues tend to happen.
@@vincibits I am using an Ollama 7B model; should I change to 14B?
And I get the same error when I use the oTToDev Bolt website too.
@@vincibits Still the same after 2 weeks.
How do I resolve this:
WARN[0000] The "OLLAMA_API_BASE_URL" variable is not set. Defaulting to a blank string.
It means that the Ollama base URL is empty.
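Concretely, the warning should disappear once the variable is defined in the project's .env file. A sketch of the relevant line (11434 is Ollama's default port; adjust if yours differs):

```shell
# In the project's .env file:
OLLAMA_API_BASE_URL=http://127.0.0.1:11434
```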
So when you use it locally, it's free?
Yes
Thanks a lot for sharing! How can I get an Anthropic API key for free? :)
You’re welcome! Go to the Anthropic site, find "API", create an account, pay, and you'll be all set :)
Sir, how do I install Qwen 2.5 locally? Please tell me, as it's not described in the video and I got confused.
You do the same thing you would do to install other LLMs locally with Ollama: ollama pull (name of the model you want).
@@vincibits Where do I run the pull, in a particular folder or anywhere? And before the Ollama install step, when I ran Bolt it threw an error about the API key and something else, but I have an OpenAI API key and I already pasted it in the .env file. Sir, can you tell me why I got the error? I followed every step you described.
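On the "where do I pull" question above: ollama pull can be run from any directory, because Ollama stores model weights in its own data directory (typically ~/.ollama) rather than in your project folder. A sketch:

```shell
# Run from anywhere; the model lands in Ollama's own store, not your project
ollama pull qwen2.5-coder:7b

# Confirm the model is available
ollama list
```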
Hello, I can't do any coding. There is only the chat screen; nothing is being added to the code container, and it doesn't run. Where could the problem be?
I’d check the model you’re using. If it’s too small, then you’ll need to try a larger model.
@vincibits I tried the Gemma2 27B and 2B models. What's your suggestion? I'm also grateful for your answer.
There is no modelfile after cloning. I downloaded qwen2.5-coder:32b but can't run it.
It could be that now you don’t need to create a modelfile anymore. The project is very active so they might have changed that already. Check the readme file from the original repo for changes.
And bro, can you make a step-by-step on how to really install this, as someone who has no coding experience? It's like we have been intentionally counted out on how to do this. There are thousands of us waiting for a video like this to drop. Please!!!!!!!!
Okay, I will make sure to have a full on tutorial on how to set this up!
@@vincibits Thank You🙏🏾🙏🏾
Without a subscription for the API key, I will not be able to use any model. Am I right, sir?
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
You can use ollama models like I explain in the video.
THANK YOU!
You're welcome!
Thank you!
You're welcome!
Thank you sir
You’re welcome!!
I have an error. I tried, but it gives me an error, and I also have the API key.
What error are you getting?
But where do I get an API key? I don't have any API key, so what do I do?
No worries! You can usually get an API key by signing up on the website of the service you're trying to access. Check their documentation for guidance!
I am getting an error: the app all gets created, but then I get an error at this step:
Start Application
npm run dev
And it doesn't run. Where do I run this command, npm run dev?
It also says on my Ollama that I don't have an API key.
Go to the original repo and look at the README file for troubleshooting.
(There was an error processing your request: No details were returned) I got this error while trying to create my website. Any advice, sir?
It sounds like you're encountering a frustrating issue! I recommend double-checking your code and ensuring that all your settings are configured correctly. Sometimes a simple oversight can cause these errors.
It's good, sir, but for me Windsurf works better and can import projects.
Ah, windsurf! The only tool that can literally take you places! Just make sure it doesn’t blow you away while you’re importing projects!
Amazing
Thanks
Error: specified Modelfile wasn't found
ollama create -f modelfiles/quen-coder qwen2.5-coder:7b
@@Rdrudra99 It worked, but my Mac Air M3 can't handle it :)
Did you figure it out?
@@vincibits Yes, thanks!
Can you sell your voice on ElevenLabs? It's good for narrating.
Haha, I may just do that! Not sure if they would buy it :)
everytime i use the qwen2.5-coder:32b i get this error "There was an error processing your request: No details were returned" does that mean my system cant run such model?
Yeah, there’s a known issue where small LLMs don’t have the “bandwidth” to handle the load, hence using a different LLM like Claude 3.5 is a better idea.
Nice.
Thank you! Cheers!
😇😇
Bolt: 30 errors, 50 errors... goodbye tokens, goodbye money.
Sounds like you’ve got a whole Bolt comedy show going on there! “30 errors walk into a bar…” - but seriously, let’s get that fixed!
So basically we need to subscribe for API keys for each model, right?
Only for models that are not Ollama-based.
@@vincibits Without a subscription for the API key, I will not be able to use any model. Am I right, sir?
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
Use Ollama models and you won’t need an API key.
@@vincibits thanks