This project is really taking off - thank you so much everyone for your suggestions, contributions, and support! ❤
Couple of things:
1. If you make a contribution to this fork of Bolt.new, I WILL feature your change in a video!
2. I am certainly planning on opening up a PR to the original Bolt.new repo at some point! First, though, I want to make this more mature as a community: fleshing out features, making it possible to set the API keys somewhere in the frontend, etc., so that it can be added into the main Bolt.new repo seamlessly.
3. If you are still having issues with the smaller models not opening up the code container on the right side, I address that 12 minutes into the video! This is something I am working on that will be a huge improvement for local LLMs!
This is how open source should be - crediting and even citing the original repo in the video description... your integrity is touching my heart. Thank you.
You are so welcome!
Damn, bro, this is a total time-saver! What a time to be alive, bless you!
Thank you! It sure is a time to be alive 🎉
@@ColeMedin Can you add an option for users to begin their development process from an existing project as a starting point, rather than always starting from scratch?
@@dxvidfernxndez @ColeMedin that could be amazing!!! 😮
@DavidFernandez-zg1cr and @XerxesD This is one of the top priority features to be added that we are looking into implementing very soon!
@@dxvidfernxndez this is the key feature
Kudos for such a fantastic initiative - thank you so much! Loading a local project, or one from GitHub, will make this fork THE one to rule them all.
Thank you very much! That is one of the features that I'm planning on implementing very soon because I totally agree!
This is fucking fire, bro - thank you, keep it up! I built a whole app in 12 hours that I've been trying to build for 6 years. This is a game changer: no coding experience, just constantly updating and fixing the errors via natural language prompts.
Did you build the app with this new fork version?
Thank you Chris!! And that's amazing! What did you build?
🧢
What is your app?
LM Studio and MSTY (my preferred GUI for LLMs) both use an OpenAI compatible API. Same as Ollama. I'd be willing to bet just pointing the Ollama provider at LM Studio's URL has a decent chance of working.
I believe you are right! And I know for sure that setting it up the way I set up Groq would work, since that essentially just changes the OpenAI baseURL like you are saying!
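For anyone who wants to experiment, here is a minimal sketch of what that pointing looks like, assuming the Vercel AI SDK's @ai-sdk/openai package (Bolt.new builds on the Vercel AI SDK, though the exact wiring in the fork may differ) - the ports are each tool's defaults and the model name is a placeholder:

```typescript
// Sketch: pointing an OpenAI-compatible provider at a local server.
// Assumes @ai-sdk/openai; ports are LM Studio's and Ollama's defaults.
import { createOpenAI } from '@ai-sdk/openai';

// LM Studio serves an OpenAI-compatible API on port 1234 by default.
const lmstudio = createOpenAI({
  baseURL: 'http://localhost:1234/v1',
  apiKey: 'lm-studio', // local servers typically accept any placeholder key
});

// Ollama exposes the same API shape on port 11434 under /v1.
const ollama = createOpenAI({
  baseURL: 'http://localhost:11434/v1',
  apiKey: 'ollama',
});

// Placeholder model id - use whatever model the local server has loaded.
const model = lmstudio('local-model');
```

Groq works the same way, just with its hosted endpoint and a real API key.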
You and the community did an awesome job!! 🙌🙌
On behalf of all of us, thank you! 😁
Thank you all so much for creating this. It's truly amazing.
Wow!! Saw the first video some days ago, and now I get to this one 4 days after publish, and the done ✅ list is so much longer than the one shown in the video. Congrats to you all!! Amazing!! Can't wait to get my hands on it.
Indeed, it's crazy how much this project has grown recently! Thanks man!
Love to see this continuing! Could you please do more videos of use cases? I'd love to see it try web design or something, just to see the full process with Ollama. I'm still having a lot of trouble getting the LLM to move to the code container and not just reply like a regular chatbot.
Thank you! Yes - I will be doing content around specific use cases with this in the near future!
Very exciting! Glad to see the community taking off. Keep up the good work!
Thank you very much Jared!! I sure will!
thank you. More local use and open utilities = better.
You bet! I absolutely agree!!
Hello Cole. Could you make a video on how to set up, build, and run the app on Windows? Thank you for a great video!
I certainly can! Or maybe even a short!
@@ColeMedin Same for a Mac please. Also, I pay for Claude and I'm not sure how to use that subscription with this (or if that's even necessary).
His README file on GitHub is super well written if you follow it closely :) Even a non-coding dummy like me could figure it out :)
Yes, and please a Mac too! I keep getting an error.
@@TheDandonian I agree, I would love for you to do a macOS setup video please... Great job. You got me into coding. I'm so excited.
Dude comes in with the gratitude, workarounds, fixes, and a project plan.
Appreciate you pushing this out, well done.
Thank you!
This is amazing!
Thank you Cole (and others) for your hard work. I'm so grateful for living on a planet where smart people like you provide dumb people like me with things like this!
One huge thing I would love to see implemented is something to prevent Bolt from re-writing code when you add a new function. There's a lot of one step forward, two steps back in Bolt, where when you have added functions A and B and are about to add function C... it removes the work you did on function A. Very frustrating!
It's my pleasure! Thank you for the kind words!
I agree that the rewrites are pretty frustrating! This would be a pretty fundamental change to how Bolt.new works, but a few others have raised this concern as well, so I am certainly keen on fixing it and will add it to the list of improvements!
Love seeing this! One thing I really appreciate is that you're not just keeping it to yourself, but you're allowing others to join in. That’s huge! 🤟
Thanks for not being like other YouTubers, man. So many in the AI space have decided to monetize their subscribers. I get that they need to make money to support their content, but I'd much rather 'buy you a coffee' than spend money on some scammy course about making a billion-dollar business, like certain other YouTubers with around 120,000 subscribers are trying to sell.
I do agree with you, and you are referring to David.
@@shay5338 Yes and may his channel RIP.
Thank you so much! My goal is to be much more giving, collaborative, and value-packed than the average AI content creator, so I appreciate you calling that out a lot!
I certainly don't blame the other YouTubers for what they do, and I of course will have to monetize in some ways myself, but I'm working hard to do that in a way that doesn't involve wasting your time, selling scammy courses, or anything like that!
Awesome progress. I think the cherry on top, once things settle, would be to get it into a Docker container... That would be awesome.
Thank you and yes that is definitely the plan!
I can just say kudos to you mate, that's amazing work you've done there, and also a shoutout to the community for those amazing suggestions. I'd like to ask a question if you don't mind: if we are already working on a project, is there a way to upload it to your fork to review the code and fix issues?
Thank you so much! And to your question - not yet, but I know someone is working on that as we speak!
Great, you can always surprise us! I think you're a very thoughtful person. I'll learn from you!
Thank you for the kind words - I appreciate it a lot!
Thanks man ❤ Much-needed features, and thanks for your contribution ❤
1. Much needed: opening existing projects
2. Agent support like Cline, if possible
You are so welcome! Both of your suggestions are on the list and I can't wait to get them added!
With all the negative news about open source recently in the press (i.e. WordPress 🤣), and OpenAI no longer being open, you are restoring the heart of open source. Well done!!!
That's the goal - thank you!! ❤
Getting image upload in will be key to quickly iterating on POCs. On the Bolt paid plan I can dump in a screenshot of an app I like or one that's similar, and boom - instantly I have the design, buttons, etc. in place. Then I just re-prompt with Bolt to make the buttons do the things I want.
Image upload is more powerful than people think at times. +1 vote for this.
Either way amazing job so far on the fork!! Thank you
Thank you very much!! And I agree that image uploading should be one of the top priorities for this fork!
Will spread this video around to boost its performance!
Wow I appreciate it a ton - thank you!!
Thanks for the hard work!
I have a suggestion: please add proper instructions for it to first run a command to initialize a project, and then plan and edit files based on those plans.
This can make it a lot more robust 😊
You bet!! Could you expand a bit more on what you are looking for here? I have instructions for running things in the README, so I am curious what kind of follow-up you are keen on.
@@ColeMedin I am talking about a feature like Cursor rules, web commands like @web, and a special folder (e.g. boltdocs) where it creates plans as markdown files: current task, roadmap, plans, improvements, etc.
And it should always scaffold with terminal commands like npx create-react-app (or the SvelteKit/Svelte equivalent), and then edit the necessary files according to the plans.
I love this - thanks for expanding on your suggestion! Let me think about how I could make this possible! I'll add it to the list as well
Love this video! ❤️ I haven't gone through the project yet but I'll definitely try and make a contribution! ✨
Awesome, thank you so much! I seriously appreciate any contributions!
Being able to load local projects on it would be insane!
I agree! It's on the list of improvements and it's one of the highest priority ones!
@@ColeMedin I have been giving this some more thought. Repro Prompt (an application) has a nice feature: being able to select files to add to the LLM context. It might be good to have a check-in/check-out feature. Files could be checked in for improvement/refactoring and then checked out, to keep the context focused and reduce token usage. Eventually, with larger models, the whole project could be checked in. You could also make use of .gitignore to keep files such as node_modules out, and it's probably best to exclude static files such as images. Maybe this is biased towards my own workflow, but a way to get files into the model without copying and pasting the code would definitely be very much appreciated.
Love your thoughts here @DarrenSaunders-l6l, thank you! Great suggestions.
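A rough sketch of what that check-in filtering could look like, assuming the "ignore" npm package for .gitignore-style matching (not necessarily what the fork would use) - the helper name and paths here are purely illustrative, and POSIX-style paths are assumed:

```typescript
// Sketch: collect files for the LLM context, respecting .gitignore rules.
// Assumes the "ignore" npm package; the helper name is hypothetical.
import fs from 'node:fs';
import path from 'node:path';
import ignore from 'ignore';

function collectCheckedInFiles(projectRoot: string): string[] {
  // Always exclude dependencies and static assets, as suggested above.
  const ig = ignore().add(['node_modules', '*.png', '*.jpg', '*.ico']);
  const gitignorePath = path.join(projectRoot, '.gitignore');
  if (fs.existsSync(gitignorePath)) {
    ig.add(fs.readFileSync(gitignorePath, 'utf8'));
  }

  const files: string[] = [];
  const walk = (dir: string) => {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
      const rel = path.relative(projectRoot, path.join(dir, entry.name));
      if (ig.ignores(rel)) continue; // "checked out" / ignored files stay out
      if (entry.isDirectory()) walk(path.join(dir, entry.name));
      else files.push(rel);
    }
  };
  walk(projectRoot);
  return files;
}
```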
Awesome project! It would be nice if you could do a quick "How to setup and install" video for us newbies 😁
Thank you! Yes I will be making a tutorial for this soon!
Great content and a great way to engage the open source community!
Potential huge improvement: one challenge with these tools (bolt.new, Cursor, v0, etc.) is that they start hallucinating and breaking existing code (or creating new errors) when generating new code, especially when the context becomes large (the needle-in-the-haystack problem). I think I have a solution for how to mitigate this problem. How would you like me to proceed in order to see if it could be part of this project?
Thank you very much!
I appreciate you being willing to contribute to fix this problem. I agree it's a huge issue for really any AI coding assistant! The fact that you have a solution is incredible and you have me so curious haha
You can absolutely feel free to make a pull request with your solution! Or if you want to contribute in any other way, please let me know! Regardless - I would make an entire video on your contribution if you tackle this!
All salute and thanks to you Mr. Cole. Really so helpful to the growing AI community ;)🥰👍
I'm glad you think so - it's been my pleasure to start this up for all of us!
Amazing work! I wonder if it could be integrated well with VS Code?
Thank you! At this point it can't be, but once we add the ability to load in local projects that would be a logical next step!
Great work! Just saw a video about Fragment claiming it's better than Bolt. They have the functionality your fork has of being able to select other models, so the Medin fork is probably the best already 😊 Another thing from Fragment that could be an improvement for the Medin fork is the ability to select a persona for a session - almost like scaffolding the session, but without doing it in the prompting.
Thank you! I've been checking out Fragment as well and I appreciate your thoughts here! What kinds of personas would you see being useful for this?
@@ColeMedin I've tried to build some cross-platform apps, i.e. both for mobile and web, and my experience is that the suggested tech stack for the front- and back-end differs a lot between the different models chosen. But it could be that I should be better at prompting, of course.
Interesting! I bet it's mostly because there are just so many good platforms out there for any part of the stack. But if you prompt it very specifically with your use case and needs, I'm sure the models would suggest more similar things.
I see that most AIs are missing out on automatically checking whether the links in the application work as intended. It would be a huge feature if it checked the functionality after the code is completed. If a function isn't working, the LLM should revise the code and make it work. It would have to use the application like a user would and feed back to the LLM anything that needs a fix or improvement: buttons, content, links, functions (like search), or anything else.
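This would essentially be a verification loop. As a toy illustration of the link-checking half, here is a hedged sketch assuming a Node 18+ runtime with a global fetch - the helper is hypothetical, not something in the fork:

```typescript
// Sketch: naive post-generation link check. Broken URLs get collected
// so they can be fed back to the LLM as a "please fix these" prompt.
async function findBrokenLinks(urls: string[]): Promise<string[]> {
  const broken: string[] = [];
  for (const url of urls) {
    try {
      const res = await fetch(url, { method: 'HEAD' });
      if (!res.ok) broken.push(url); // 4xx/5xx -> needs a fix
    } catch {
      broken.push(url); // network error or unreachable host
    }
  }
  return broken;
}
```

Buttons, search, and other interactions would need real browser automation on top of this, but the feedback-loop shape is the same.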
I can see this exploding over the next few months with all the features being added. Great work
Another feature that would be HUGE is the ability to deploy to Netlify
Thank you very much! Fingers crossed it continues to grow!
Fantastic suggestion too - I will add this to the list!
Hey man, I've been watching all your AI videos, especially the ones on Bolt and how you took it offline to run locally. It's amazing. However, as a newbie, the installation and the whole process to get your forked Bolt running locally seems intimidating. Could you make (or rather remake) a simpler tutorial on this whole process of setting up Bolt on a fresh system with no prerequisites installed? Like everything from scratch, even getting the LLMs and the APIs to access them?
Thank you very much! Yes once we flesh out things with Docker more I am planning on making another super simple installation guide!
This is sooo awesome!!... Thank you so much. This will sound stupid, but does this use any credits on this Bolt.new version you guys are working on? 😂😂
It does not!
The ability for it to have other projects' context windows, so when you start a new chat it can still have memory of the other chats and projects you did, would be a game changer.
Also, being able to talk directly to the AI rather than typing would be amazing!!
Fantastic suggestions - thank you!
What an amazing project. I am truly excited to test it out.
Thank you very much!
Thank you so much for your work!😊
You are so welcome!
Load local projects!! I have been trying to do something like this! You can't do that with the paid version either. Yes, I am subscribed to the paid version as well.
Yes this is on the list - I would absolutely love this too!!
+1 for this feature; maybe the Git integration can be done in both directions.
Loading local projects would make it the best tool in the category 💪
Yes we are looking to get this added really soon!
Very impressive mate, well done. I'm going to try this. Thank you very much.
Thank you very much! I hope it works great for you! :)
Good stuff 🔥 excited to see where it goes!
Thank you very much, I am as well! 😃
new subscriber bro, you're doing amazing work
Hello Cole,
Perplexity integration is missing from that list.
Great to see all the progress in the making. 🚀
Yes you are right - I will add that now!
Great work dude. I will be contributing to it
Awesome, thank you!
You can do one thing: instead of leaving future updates open for everyone, distribute them to different people to build. You can run a poll and take a vote on who wants to develop which feature. The individual has to provide a plan for tackling the problem, and then you decide who will develop which feature. Otherwise, multiple people end up doing the same thing. For example, I have also implemented the Gemini integration.
I appreciate this suggestion a lot! The only problem is that it would take a good amount of time and effort to organize. So I might need to rely on people checking the pull requests and the checked-off features to know what is left to be implemented. But I love your suggestion, so I will be thinking about how to do that efficiently!
I agree with the OP to an extent. There is another repo that is also a fork, and they went to the other extreme with around 480 open issues. But at least we'd have a way to communicate with each other.
Discord with polling, maybe. I like that better than YouTube comments and GitHub issues.
Yes, I will be making a Discourse community soon for all of us to communicate effectively about everything related to this project!!
Can you please, please do an in-depth tutorial on how to install and set everything up? Thank you!
I will be doing this soon!
@@ColeMedin thanks
You bet! :D
Amazing! Thanks for your work
You bet! Thank you!
Nice work! Awesome recruiting a team. Are you accepting external pull requests?
Thanks and yes I am! This is all community based - I haven't recruited anyone specifically!
I see you are and it looks great! This will take over.
@@hope42 Thank you! 😁
As a Java solution architect with over 17 years of experience, I have been testing Bolt for the past couple of days, and it's a little disappointing: it keeps introducing regressions. So it's probably paramount to introduce a merge solution in the UI that validates newly created code against earlier created unit tests. One of the first improvements has to be creating an FR (functional requirements) document and an NFR (non-functional requirements) document. The next thing would be to generate a high-level architecture and a set of features -> workflows -> user stories -> unit tests. Splitting the app by feature will partially help address the LLM's limited memory.
I'm with you here and I appreciate your thoughts on this a lot! I haven't seen a single AI coding assistant out there that goes into this level of detail for planning and regression testing, but that would certainly be a fantastic next step for this fork and really any AI coding assistant in general!
This is really great, can't wait to try it out
Absolutely love it man! Is there a chance that you could do a guide on how to self-host this fork?
Thank you! I am planning on containerizing this to make it easier to run yourself, and then yes I will make a video on how to self-host it!
@@ColeMedin Amazing, thanks for the reply, keep up the good work :D
You bet! Thank you!
Another suggestion: implement a mixture-of-agents mechanism to get better quality responses even from small models.
Love the suggestion! Agents in general are one of the things on the list of improvements, and mixture of agents is certainly a part of that!
Always love your content - it has a lot of value, too.
Thanks for keeping everything open source!
Thank you so much for the support Marc, it means the world to me! You bet!!
12:50 When an LLM thinks, it uses fewer tokens than user input queries or LLM inferences. In other words, the reason it helps is that the LLM uses fewer tokens to process the request, because it's already in its context.
Nice yeah that makes a ton of sense!!
Ooh yeah, been using it and it's awesome - also contributed to it!
Awesome!! Thank you!
Imagine creating an agent that can automatically create fine-tuned agents based on the URL of the documentation for a particular library, framework, or API... So every time you want to use a new library, it creates a specialized sub-agent that goes away, scrapes all the documentation and examples for the library, and creates the fine-tuned agent, and then the master agent has a specialist in that particular library or framework available to it. Imagine all that happening automatically in the backend... 10-20 specialist agents fine-tuned on all the working parts of your codebase. I bet the results would be infinitely better!
Wow I love your thoughts here! Yes - this would be INCREDIBLE!
Please add Docker support directly.
Fantastic suggestion - thank you! I will add it to the list.
Docker installation not working...
That's great work, thanks a lot!
Looking forward to the improvements!
Just subscribed to your channel and loving what you've done so far with this project. These are 3 things that would turn the tide when it comes to code generation: image to code, running agents, and importing projects.
Thank you so much! All three of your suggestions are on the list in my repo, and I agree they would be a game changer!!
@@ColeMedin If the agents could run autonomously until the goal is achieved - if that could be done, wow. Thanks for putting in the time and effort to create this upgrade.
Yes that would be incredible!! You bet!
Does Bolt use shadcn/ui?
Maybe use it (optionally) for better UI?
Or maybe we can just ask for it in our prompt, without adding it to the system prompt.
Good question! Bolt.new can use a few different component libraries from my experience, and ShadCN is one of them! So yeah if you ask it to use ShadCN for the components it will 😎
I love this concept. I tried to download it onto my system last night, but for some reason it's not working. Is there a step-by-step tutorial on how to get the system running? I may be missing a step - I am a novice.
Thank you and I appreciate you trying it out! I will be making a guide on how to run it yourself soon here!
@@ColeMedin thank you
Great work! Just a question
Did you contact the creators of Bolt? Maybe you shouldn't fork it but join their development - together you could truly build something even more awesome.
Thank you and good question! I have not at this point, but I would certainly consider it! I am also probably going to be making some pull requests into the main open source repo once we build this out more.
@@ColeMedin Amazing. I think, just like you said, the power of open source is building together - their team obviously has some marketing and design skills :)
Yeah for sure!!
This is awesome and I'd really like to get involved! Any tips on how someone who is still learning Python could contribute? Perhaps we could write a guide that explains how to contribute for newbies?
Thank you! I'd love to have you on board! I'll be creating a video this month on how to contribute so stay tuned for that :D
I really think local LLMs will be the next step in AI-enabled software development. They can lower costs and improve privacy and security.
You are so right!!
Freakin love it bro.
I would monetize it though. Just a little.
Just enough so you can hire help for bugs, support, updates, and new features. Even if it's $5-10 a month or something.
Thank you very much! I appreciate the suggestion! The plan right now is to keep this free but build a community around it that I will monetize in different ways. The goal with that is to make it really easily accessible but still profitable to hire help as you said!
@@ColeMedin Nice! Well, I am not a code-type person, but if you're ever looking for a recorded case study, let me know.
I've got a cologne tracker app I've been wanting to make... could be good content.
Yeah for sure! Thanks for offering that man!
Please make a full step-by-step tutorial on how to set it up and run it, and also how to add an API, especially for Ollama.
I have this on my radar to make a video for!
Awesome work
Thank you!!
This is awesome! Do you have a community, like a Discord?
I have tried using Gemini Flash; it does produce a lot of errors, I think since it has a smaller context? But it works like Bolt - great job!
Thank you! I'm working on a Discourse community right now that I'll be releasing soon!
Viva the community. Open source forever ♾️♾️♾️♾️♾️♾️♾️♾️♾️
Phidata for agents. I'm not a seasoned dev, but I've been playing with it today and it's bloody great. Much simpler than LangChain etc.
I hadn't heard of Phidata until now, but it looks amazing! I will definitely check it out further!
@ColeMedin It's so damn intuitive. Love it. I hadn't heard of it either until this week.
Love to hear it! I'm excited to check it out!
When you see your pull request in the YouTube video... I am more than happy.
I'm glad!! Thank you so much for contributing to this! ❤
YOUR FORK IS NOW MINE
WAHHHAHAHAHAMUHUHUHAHAHAHAH
but seriously, great job with this!
I'm still torn between a bunch of different projects, something similar to this being one of them (a video editing assistant with AI suggestions & open source scraping).
Something I consider to be very worthwhile is in-app access to a scraping tool, combined with some suggestions - or maybe a suggestion tool that uses scatterplot comparisons to drive relatable suggestions.
I guess what I'm saying here is take what you like from my comments - you have your own shit. I'd love to contribute here and there, and when I start finishing some other stuff I'll have more time to contribute. Also consider the framework of application-specific open source repositories, or adding to/recompiling existing repos…
Interesting Stuff!
I'll follow along from now on. I have to carry a little AI dev agenda around with me or something… maybe make one using some Walmart stationery, or have one printed off somewhere… Walmart.
Brought to you in part by Walmart!
Haha thank you very much! I appreciate your thoughts here! Sounds like you have a lot going on and if you get the chance to contribute to this project I would appreciate it a ton!
Great video bro!!
Thanks man!
Adding LangGraph would make this a killer app.
YES I agree!! Having agents running in the background with LangGraph to make the code generation better would be sweet and it's one of the tools I'm considering for implementing agents for this.
This is awesome man, great work
Thanks a ton!
Great work! Gained a sub :) ... File upload would be a great next important addition.
Thank you very much - you bet! File upload is something we are looking to do very soon!
This is really helpful and great work. It is highly appreciated - thank you! I would like to understand why, when using Ollama, it doesn't show the code file structure and preview.
Thank you very much! Yeah, for some of the weaker local LLMs I have had this happen to me as well. This is something we are working on by changing up the prompting for Bolt.new!
Keep up the great content - I'm a fan of yours!
Will do - thank you very much! :D
Yep, this project is killin' it :) Thanks to the contributors. There are still some errors: with the same prompt, it can create files in the WebContainer, but it has no preview. It's working smoothly with Ollama otherwise.
Thanks man! Could you clarify what the error is? Glad Ollama is working smoothly for you though!
Hey Cole, this is awesome. Can you please add, or suggest how this could be improved or trained for, blockchain development, e.g. Rust for Solana? Not many code generators currently focus on blockchain - thanks!
This is a super interesting suggestion - thank you! I actually did a TON of Blockchain development in 2022 so I agree this would be dope. A bit outside of the current improvements being planned but it would be so cool to do in the future!
The main thing that would need to change is the prompting for Bolt.new to generate Rust code (or Solidity for other chains). And the WebContainer right now isn't able to run anything besides Node, so a new type of container (what opens on the right side of Bolt) would have to be implemented.
The fact that it rewrites all the code every time kinda sucks. It also gets stuck on errors it can't fix itself, and the editing experience kinda sucks. It will also just start implementing changes merely from asking the LLM a question. It kinda sucks in these aspects; it would be nice to fix them. For this reason, Cursor and Replit AI agents have been a lot less frustrating.
I have both Replit and Bolt... and I'm trying to create a slightly more complex app. Replit is too slow and often doesn't do what I ask; Bolt is more responsive, but the problems you highlighted appear as the app becomes more complex - they start to get tedious and repetitive, and I have to ask Claude or ChatGPT for help to get it working.
Cursor I haven't tried yet.
@@rouges666 Cursor is a lot more flexible, but it won't just go and build the entire app out for you in one go like Replit and Bolt. But once you get a minimal version of the app going, it allows a lot more flexibility and doesn't just overwrite all your code with each change. I have the paid version of Replit and hit the limits each day; I haven't hit any limits yet on Cursor, and I'm on the free plan. Cursor will force you to learn more as well, which I think is better in the long term. Today with Replit I just couldn't get it to fix its own errors; I had to roll back, then hit usage limits.
I totally agree with you that the rewrites are a downside to using Bolt.new! As @rouges666 mentioned, though, the Bolt.new experience is often better than Replit, and Cursor, as you mentioned, won't just build out an entire app in one go.
So in my mind, there is a time and place for both Bolt.new (and potentially Replit, it's still awesome) and Cursor. Bolt.new when you want to build something from scratch without coding in an IDE, and then Cursor to take your application further.
i'll come back in a month to see the new changes 👍
Sounds great! You'll see a lot more in a month for sure! 😃
Nice job to you and the community! Any way to import an existing project? Also, I've tried stopping the response, opening the web container, and re-running the prompt with Llama 3.2, but it won't create any code files. Any small free models you'd suggest? Also, is there any way to save? I've had issues where refreshing the page removes some changes from the project.
Thank you very much! This is a top priority feature that will be added soon!
We are working on making it work better for smaller models right now. In the meantime, I would try some of the larger models offered by OpenRouter! They aren't free but they are super affordable.
Can I load up a Figma file and have Bolt code the whole thing?
Not yet but that is one of the goals!
I've been trying for months to do really relevant things with small LLMs like Llama 3.1 8B, Llama 3.2, Mistral, and Gemma 2. I've tried many possible approaches - LangChain JS, Python with LangChain, Node.js with and without frameworks, Python with and without frameworks - and none of them can do anything relevant and useful without using GPT or another paid model.
Does it make more sense in this incredible project (Bolt) to use fine-tuned models instead of a "super" prompt, at least for local models? 🤔
I've been running into the same thing honestly! So many times smaller models fail to even create single functions that a model like Claude 3.5 Sonnet or o1 can knock out of the park.
But yes, this fork of Bolt.new is a step forward toward being able to do these things with smaller models. Still work to be done for sure! A fine-tuned local LLM is definitely a great approach!
Bless you brother
Could you please create a video tutorial on how to set up this version of the Bolt app on Windows? I'm new to this field and have tried multiple times using Cursor AI but wasn't successful.
I would really appreciate your help! Thanks
Yes I will be making a tutorial on this soon!
Suggestion: Add the option for the model to automatically start debugging if the produced code throws an error.
Love this suggestion - thank you! I believe someone made a pull request for this recently, I will take a look!
Thank you for this amazing project! I have a question:
How do I import files into the app to start editing or customizing?
Thank you very much! And this feature isn't implemented yet but it's at the top of the list for improvements to make to the project!
Can we have a simple GUI to enter the API keys pls 🙏
This is a great suggestion! It would require a lot of setup in the backend but it would make things a lot easier! I've got it added to the list!
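As a rough idea of the frontend half, here is a minimal sketch assuming a React component (the fork's UI is React-based) and browser localStorage - the component, prop, and storage-key names are purely illustrative:

```tsx
// Sketch: a tiny per-provider API-key field persisted in localStorage.
// Names are hypothetical; the key would still need to be sent along with
// each chat request so the backend can use it instead of an env variable.
import { useState } from 'react';

export function ApiKeyInput({ provider }: { provider: string }) {
  const storageKey = `apiKey:${provider}`;
  const [key, setKey] = useState(
    () => localStorage.getItem(storageKey) ?? '',
  );

  return (
    <input
      type="password"
      placeholder={`${provider} API key`}
      value={key}
      onChange={(e) => {
        setKey(e.target.value);
        localStorage.setItem(storageKey, e.target.value); // persist locally
      }}
    />
  );
}
```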
Love your content bro, thanks a lot. Can you add an option to download and install a model we select from the list? Also, could you show all the free available models that can be downloaded and installed automatically?
Love these suggestions, thank you!
Can you add a feature to provide any documentation link (OpenAI, CrewAI, AutoGen, LangChain, etc.), so that when we want to code using these frameworks, the LLM will go to that documentation and create correct code using that knowledge?
This is a fantastic suggestion - thank you for making it! It might be tough to scrape an entire documentation site for a project, but let me think about how I could do that!
This is awesome!
Thank you :D
thank you! for the video and the effort
You are so welcome!
You did an amazing job. Congratulations. Can I use OPENAI_API_KEY without ANTHROPIC_API_KEY?
Thank you very much! Yes you can!
Great work.....👏👏👏
Thank you very much!
The ability to send files (photos, PDFs…) to the model.