The OpenDevin team just updated the setup instructions.
You can now use a Makefile, which automates these instructions.
Try the Makefile approach first, and if you're having trouble you can still follow the instructions in my video to get it set up. I suggest opening the Makefile, which is basically an automated version of this video, and you'll see all the instructions broken down. You can follow my video and double-check against the Makefile if you have any issues.
Makefile on GitHub - github.com/OpenDevin/OpenDevin/blob/main/Makefile
If you don't have Make on Windows, you can use Chocolatey to download and run it - chocolatey.org/install
Hi, thanks for the tutorial. With the new setup instructions I get an error when I launch make run:
Running the app...
/usr/bin/env: ‘bash ’: No such file or directory
make: *** [Makefile:36: run] Error 127
I'm using WSL on Windows.
@@giulianomaglieri3865 That is the line in the Makefile that runs the Node front end. You could try running the commands from the Makefile's run section one by one to see if you can isolate the problem, or just set up the frontend yourself by installing Node and running npm install.
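To isolate which step fails, make's dry-run mode prints a target's commands without executing them, so you can copy-paste and run them one at a time. A small self-contained demo; the toy Makefile below stands in for OpenDevin's real one, and the frontend commands are illustrative, not its exact ones:

```shell
# Demo: `make -n` (dry run) prints the commands a target would run
# without executing them. The Makefile here is a stand-in; the
# frontend commands are illustrative, not OpenDevin's exact ones.
tmp=$(mktemp -d)
printf 'run:\n\tcd frontend && npm install\n\tnpm run start\n' > "$tmp/Makefile"
make -n -C "$tmp" run
```

Each printed line can then be run by hand to see which one produces the error.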
Could you make a tutorial for the Make method? I can't get it to work. It says it isn't recognized as a "cmdlet" even though I installed it.
@@captainpumpkinhead1512 When you get a cmdlet issue, make sure you are running in conda or whichever environment you set up. Install Make via Chocolatey.
The video method did not work. Should I use Chocolatey to download Make, and will it work?
Thank you man, I watched another guy's video and yours is 100x simpler and easier to follow. Much appreciated from a fellow novice to all of these applications, this was very simple.
That comment was really appreciated, thank you. Takes ages to make these, I love it when it helps.
@@RobShocks please do more of this, thanks alot
This was great, I was having trouble getting this installed and your video really helped out. You have a new subscriber now! Thanks again.
I love these kind of messages, you made my evening thank you.
Why are all the Open Devin videos from the same date, with nothing after that?
Is the API key you used free, or do we need to buy API access when installing OpenDevin? If so, where do we purchase it from?
Could you pretty please do a comparison video comparing the abilities and features of Aider vs Open Devin vs Devika : )
In the works! Crew AI coming next first
Thanks bro... You are awesome ❤❤
Unfortunately the install procedure is far too complicated. No real explanation from the project side. Seems there's no interest in broad usage :-/
Hi, can you please share your project repo? It seems they have changed it again.
Can you also help me understand how this Makefile approach works?
If you open the Makefile in the root folder, it's basically all the commands and instructions you need for setup. A Makefile automates this process via the make command. You can use the Chocolatey package manager on Windows to install it.
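To see the idea in miniature, here's a toy Makefile whose single target chains a couple of setup steps, so one `make` command runs them all. The target names and echoed steps are made up for illustration, not OpenDevin's actual ones:

```shell
# Toy example: one `make setup` runs every chained step in order.
# Target names and echoed steps are illustrative only.
tmp=$(mktemp -d)
printf 'setup: deps build\n\t@echo "setup complete"\ndeps:\n\t@echo "installing dependencies..."\nbuild:\n\t@echo "building..."\n' > "$tmp/Makefile"
make -C "$tmp" setup
```

The `setup: deps build` line declares prerequisites, so make runs deps and build before setup's own recipe; that's the whole automation trick.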
how does it work for mac?
Is there anything stopping someone from hosting this on a cloud server so it can be accessed via a domain?
Well, you'd need to add some sort of login system and to feed a full install to each user... that could prove, well, bulky. But if it's just for building an app on that server, then yeah, sure. On AWS or Google or Azure or DigitalOcean it should work easily.
@@gremlinsaregold8890 I am currently deploying it now. I will report back how things are going in a few days.
@@gremlinsaregold8890 I have it on a server now
Can you do a comparison with Devika and SWE, please?
I've set up all of them, so I might run a side-by-side. Right now my quick take: SWE looks promising in its logic, Devika in its interface, and OpenDevin I'm still finding the most buggy.
Would it be possible to skip the API key configuration step? I have a ton of local LLMs and in fact no access to an OpenAI API key.
Try this github.com/OpenDevin/OpenDevin/pull/615
subscribed Rob... ya madzer ya!
Really appreciate that!! Ya Madzer yerself
What are the hardware requirements to run devin locally?
❤❤❤that’s a great
Thanks!
Need a guide on the Makefile method, keep running into errors :c
Any chance you could help with setting up a local LLM using Ollama? I have tried but seem to be failing on the last part.
What kind of errors are you getting?
Just tried it now, seems like it goes off on a tangent if it doesn't understand your initial message. Like that one ranty software engineer most people work with who doesn't have the social capacity to just shut the f up :D
ahhh working perfectly so...
Hi,
can I use it to optimize my database?
I'm running into an issue during make build. Ran into the same while running your installation guide:
Getting requirements to build wheel: finished with status 'error'
and then later it breaks on:
RuntimeError: uvloop does not support Windows at the moment
How did you overcome this?
Did you use Conda for environment management? Do you have wsl?
@@RobShocks No, I actually gave up, learned Ubuntu, and installed the whole thing there. Cost me 50 GB of hard drive, but it works. Thanks!
@@dociler Ah well, probably better off, it's good to have Ubuntu.
Can you please please make a video of OpenDevin with a local LLM from LM Studio?
That's not possible. OpenDevin is not an LLM.
@@hule333 The question was getting OpenDevin to work with a local LLM, using LM Studio in place of OpenAI.
@@hule333 No, I meant integrating LM Studio with OpenDevin, just like it has functionality with Ollama and LiteLLM.
@@saimnadeem5882 ah
In VS Code I can't see the conda interpreter.
It doesn't work at all. Is there anyone who has run the program?
I find it ironic that programmers are helping build the thing that will eliminate / replace programmers.
I think we've been doing that since the first computer
That’s literally the whole point of computers!
@@thelostandunfounds 🤓
We have been automating everything away since the first computer.
The point is not to have to code at all.
That's because we programmers are lazy 🦥
Only three days since he made this and they've changed the install procedure again, since they now have a container and Makefile. Okay, okay, it's new, I get that... but jesus lads, let us at least catch our breath, okay?
Slow down lads!!! Haha, in fairness it's a tiny team going fast, but having a simple install would make it accessible to more contributors, so it's worth streamlining, and it's not there yet.
@@RobShocks I installed it on a Vultr instance... having a little issue setting it up with OpenRouter and LiteLLM, to be honest... I'll get back to it. But I think a good use case is having it actually installed on the dev server where an app is being built (not on the production server, obviously), as a helping tool, not the actual solution, cos this thing, and I'm sure Devin itself, are going to be flaky as fuck for the next year!
I'm facing the issue "OPENAI Connection error", does anybody know what I did wrong and how I can fix it?
PS: The error comes when I send Devin a message.
Did you check your OpenAI credits? The error can be quite vague. Also try creating a new key and trying again.
@RobShocks Am I supposed to buy something somewhere? Because I followed the exact steps from your video and it gave me a secret key, but am I supposed to buy credits from OpenAI?
@@ambicapradhan1064 You need a subscription or credits to use the OpenAI API, it's not free I'm afraid. If you want, you can use a free local model with Ollama.
@@RobShocks Alright Thank you so much
I am getting "litellm module not found" when running the uvicorn command. I have litellm installed in the environment and verified that from the environment itself. Anyone else having the same issue? I am in a WSL2 env.
Did not have this error, its worth checking the Discord and Discussion page on Github for anyone with similar problem
python -m pip install litellm
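A "module not found" despite a successful install usually means pip and uvicorn are tied to different interpreters; `python -m pip` pins the install to the interpreter you invoke. A quick way to check (assuming `python3` is on your PATH, as on stock WSL2 Ubuntu):

```shell
# Show which interpreter `python3` actually is, and which pip it uses.
# If uvicorn was installed against a different interpreter, a plain
# `pip install` can land packages in the wrong site-packages.
python3 -c "import sys; print(sys.executable)"
python3 -m pip --version
```

If the two paths disagree with where uvicorn lives, reinstall litellm with `python3 -m pip install litellm` from the same environment you launch uvicorn in.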
Where is the requirements.txt file? I didn't find it, can someone help me?
me too
They changed it again.
Time for a new video! hahaha
@@redchief3905 😅
Can I use this with Gemini API key ?
I don't think so, but you can use a lot of other LLMs by changing the config.toml file in the root and following docs.litellm.ai/ - instructions are in the readme.
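For illustration only, a config.toml swap might look something like this; the key names below are assumptions based on common setups, so check the OpenDevin readme and docs.litellm.ai for the actual ones:

```toml
# Hypothetical sketch - key names are assumptions, check the OpenDevin
# readme and docs.litellm.ai for the real ones.
LLM_MODEL = "ollama/llama2"               # any model string LiteLLM accepts
LLM_API_KEY = "ollama"                    # placeholder key for local models
LLM_BASE_URL = "http://localhost:11434"   # e.g. a local Ollama endpoint
```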
I can't find the docker image on GitHub, can anyone help me?
The docker image won't be on the repo. If you look in the makefile however you will see how the image is built and created on your machine for you. What is the specific problem?
The Makefile gives an error.
Can I add a Claude API key instead of OpenAI keys?
Not sure about Claude, but you can certainly add local LLMs from Ollama: github.com/OpenDevin/OpenDevin/commit/08a2dfb01af1aec6743f5e4c23507d63980726c0
It doesn't work.