👉🏻Learn more about data science and AI: bit.ly/data-alchemy
👉🏻Kick-start your freelance career in data: www.datalumina.io/data-freelancer
since when do you train your arms
My man, you have been my go-to source for all things AI lately. You catch all the good stuff right away and explain it beautifully. A+ content.
That's awesome, thanks Dan!! 🙏🏻
I support this statement 👍🏼
I believe that learning coding and using GPT in tandem is very nice; it's basically the future. GPT can sometimes help fix errors that you created, or sometimes you can fix an error that GPT made (like what you did in the video), and it still saves you minutes or hours compared to writing it yourself from scratch. Having the knowledge to create the code is still very important even if you have ChatGPT create it. It's like trying to instruct someone how to do a job without knowing how to do the job yourself. Sure, GPT can guess what you want, but sometimes it will guess wrong, and I have experienced that many times. You have to be clear about what you want it to do, and it will just fill it in.
One suggestion: please use bigger fonts in VS Code, like presentation mode. It's impossible to watch from a mobile phone. Thank you for your work.
Do you also code on your phone? Watch from PC.
Lmaooo wow
It shouldn't be impossible? Have you checked your resolution is high enough?
@@lmnts556 Have you considered that people might watch videos while travelling?
@@smitsanghvi1827 Then save it for when you get home and watch another video.
Man... where have you been all my life? You came up in my Google News feed randomly and I'm so glad you did!!!
I was right here man - glad I can help ;)
@Mr_DaveEbbelaar hey!
Imagine using Midjourney-style prompts to scaffold an app, like "/framework" "/data" "/ui" "/host", to build and launch it 😮
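Just to make the idea concrete, here is a tiny untested sketch; the slash commands and the resulting prompt format are completely made up:

def scaffold_prompt(spec: str) -> str:
    # "/framework react /data postgres /ui tailwind /host fly.io" -> one build prompt
    parts = {}
    for chunk in spec.lstrip("/").split(" /"):
        key, _, value = chunk.strip().partition(" ")
        parts[key] = value
    return (
        f"Build a {parts.get('framework', 'web')} app with a {parts.get('data', 'sqlite')} "
        f"database and a {parts.get('ui', 'plain HTML')} UI, deployable on {parts.get('host', 'any host')}."
    )

print(scaffold_prompt("/framework react /data postgres /ui tailwind /host fly.io"))

That prompt could then be dropped into a gpt-engineer project folder to kick off the build.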
Very possible... I'm looking hard into it =P
Why not use gpt to help you build this?
My Goodness!! This is the scariest demo I have seen yet - in a good way :)
Dave could you make a tutorial showing how to use "Function Calling Capability" with Flowise? I see they added the option in Flowise.
Yes, that would be awesome 🙏🏼
Would be cool to find out how we can continue to expand the existing program by adding new features, etc
As I understand it, iterating is the next step everyone's working on.
My AI does this.
@@drlordbasil care to share?
Hmm. Good stuff, I expect great things from you, so I'll subscribe. Gotta keep up with this wave. Maybe one day these can actually create entire high-end apps from your prompt, and in that sense it might be more important to focus on the architecture right now than to start coding it, perhaps...! :O When the final form of this GPT hits the market, if you have things well thought out and a file ready to be read, maybe you can even create anything in a day. Expecting that is a little like expecting stocks to rise. If that singularity is ever reached, of course.
Dude, great video OVER HERE!
I liked it too OVER HERE
This was a little over my head! But I am subbed!!
I love these projects; however, I hate that it relies on an OpenAI API key, which makes it cost money to use. It would be amazing if it utilized an open source LLM to run the agents against. I know that the open source LLMs are not as good as, say, GPT-4, but using something like WizardCoder would still be somewhat useful for this sort of task. Being able to run something like this completely locally will be a game changer.
How much does it cost typically? I'd love to mess around with this, but I don't want to go broke. I had the same problem with learning Cloud networking.
There are politics involved in making us use the large corporations! @@techyesplc
Amazing. 99% of engineers will be redundant soon. We'll just need one senior engineer to work with the AI. This is awesome.... Unless you're an engineer
That's not possible. You still need a team of engineers who know the code inside and out in order to address security vulnerabilities and other issues the AI may not be able to solve.
Would you board an aircraft built from AI-generated design files provided by a company with one engineer?
@@skateboarderlucc Yes, if it's been tested a few million times. Which the AI can also do...
I'm definitely trying this with some web applications.
Great vid man. Thanks
Hey bro can you teach me?
Hi Dave, I really would like to figure out how to connect a locally running model, or a model in LM Studio, to this. How might I do that?
While this can make things easier, I think in the long run it can be a disaster. I coded an Android app last year, and yesterday I tried to change a few things and upgraded some packages. At first I was like, what the heck is all this code? It took me some time to understand it before making those changes. Now imagine if I ask GPT to create an app for me. There is no way I would read all those files. After a year, I ask GPT to solve the issues or upgrade the app, and if it doesn't work, maybe because my prompt is not good or GPT is trying but can't get it right, then I think it would not be easy. Also, it's taking away the programmer's ability to understand stuff. I remember watching all those videos about GitHub Copilot where developers/comments were saying to use it but not to completely rely on it. Now, with all this AI craze, the exact same thing is happening.
Isn't that the whole point? As a programmer, you may have extensive knowledge, but GPT is there to expedite the process, right? The situation with your app mirrors my own predicament. I'm a complete novice when it comes to programming. With GPT's assistance, I managed to create a Twitch program that downloads streams; it's a basic script with a few add-ons.
However, when I give this script to GPT and request enhancements, let's say I want to incorporate a new feature, the AI restructures my entire script around this new function. This often results in a more complex script, especially when dealing with a substantial amount of code.
Awesome video dave!!!!
You forgot to mention this requires you to have billing enabled. Otherwise you'll always run into a rate limit error!
The next improvement is to crossbreed it with the LangChain Python REPL agent. I'll try to reach the author.
It would be nice to have a video using the feedback, identity, and steps features of this project. Thank you.
@DaveEbbelaar. where?
Wow, very cool! Thanks Dave.
You are a real rockstar, man! Lots of love
Dave, this is very cool. I am using GPT-4 in many of my projects, so I will see if this will streamline my work. BTW, it would be interesting to see if GPT Engineer can iteratively add things to the project it has created, make changes, or fix bugs. Also, does it look at the entire context of the project (is it aware of all files), or is it losing it from time to time so that somehow this needs to be reloaded? I noticed that GPT-4 struggles with this a bit (it forgets some parts of the code after a few interactions). I wonder how it works in this case? Keep up the great work!
Thanks Marek, I am still new to this tool as well, so that is something I have to explore further.
@@daveebbelaar Great video! Can this only work with Python, or can it build full-stack apps with a JS/TS framework, for example??
@@hiranga Works with other languages as well!
@@daveebbelaar hey !
So, did you find a way to make this work as @marekdziubinski850 mentioned?
thanks ! :)
I think we might need to start using a local language model customized with this AGI script, then use GPT-4 as a backend to debug and pick up whatever the lower-end model we are running locally (LLaMA 2, for example) can't do. This will lower cost and help us build a very flexible model with memory modules, arrays, and maybe something I have no idea I'm talking about. It's like a little brother trying to impress his friends by building a car, but he runs into the other room to ask his big brother, who is actually a mechanic, about the different parts of the car; still, the little brother is the one who knows the whole car. The little brother also has a self-fine-tuning loop where it learns and remembers from big brother, and implements the same code structure and cascading logic in new code. I believe this is the key to building this and other AGIs out. We need to harness the power of local LMs mixed with GPT-4 to build a new AGI coder, one that has root access to install and modify services and packages on the running OS as needed when expanding the code. What do you guys think? Do you want to build this with me? I have no idea what I'm doing, but somehow I'm doing it.
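To make the little brother / big brother idea concrete, here is an untested sketch: the local endpoint and payload are placeholders for whatever local server you run, and the GPT-4 call uses the older pre-1.0 OpenAI client style:

import requests
import openai  # assumes OPENAI_API_KEY is set in the environment

LOCAL_URL = "http://localhost:8080/completion"  # placeholder for a local LLaMA-style server

def ask_little_brother(prompt: str) -> str:
    # Try the cheap local model first
    resp = requests.post(LOCAL_URL, json={"prompt": prompt, "n_predict": 512}, timeout=120)
    return resp.json().get("content", "")

def ask_big_brother(question: str) -> str:
    # Escalate only the parts the local model can't handle to GPT-4
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def solve(task: str) -> str:
    draft = ask_little_brother(task)
    if not draft.strip() or "TODO" in draft:  # crude "I'm stuck" check
        hint = ask_big_brother(f"Help with this task:\n{task}\n\nMy draft so far:\n{draft}")
        draft = ask_little_brother(f"{task}\n\nUse this advice from the senior model:\n{hint}")
    return draft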
MetaGPT performs better; it can generate real files with code, while GPT Engineer gives only a codebase. MetaGPT simulates an AI software company with multiple AI agents collaborating together. Multi-agent is definitely superior to a single agent.
If anyone is getting 'current quota exceeded', you need to add a debit/credit card to your OpenAI account, even if you have a GPT-4 subscription.
Thanks. That was 3 days ago, and looking at the structure now, it is totally different.
Personally, I would get anxious if I got a model with 0.99 R² XD. Great content though, subscribed.
Now create an agent for GPT Engineer that comes up with its own projects and takes the human out of it completely.
I basically have this - I'll try to combine it this weekend.
I found out that GPT is basically very good at filling a backlog with user stories in a Scrum-framework-oriented process. Those are a perfect way to create microservices that solve these stories with a tool like this, because they're way smaller in scale than a whole project.
It still needs a project idea, but that's basically it. That can be generated with a prompt as well, so that it generates random projects =D
This should work well at the backend level; the problem might be combining these microservices in the frontend.
Put some autonomous coding agents in the cloud that figure out new ML architectures through a genetic algorithm, then benchmark those on popular datasets and mutate the architectures until they match GPT in performance, that is, for the same number of params. Then scale the best models up and voilà, you've surpassed transformers. If I can do that soon, the mega caps will do it to perfection.
Give it half a year.
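A toy sketch of the loop I mean; everything here is made up for illustration, and the benchmark is a random stand-in, not a real evaluation:

import random

def benchmark(arch: dict) -> float:
    # Stand-in for training the candidate architecture and scoring it on a real benchmark
    return random.random()

def mutate(arch: dict) -> dict:
    # Randomly halve or double one hyperparameter of the architecture
    child = dict(arch)
    key = random.choice(list(child))
    child[key] = max(1, int(child[key] * random.choice([0.5, 2])))
    return child

population = [{"layers": 12, "heads": 8, "ffn_mult": 4} for _ in range(8)]
for generation in range(20):
    ranked = sorted(population, key=benchmark, reverse=True)
    parents = ranked[:4]                                        # keep the best half
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]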
I found out that GPT is basically pretty good at filling backlogs in a Scrum framework with user stories, and it creates whole theoretical projects if done right.
Combining this with GPT Engineer, so that it creates microservices to solve these user stories - which are limited in scale and therefore easier to do - might lead to even better results, especially if the whole Scrum process is generated with different GPT agents in different roles that are able to correct themselves and enhance the problem and the solution along the way.
I hope GPT-3.5-16k does the job as well; I'll test it this weekend and integrate it into my scrumGPT setup - I'm still on the waitlist for GPT-4 -.-
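Roughly what I have in mind, as an untested sketch: the prompts and folder layout are my assumptions (gpt-engineer expects a plain-text file called prompt inside each project folder, but check the repo README for the exact CLI), and the OpenAI call uses the older pre-1.0 client style:

import os
import openai  # assumes OPENAI_API_KEY is set in the environment

def generate_user_stories(project_idea: str) -> list[str]:
    # Ask GPT-3.5-16k to act as a product owner and fill a small backlog
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k",
        messages=[
            {"role": "system", "content": "You are a product owner filling a Scrum backlog."},
            {"role": "user", "content": f"Write 5 short user stories, one per line, for: {project_idea}"},
        ],
    )
    return [s.strip() for s in resp.choices[0].message.content.splitlines() if s.strip()]

def write_gpt_engineer_projects(stories: list[str], root: str = "projects") -> None:
    # One small gpt-engineer project per user story, so each run stays small in scope
    for i, story in enumerate(stories):
        folder = os.path.join(root, f"story-{i}")
        os.makedirs(folder, exist_ok=True)
        with open(os.path.join(folder, "prompt"), "w") as f:
            f.write(f"Build a microservice that implements this user story:\n{story}\n")

stories = generate_user_stories("a habit tracking web app")
write_gpt_engineer_projects(stories)
# then run gpt-engineer on each folder, e.g. gpt-engineer projects/story-0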
"Hope". Why do you not hope to avoid human extinction instead?
@@Ilamarea Would that be too bad? xD
No, to be fair, this is still at a very low level. Basically, it's about having a great number of monkeys and breaking everything down to a level where a monkey can do the job.
Should I add "GPT Engineer" to my resume? lmao
This is awesome! Thank you for sharing!
Thank you for the great video - keep recording.
Thanks! Will do 🙏🏻
Hi Dave!! Thanks a lot for this video tutorial!! Dave, I have a question: can I use a Jupyter notebook for this? Thanks!
Excellent video as always top class 🙂🙏
Keep up the good work!
looks amazing!
Thank you, Sir, for this beautiful, fast intelligence. I have just started ML and wanted to know: will it work in a Jupyter notebook?
What is the Python interactive terminal you are using? An extension?
Great video! The microphone seems to be a little too sensitive though; there are high-pitched sounds that are a little bothersome.
Crazy stuff ! Here we go !
@Mr_DaveEbbelaar no
Nailed it bro ❤
🙏🏻
Looks awesome!! How much did the request to OpenAI cost?
I had no idea what you made with GPT Engineer. Lol, you machine-learned how to sort data from files?
That interactive window... how are you doing that? Is that built into VSCode?
th-cam.com/video/zulGMYg0v6U/w-d-xo.html
I am a super beginner with no experience in coding, but I am starting a college education in programming focused on AI. I was messing around with ChatGPT and trained it so it could make notes about any text I gave it, then generate test questions and evaluate my answers. Is it possible to create an app that can do the same using GPT Engineer?
Very cool thanks for this
Can you keep tweaking with iterations or do you have to manually do it again after?
Two questions (which have been asked individually elsewhere) and a thought about incorporating LangChain in the process:
1. Other than OpenAI models (GPT-4 and GPT-3.5), does it support other LLMs? And, particularly, what might it take to adapt it for some open source LLMs (e.g. Falcon-7b)?
2. As it stands now, can it be used effectively to develop code which utilizes a well-known API (particularly one that adheres to gRPC) that is likely to have been part of the GPT-4 or 3.5 training sets? (I intend to find out later today, but thought I'd ask.)
In thinking about supporting 'engineering' on an API for which I would have to supply the detailed description, my first thought would be to use one of the 'splitter' functions of LangChain to handle what is likely to be pretty extensive documentation (e.g. the protobuf definitions, and client stubs that would be generated automatically by the creation of the API).
Do you (or anyone else reading this) have knowledge of an effort that has used LangChain in this fashion?
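For context, this is roughly what I have in mind with the splitter, as an untested sketch; RecursiveCharacterTextSplitter is an actual LangChain class, but the file path and chunk sizes are just placeholders:

from langchain.text_splitter import RecursiveCharacterTextSplitter

# Load the generated API documentation / protobuf definitions (placeholder path)
with open("api_docs/service_protos.md") as f:
    api_docs = f.read()

splitter = RecursiveCharacterTextSplitter(
    chunk_size=2000,    # keep each chunk well under the model's context limit
    chunk_overlap=200,  # overlap so definitions aren't cut mid-message
)
chunks = splitter.split_text(api_docs)

# Each chunk could then be summarized or embedded, and only the relevant ones
# fed into the gpt-engineer prompt for the module that talks to the API.
print(f"{len(chunks)} chunks, first one starts with:\n{chunks[0][:300]}")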
This would be awesome
Yeah, I've already done this with the OpenAPI standard and Falcon 180B with LangChain. Agents can now easily create endpoints for web servers.
Is this Python-specific, or can you essentially create a project in any other language too, for example C#?
Works with any language that ChatGPT understands
what do u use for the mockups
Does this also work in Visual Studio 2022 with C# client and web projects?
Is the Plus plan ($20/month) enough to use this, or do I need to pay for the API key?
Yo, what theme do you use. Looks amazing
Thanks! Atom One Dark
Good job Dave. In my case, gpt-engineer created all the Python code in a single file called all_output.txt. Is that expected? The files are not placed in the workspace like yours.
same here
Same here. Is it because we're using GPT-3.5?
Thank you
How much does it cost to run the ChatGPT API for something like that?
@Dave,
Can it build data engineering pipelines using tools like Airflow or dbt?
How did you change from GPT-4 to 3.5 again? Really nice video btw.
I tried to do some work that requires a TensorFlow installation, but it does not ask me to install it, nor does it install it itself. Should we install dependencies ourselves?
Does this only build apps in python?
I made a version with LangChain to use my personal dataset. And it's much better with it!
Are you willing to share? I have only used LangChain once, and my immediate thought while watching was LangChain with my own files that it could use as an example.
It's similar to Noteable, the plugin that executes code.
Falls far short of the hype and expectations.
Could you perhaps explain how to use the --steps use_feedback option on codebases which haven't been generated by it beforehand? So we can expand our own projects.
How can the model keep the context of all your code? Maybe it's because the code is not that much? But what about bigger systems? I guess this still has some limitations (context).
Hmmm. Tried this but it says I exceeded my current quota. Tried to change the model but it shows the same message. Please add a disclaimer first that we need to pay something, and you didn't even say where to get the OpenAI API key. Anyway, more power to your channel! Keep sharing knowledge! Thanks!
That is pretty, pretty neat, although can I somehow give it an entire API description and ask it to create a module to connect to this API? I mean, it would need the API documentation, and we have something like a 2,000-character limit, I guess? Or can I copy-paste 20,000 characters through this tool so it spoon-feeds ChatGPT this data? What is the best approach here - please advise.
The easiest way to handle big chunks is to break them down into parts and build something that GPT can navigate through.
Like:
Give one GPT agent a list with all endpoints and one sentence that describes each. Let it choose what comes in handy. Then pull the docs for that endpoint (stored in cache, file, or DB) and give them to the next agent (like GPT Engineer).
I guess these frameworks work best with small chunks of data and when creating microservices, because the scope is smaller and the model can't drift away as easily.
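A rough, untested sketch of that routing step; the endpoint list is made up and the OpenAI call uses the older pre-1.0 client style:

import openai  # assumes OPENAI_API_KEY is set in the environment

# One-sentence index of the API (made-up endpoints, just for illustration)
endpoint_index = {
    "GET /streams": "List live streams with pagination.",
    "POST /clips": "Create a clip from a stream segment.",
    "GET /users/{id}": "Fetch a user's public profile.",
}
endpoint_docs = {}  # full docs per endpoint, loaded from cache/file/db (omitted here)

def pick_endpoints(task: str) -> list[str]:
    # First agent: given the task and the one-line index, choose the relevant endpoints
    index_text = "\n".join(f"{name}: {summary}" for name, summary in endpoint_index.items())
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Task: {task}\nEndpoints:\n{index_text}\n"
                       "Reply with only the endpoint names needed, one per line.",
        }],
    )
    return [line.strip() for line in resp.choices[0].message.content.splitlines() if line.strip()]

# Only the full docs for the chosen endpoints then go into the gpt-engineer prompt,
# instead of pasting 20,000 characters of documentation at once.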
Can it create a Jupyter notebook with Markdown annotations?
Is there a way to put a CSV dataset in the folder and provide a prompt so that GPT Engineer looks the file over and creates something like an LSTM or XGBoost model for time series forecasting tailored to that specific dataset?
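One way this might work, as a hedged sketch: as far as I know gpt-engineer only reads the prompt file, not other files in the folder, so you could embed the CSV's schema and a few sample rows into the prompt yourself. The file names here are placeholders:

import pandas as pd

# Embed the dataset's actual schema and a few sample rows into the gpt-engineer prompt
df = pd.read_csv("projects/forecasting/sales.csv")
sample_rows = df.head(5).to_csv(index=False)

prompt = (
    "Build a Python project that trains an XGBoost model for time series forecasting "
    "on the file sales.csv located in the project root.\n"
    f"The CSV has these columns: {', '.join(df.columns)}.\n"
    f"Here are the first rows:\n{sample_rows}\n"
    "Create a date-based train/test split, a walk-forward validation loop, and a script "
    "that prints RMSE on the test period."
)

with open("projects/forecasting/prompt", "w") as f:
    f.write(prompt)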
Dave, is your OpenAI API key for GPT3.5 or GPT4?
I previously played with Smol AI but it was useless with GPT3.5 API.
This demo was with GPT4
Yep. GPT4 should be included in your title. NOT everyone has access to GPT4 API yet. Me included :(
@@pleabargain Sorry about that. Did not know that it wouldn't work with GPT3.
@@daveebbelaar np
Gosh, I am trying to follow your tutorial, but the folder structure was changed yesterday. It's making it challenging to run the prompt. Does anyone know how to run the prompt with the new structure?
So is it all just in Python?
as of 6/19 the repo has been severely changed and this video needs to be updated :(
first human.
😂
prolly a bot that wrote this lol
Holy shit 🤯 They said it would take 5 years for devs to lose their jobs; seems like next week is more likely.
But does it only make python code?
How about using real open source AI, which runs locally?
Can this engine be used in Xcode for Apple app development?
It seems great, but the problem is the repo has already changed, and the requirements to get it working have changed too. I can't seem to line up an install video with how the repo looks at any given time... main.py is not in the same directory anymore.
Yep, I spent hours and hours on it and can’t get it to work now - even with help from GPT4! Shame, this looked really good.
I have access to GPT-4; do I need credits in my account to use its API key?
What is the interactive Extension he uses to run script blocks one at a time?
Check the description 👌🏻
Very nice bro. I have a trading dataset that tells me whether a strategy works, yes or no. I would like GPT Engineer to search for which machine learning model gives me the best score. How can I introduce the different models? Thank you bro 👍👍
I could see AI-assisted coding like this in C++ or JavaScript.
You need a GPT-4 Key....
The repo has changed, as well as the instructions; maybe you should consider doing another video.
Yea everything changes so fast... The most up-to-date information can be found on the repo page itself.
@@daveebbelaar Thanks, for some reason my subscription is not active, so I cannot use GPT-4 at the moment. I will experiment when OpenAI fixes my subscription.
Can't follow along. You need an additional payment plan that charges based on usage, not a flat rate like ChatGPT Plus. Too risky for me; I could potentially spend all my savings.
Amazing❤❤❤
I am interested in this LLM; can it code in any other languages besides Python?
Dave, it looks strange, but when I tried the same example that you showed in this demo, I don't see any Python files generated. Any idea?
I just learned that it only works with GPT4
I am trying to use VS Code to follow your tutorial. I have installed a Python environment, created the conda env gpt-engineer, and started "code ." under gpt-engineer, but when I start the terminal it is still set to "PS".
Typing conda info --envs in the terminal does show the environments:
gpt-engineer * C:\Users\kehsa\miniconda3\envs\gpt-engineer
gpt-gdrive       C:\Users\kehsa\miniconda3\envs\gpt-gdrive
langflow         C:\Users\kehsa\miniconda3\envs\langflow
pix2pix          C:\Users\kehsa\miniconda3\envs\pix2pix
but activating the environment does not seem to work:
PS C:\Users\kehsa\gpt-projects\gpt-engineer> conda activate gpt-engineer
PS C:\Users\kehsa\gpt-projects\gpt-engineer>
If you know why, I would greatly appreciate it.
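A likely cause, just a guess from the symptoms, is that conda has never been initialized for PowerShell, so conda activate can't change the active environment there. Something to try: run these once in that terminal, then open a new one:

conda init powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
conda activate gpt-engineer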
what watch do you have?
WHOOP band
There's a weird hissing, or is my speaker broken?
I wish this AI had come out before 2017; I could have created my colleagues' projects and saved tons of money 😂😂😂
Impressive, very nice...let's see the API cost.
It is not so bad. They reduced the price with their latest update. I ran 2 projects for under 2 bucks.
It says ModuleNotFoundError: No module named 'typer', despite typer clearly being installed.
Wait until Devin comes out
Hey Dave, how do I give gpt-engineer additional clarifications/prompts after my initial clarifying prompts? For example, gpt-engineer made me a simple user interface, but it doesn't perform one of the actions I wanted it to. Is there a way for gpt-engineer to add onto the work it has already done?
Did you find out?
It’s a brave new world…