I love how just a short while ago this would have been mind boggling to anyone watching - many years or decades away, but now it's just a standard capability of AI.
yeah haha, it's crazy how fast you adapt to a new technology. tnx for tuning in :)
I feel like this is very ripe for a whole host of automation, but I can't seem to figure out what needs to happen. I've been hesitant to use OI, but your video has given me more ideas to try and test this out. Thanks!
cool, yeah for me it's more of an experiment to see what the future of interacting with systems or agents could look like
Truly amazing, I’ve only been using it for 36 hours but the amount of things it can do already is insane.
That's flipping awesome. You can almost curate your code to have the "AI" evolve itself (magic of python)
Adding hot keys for this to create a fast-key directory for some kind of automation menu would be sick. For those with local compute restrictions, LM Studio won't work, and OI doesn't allow for anything other than OpenAI API keys. Otherwise you would see way more content on this application. Adding Groq or Mistral API keys for endpoint inferencing would be sick. Imagine the automation that could be achieved 😅
Yeah I had to back off OI due to racking up $10 a day in gpt4 usage
Pre-built function libraries within OI would be great while we wait for more capable models. Also, using the more capable model to build those functions, but letting less capable, faster models execute those functions and process the data.
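The idea above — have an expensive, capable model author a function once, then let cheaper models (or plain code) just call it — could be sketched roughly like this. This is a minimal illustration, not Open Interpreter's actual API; `strong_model_generate` is a hypothetical stand-in for a capable-model codegen call.

```python
# Sketch: a registry that caches functions "authored" by an expensive model,
# so the costly generation step runs once per task and every later call
# reuses the compiled function.

def strong_model_generate(task: str) -> str:
    # Placeholder for an expensive LLM call that returns Python source.
    # Here it just returns canned code so the demo is self-contained.
    return "def word_count(text):\n    return len(text.split())"

class FunctionLibrary:
    def __init__(self):
        self._cache = {}

    def get(self, task: str):
        # Only invoke the expensive model the first time a task is seen.
        if task not in self._cache:
            source = strong_model_generate(task)
            namespace = {}
            exec(source, namespace)                    # compile generated code
            name = source.split("(")[0].split()[-1]    # recover function name
            self._cache[task] = namespace[name]
        return self._cache[task]

lib = FunctionLibrary()
counter = lib.get("count words in a text")   # expensive call happens once
print(counter("open interpreter is fun"))    # → 4
print(lib.get("count words in a text") is counter)  # → True (cached)
```

In practice you would also want validation of the generated code before caching it, but the cache-then-execute split is the core of the suggestion.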
Love Open Interpreter!
There is a new model called OpenCodeInterpreter, supposedly trained to deal with errors. I've downloaded it, but haven't tried it in Open Interpreter yet.
PS: Could not find a GGUF for that model, so I had to run it in Text Generation Web UI. Preliminary results in TGWUI looked good.
okay, cool. will def check that out! thnx for the tip, and for tuning in :)
You seem to have gotten lucky and had it working without too much trial and error. During my session it couldn't produce code that fulfilled the requirements, and sometimes it went off the rails, looping the code generation. Note: I used Mixtral-8x7B
I thought the same thing, but maybe it's because he is going small step by small step with clear and detailed instructions.
@MrWuzey You might be right - small steps. Also, as a good programmer he seems to be directing the model towards a correct solution.
yeah, I def had some issues too. my experience is that Mixtral sometimes stops following my instructions, which might be part of the issue
Already a member, this project is very interesting, waiting for the github code to be released
awesome :) should be up tomorrow hopefully, tnx for tuning in!
Thanks for the amazing video! I love it when YouTubers get to the point fast enough. Although, I'm confused at times like @05:12 where we see error messages in green, but apparently there was no error and it actually worked.
Another question: do you think using a model that supports function calling (like the ones fine-tuned by Trelis) would work better here?
tnx :) yeah that is a bit strange, i don't know 100% what's happening there, since i did not write the code. i think that could be interesting, will take a look. thnx for tuning in :)
Most open source models at this time will struggle with Open Interpreter. OpenAI's GPTs have been better "polished" for a wide range of tasks: they were fine-tuned on a ton of errors, and they also understand instructions better. Open source models are getting better quickly, however.
Moreover, and I don't know how much this matters, but the ChatGPT Code Interpreter apparently runs in a Jupyter Notebook, as reported by leaks.
yeah, that is exactly my observation too, the instruction following gets too weak, and it kinda goes off the rails. i have had super results using Claude 3 Haiku tho
Great vid as usual. Curious about the cost per completion for each task via the OpenAI APIs (GPT-4 / Turbo / 3.5, etc.)
I killed a 50 dollar subscription in a day.
:S:S
tnx :) yeah this uses a fair amount of tokens, so i would only use 3.5-turbo. the best results without a doubt have been with Claude 3 Haiku
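To see why an agent loop burns money so fast, here is a back-of-the-envelope estimate. The prices and token counts below are illustrative assumptions (not current rate cards): an agent resends the growing conversation history every turn, so input tokens dominate.

```python
# Rough per-session cost estimate for an agent loop that resends its
# conversation history each turn. Prices below are ASSUMED examples,
# USD per 1K tokens as (input, output) -- check the provider's pricing page.

PRICE_PER_1K = {
    "gpt-4":         (0.03, 0.06),
    "gpt-3.5-turbo": (0.0005, 0.0015),
}

def session_cost(model, turns, tokens_in_per_turn, tokens_out_per_turn):
    p_in, p_out = PRICE_PER_1K[model]
    total_in = turns * tokens_in_per_turn    # history resent every turn
    total_out = turns * tokens_out_per_turn  # generated code / replies
    return (total_in * p_in + total_out * p_out) / 1000

# 50 turns, ~4K tokens of context in and ~500 tokens of code out per turn:
print(f"gpt-4:   ${session_cost('gpt-4', 50, 4000, 500):.2f}")    # $7.50
print(f"gpt-3.5: ${session_cost('gpt-3.5-turbo', 50, 4000, 500):.2f}")
```

Under these assumed numbers a single heavy GPT-4 session lands in the same ballpark as the "$10 a day" figure mentioned earlier in the thread, while 3.5-turbo comes in under 15 cents.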
Great tutorial! Could you let me know where the config.yaml file is located? Thanks
thnx :) it should be in the /test folder, i think
It is amazing! Can you make a video about using AI to make mobile apps?
Really awesome! I want to ask about the PDF part. You used a library that was already installed. Can Open Interpreter install it itself?
can it build applications and games with Python? like if i use DeepSeek 70b or something in LM Studio with it
yeah, but i feel the scope of games it can handle is kinda small atm. it needs much improvement for bigger tasks, in my experience! tnx for tuning in :)
how do i get this working with one script on a Raspberry Pi 5? that would be cool
Everyone is a programmer now, as long as they can speak English!
Nice:)
Can you help me understand how the follow-up questions can be achieved in a traditional bot like th-cam.com/video/vp-k9jPTQrQ/w-d-xo.html? Based on the questions, the follow-up questions need to be populated with some dynamic controls. Is it possible to achieve the same with the new GenAI approach (without defining any static workflow template)? Any thoughts? Thank you
Sebastian Vettel
dolphin-mistral-7b woohoooooooo!