Hey! With this course from beginning to end, you will become familiar with AutoGen and be able to create your own AI agent workflows. Like, subscribe, and comment 😀 Have a good day coding!
Oh no coding again?
I love that your profile picture features both you and your wife. I'm subscribing not only because you're a great teacher but also because you proudly display the love of your life. 🥳❤🥳❤🥳❤🥳❤🥳❤
@@StynerDevHub Thank you! I really appreciate your words! Yeah, it wouldn't be possible without her...that's the truth lol
You earned a new subscriber and loyal follower, gentleman. Great speech modulation and clarity.
Thank you so much appreciate this 🙌
OK, so sorry for the caps, mastered it now, THANKS :) The best thing you can do is talk a little slower though, haha, it makes it hard to follow when it's all new. EDIT: Just want to say, anyone not getting this at first, do it a few times and suddenly the penny will drop. Just focus on why he's writing what he writes and get the structure understood; then it's not so hard anymore :)
Okay noted!
I usually do not comment, but I am commenting because you are just awesome. I was confused for the last 3 days about LangGraph vs AutoGen, but this video cleared up all my doubts. Thanks!
Hey thank you so much! I'm so glad to help clear things up 👍
This is truly a fantastic video. I have been trying to learn CrewAI and I just get poor results every time, plus it's a lot more code. I think I am settling on AutoGen, and your video has been a huge help. Thank you, you just earned another sub :)
A wonderful beginner's tutorial. Thanks for providing the code as we can copy paste and test it quickly. Appreciate it.
how did you get it working i get so many issues with it running locally
@AndyPandy-ni1io what issues are you having?
@@TylerReedAI The main thing is, when I build from scratch but want it to run a local LLM, do I still need the config.json, or do I just put the equivalent API stuff in main.py?
Thank you for these videos. I would like to ask that a YouTuber do a video like this that goes straight into using a local LLM, so that those of us without extensive knowledge don't have to piece it together later and hope we have it all right. Quite frankly, anyone looking to set this stuff up is more likely headed to the free version of most, if not all, of this.
You’re welcome! And I hear ya and understand. It seemed it might be easier to get something going with OpenAI but I see your point!
This is awesome stuff. But I really recommend you guys to check the documentation :)
Yes documentation always, especially with how often it gets updated
Great beginner's course on AutoGen! A solid starting point for those new to AI development, and an excellent way to build foundational knowledge.
Thank you for this, I really appreciate it!
Great summary. I've been looking for more examples of Autogen. I'd love to see a comparison of CrewAI vs Autogen and the code behind the test.
Only 11 minutes in, and this is great! Have subscribed!
Haha! My execution of the first script, it pip installed the yfinance library!
Awesome! Good job 👍
@@TylerReedAI Typo in the functions section: Euro to USD should be 1 * 1.1.
Tyler I believe you have one of the best channels on YT for learning AI frameworks. I just finished watching your CrewAI & Autogen videos but I'm not sure when to pick one or the other. Maybe we could get a video explaining the differences in workflow?
Thank you so much I appreciate that!! 🙌. And yes I have been thinking of a video on my thoughts of it. Will try to do that soon
Great explanation and summary of AutoGen!
Thank you 👍
This course is fantastic, thank you!!
Sounds fantastic. I will take it as soon as I can.
Awesome, thank you 👍
Thank you Tyler, Awesome as usual!
Thank you!!
What are your thoughts on AutoGPT? It seems like they're going in another direction. I prefer this: it was easy to set up, and I prefer a CLI to a user-friendly interface, which otherwise makes me feel quite constrained. Also, I'm a complete noob, not a coder, so sorry if I'm not making sense. Thanks for the tutorial.
Hello, is it OK if I use Llama 3 instead of OpenAI with AutoGen? Would the method of function/tool calling be the same with a different LLM, or would it be a different method to implement function or tool calling? The tutorial was brilliant. Thanks.
Yes of course! The local model just needs to support function calling, and if it does then great that’s perfect!
@TylerReedAI thanks for replying, will try
Tyler, excellent video. I learned a lot. God bless you.
Thank you, glad it was helpful!
Quick note: When I copy the code from your github repo, it says human_input_mode="ALWAYS". I didn't notice that and the code completed and asked me for input. I then noticed in the tutorial that you used human_input_mode="NEVER". I made this change and I was able to get the agents to "auto run". Perhaps others might run into this issue as well. Thanks for the tutorial!
Thanks for this! I hope this helps somebody else as well
Wish I would've seen this 6 months ago when you released it!! Phenomenal work, my friend. Subscribed! I'm gonna send you 20k for a new rig when I make my first million! lol Fair?
Thank you for this excellent introduction! I have one question: I would like to have two agents performing an interview with a human on a certain topic. The first agent should ask the questions, while the second agent should reflect on their understanding of the topic and decide whether additional messages are needed. This seems like a good case for a Nested Chat. However, the nested chat seems to be bound to the number of turns you define at the beginning. Is there a way to have the nested agent decide when to finish the interaction?
Hey, I'm glad it has helped, and thank you! So yeah, you can set how many max_turns a chat can have in the nested chat. It is sequential, but for that you may just need to say something in the prompt of each agent. For instance, the AssistantAgent's prompt could say, "... When the task is done, reply TERMINATE." Then the UserAgent checks for that in its termination condition.
res = user.initiate_chats(
    [
        {"recipient": assistant_1, "message": tasks[0], "max_turns": 1, "summary_method": "last_msg"},
        {"recipient": assistant_2, "message": tasks[1]},
    ]
)
Here, in this example, you can increase max_turns where the user initiates a chat with another assistant. I get what you're saying, and I think the answer is no, not exactly. The closest you can get is with prompting. Hope this helps; if it didn't, let me know!
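A minimal sketch of that prompting approach, assuming pyautogen's standard `is_termination_msg` hook (agent names, prompts, and the helper function here are illustrative, not from the video):

```python
def is_termination_msg(msg: dict) -> bool:
    """True when a message's content ends with TERMINATE."""
    return (msg.get("content") or "").rstrip().endswith("TERMINATE")


def make_interview_pair(llm_config):
    """Wire up an interviewer agent that can end the nested chat itself."""
    import autogen  # pyautogen

    # The interviewer is told to emit TERMINATE when it is satisfied...
    interviewer = autogen.AssistantAgent(
        "interviewer",
        llm_config=llm_config,
        system_message=(
            "Ask interview questions one at a time. "
            "When you have enough information, reply TERMINATE."
        ),
    )
    # ...and the user proxy watches for it, so the chat can stop
    # before max_turns is exhausted.
    user = autogen.UserProxyAgent(
        "user",
        human_input_mode="ALWAYS",
        code_execution_config=False,
        is_termination_msg=is_termination_msg,
    )
    return user, interviewer
```

This doesn't give the agent full control over turn count, but it lets the conversation stop early once the model decides it's done.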
@@TylerReedAI Thank you very much, I'll give it a try!
@tylerreedAI: I have a use case at work where I need to load an xlsx file with dealer data and then, based on a user question with a year and month for a particular dealer, calculate the 12-month rolling average of demand (including the current month) for each part of that dealer, plus the previous month's 12-month rolling average, then find the difference between the current and previous rolling averages and come up with a percentage difference in demand. If the difference is 10% or more, create a new file with those entries.
I have it kind of working with a 3-agent group chat for all dealer data, but when I try to add filters like year, month, dealer, or part number, it falls apart.
I would love your input to get it fixed, if you are willing to help.
This would prove to your subscribers that it can work in a real scenario rather than just hello worlds.
Thanks
I saw a video that had little game-dev agents working together. I want to make a game in Unreal Engine using little AI agents as helpers, but I'm not entirely convinced that AI agents are all the way there yet. What can be done in this regard, to your knowledge? I need: one agent to interact with me as my liaison to the other agents, to help prioritize which agents operate and which tasks they do; one with screen reading and keyboard/mouse control to operate specific programs (Unreal, browser, etc.); one to scrape websites for data; one to compile that data into tables; one that can learn Unreal Engine; one that codes in multiple languages; one for front end; one for back end; and one to operate local AI image generation on my workstation to make 2D pictures for inventory item sprites and UI design iterations for me to pick through.
Another observation: if I simply state the user query as "who are you and what is your purpose?", it starts initiate_chat() with a couple of agents and they talk about life and AI, etc. It should be intelligent enough to know that the question doesn't require a swarm of agents; users invariably ask Hi / Hello / Who are you questions when starting a chat. How can we address these trivial queries within this framework and not spawn unnecessary agents and a group chat?
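One pragmatic workaround, sketched under assumptions (the pattern list, threshold, and function name are all made up, not part of AutoGen): pre-filter the query in plain Python and only start the group chat for substantive tasks.

```python
# Hypothetical pre-filter: answer greetings and small talk with a canned
# reply (or a single cheap LLM call) and only start the agent group chat
# for real tasks. Patterns and threshold are illustrative.
TRIVIAL_PATTERNS = ("hi", "hello", "hey", "who are you", "what is your purpose")


def needs_group_chat(query: str) -> bool:
    q = query.lower().strip().rstrip("?!.")
    if any(q == p or q.startswith(p + " ") for p in TRIVIAL_PATTERNS):
        return False
    return len(q.split()) > 4  # very short queries rarely need a swarm


# usage sketch:
# if needs_group_chat(user_query):
#     user_proxy.initiate_chat(manager, message=user_query)
# else:
#     print("I'm an assistant built on AutoGen. Give me a task!")
```

It's crude, but it keeps "Hello" from spinning up the whole swarm; a more robust router could use a single classification call to a cheap model instead of keywords.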
Thanks for this awesome tutorial.
Looks like Llama 70B has some weird issues when trying the sequential chat!
You are welcome! Oh really? That's good to know, thank you for the testing! Hopefully they get better lol
This is a great video. My confusion is: this is not AutoGen Studio 2.0, but simply AutoGen with custom Python files, is that correct?
Right, this is pyautogen, the Python version of AutoGen. AutoGen Studio is the no-code web version of it.
Thank you :) you are awesome
Thank you :)
Hey Tyler! Amazing video
I am non-technical and just wanted to know if I can use Jupyter Notebook instead of PyCharm. If yes, do I need to create a separate JSON file to hold the OpenAI API key like you did for the two-way chat?
Thanks
Thank you! And yes you can absolutely use jupyter notebook instead of pycharm. You do not need to create a separate file, you can just add the model and api key to the config list separately. If you need help, email me and I can give you a sample code! tylerreedytlearning@gmail.com
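For anyone wanting the inline approach described above, here is a minimal sketch of what a notebook cell could look like (the key is a placeholder and the cache_seed value is just an example):

```python
# Sketch: define the config list inline in a notebook cell instead of
# keeping a separate OAI_CONFIG_LIST json file.
config_list = [
    {
        "model": "gpt-3.5-turbo",
        "api_key": "sk-your-key-here",  # paste your key, or load it from an env var
    }
]

llm_config = {"config_list": config_list, "cache_seed": 42}

# then pass llm_config to your agents, e.g.:
# assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
```

Functionally this is the same configuration the JSON file would have provided; it just lives in the notebook.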
This is excellent! Thank you for your efforts in bringing out this tutorial. Can I ask: can we add PDFs to agents? Like, ask agents to digest PDFs at particular points in the workflow and contribute to the discussion based on what they learn there?
Hey thank you, and absolutely you can. This would be using RAG. I will have a video soon on how to do just this!
How can I do it in AutoGen Studio? I am in the conda environment but it doesn't launch on my desktop.
So once you have it installed with pip install autogenstudio, run it like this:
autogenstudio ui --port 8080
Then it will show a localhost URL in the terminal
Great!
Amazing tutorial, very clear and packed full!
Thanks Tyler. I see you suggested going with the OAI config instead of a .env, and they both appear to do the same thing. What's the difference?
Hey, yeah, there really is no functional difference; it's just how they get the properties. You could even just import os and use os.environ["OPENAI_API_KEY"] to pull the key from your environment. The OAI_CONFIG_LIST json is just the way I like to do it.
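Both routes end up producing the same config list; a quick sketch (the key is a placeholder, and `setdefault` is only here so the snippet runs standalone):

```python
import os

# Option 1: load from an OAI_CONFIG_LIST json file
# config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

# Option 2: read the key from an environment variable.
# Note: os.environ is indexed with brackets, not called like a function.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")
config_list = [
    {"model": "gpt-3.5-turbo", "api_key": os.environ["OPENAI_API_KEY"]}
]
```

Either way, what the agents receive is the same dict; the only difference is where the secret lives.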
legend
Thank you 🙏
Hi Tyler, I was following along with your repo and it vanished mid-tutorial. Any ideas? Great work btw.
Hey thank you, what do you mean…like the repo doesn’t exist?
@@TylerReedAI I was following along with your /autogen_beginnner_course repo and I refreshed at one point and got a 404. It's gone.
I see, I had a different one and migrated because of some issue, I apologize. Try this: github.com/tylerprogramming/autogen-beginner-course
I don't have OpenAI keys as they are costly, but Cohere & Gemini APIs are offered free of cost. It would be great if we could get the same configurations via Cohere & Gemini so that more people can practice freely.
I don't see the Reddit URL you mention at 1:20:28. I tried to see the URL, but I can't make it out when I zoom in on the video.
fixed it, thank you!
Thanks for sharing this video, it helps me a lot.
I have one question: is it possible to dynamically change the base prompt (system_message)?
By "dynamically" I mean that I would like to know how to change the system_message during a conversation.
Hey I'm glad it could help! And I will look into that. I can think of just adding context in each iteration to shape the output, but you would need to set the human_input_mode="ALWAYS".
Is function calling the same as adding skills to agents? If not, can you make a video on adding skills to agents in AutoGen?
Yeah, it's the same idea: give them tools to execute some actions. And yes, I plan on making videos on adding skills with AutoGen Studio.
In order to run Phi-2 locally, what are the hardware requirements? I have 8GB of RAM with an i3 processor.
That may be fine; I only have 8GB of RAM as well. The processor may be an issue, but just try it out and see how long a simple response like "what is 2 + 2" takes. Let me know!
Hi, while using functions it answers 2 + 2 = 4, so how is that different from tools? I am using your exact code from your Git repo.
Tools and functions are very similar; at the end of the day, they're both just Python functions. A couple of things, though: one of the differences is how they interact with OpenAI, and tools are a little more flexible. To be honest, I prefer them over function calling. You can also assign a bunch of tools to an agent and it will decide which is best, whereas a function must be called when assigned to an agent.
Any way to use AutoGen to log in on a website and perform a job?
I mean the functionality where I can describe in text how to log in to a specific website with my credentials and do specific tasks, without manually specifying CSS or XPath elements and without writing (or generating) code for Selenium or similar tools?
Hey, I don't think you can do that with their native tools just yet; however, I know they are working hard (as of last week) on making things like this happen. They mentioned it in a Discord call they had.
Can I request the code written in Next.js (TypeScript) or .NET (C#), or does it work strictly with Python?
You are in luck! They just added .NET support!
Hi, I commented on your other SaaS customer survey video. I am using that code of yours, and I keep getting the error "openai.BadRequestError: Error code: 400 - {'error': "'messages' array must only contain objects with a 'content' field that is not empty."}" even though I have default_auto_reply="...".
A different question: when I followed the exact path of your SaaS customer survey video, it ran the code once and even generated output, but a couple of things I see are:
- I can't see all those requirement interactions between the agents,
- and once the execution is done with code generation, the last thing I get is the same error mentioned above.
Please help!
Oh I see, you also have the default auto reply already. Are you using LM Studio or Ollama for local integration? Or something else?
So, to see all the interactions, just set human_input_mode="ALWAYS" so you can be a part of the conversation. And are you still using Mistral-7B for this?
When should I use register_for_llm and register_for_execution?
So, these are decorators to be used at the top of the Python function. They let the framework know this is a tool to be used by an agent!
@@TylerReedAI Is it possible to coordinate a group chat? Like, I want a specific agent to be called.
Hi, so I want to do this with a locally run LLM. How do I change
[
    {
        "model": "gpt-3.5-turbo",
        "api_key": "sk-proj-1111"
    }
]
to run with, say, LM Studio with Llama 3?
model: the actual model from LM Studio you are using
api_key: "lm-studio"
base_url: "the URL shown in LM Studio as well"
There is a snippet of Python code shown when you start the local server; both the model and base URL can be found there.
import autogen


def main():
    llama3 = {
        "config_list": [
            {
                "model": "Meta-Llama-3-8B-Instruct-GGUF",
                "base_url": "http://localhost:1234/v1",
                "api_key": "lm-studio",
            },
        ],
        "cache_seed": None,
        "max_tokens": 1024,
    }

    phil = autogen.ConversableAgent(
        "Phil",
        llm_config=llama3,
        system_message="Your name is Phil and you are a comedian.",
    )

    # Create the agent that represents the user in the conversation.
    user_proxy = autogen.UserProxyAgent(
        "user_proxy",
        code_execution_config=False,
        default_auto_reply="...",
        human_input_mode="NEVER",
    )

    user_proxy.initiate_chat(phil, message="Tell me a joke!")


if __name__ == "__main__":
    main()
I am not getting the .py file at 10:54; the files are not saving for me. I am on Mac, too.
What model are you using? Are you using OpenAI or a local model?
@@TylerReedAI openAI api :)
Try gpt-4o; I just tried it and it created the image and saved the code. Let me know if that worked. Sometimes 3.5-turbo is weird.
Well, I wonder: the first program runs with no errors, but it doesn't create the coding folder and doesn't create or run the files where I'd see the chart, even if I create the folder beforehand, and even in mode "NEVER". So it only produces output in the log, but not the resulting scripts. Any idea? I tried with Python 3.10.
I also don't see the 3 dots in the output log...
Found the reason: after creating the project, I have to select conda as the environment in the third tab and use Python 3.10.11. It seems there is a problem with the 3.10.6 I installed for Automatic1111.
Sorry for the late reply; I'm glad you got it figured out. Yeah, this is why I'm soon going to be creating Docker images so everybody can have the same workflow with the same settings I have. Then we won't have issues like this.
I'm at 10:45 in your tutorial and my code just popped up a META & TESLA stock price graph! Just one issue: it succeeded, with the Assistant sending "TERMINATE", but then "user" doesn't stop sending empty messages to the Assistant, and the Assistant keeps responding "good bye" / "feel free to ask more questions"... in an infinite loop. CTRL-C helped me get out of there! (^_^)
I'm glad you got the graph! Yeah, sorry, it happens, but I will try to update the code with better termination replies and prompts so you don't run into this issue as often. But yeah, Ctrl + C gets you out of it :D
where can i find the full code?
Hey it should be in a link to my GitHub in the description! Let me know if it works or not
It doesn't work from the first 10 minutes.
An xgboost issue.
What’s the error you’re getting? Haven’t seen this yet
When I ran the code, I got the following repeated about 12 or more times. Maybe we need to limit replies?
Assistant (to user):
If you have any more questions or need assistance in the future, please feel free to ask. Have a great day! Goodbye!
TERMINATE
--------------------------------------------------------------------------------
user (to Assistant):
I also did not get the same results you did, but I now think I know why. Since I have Docker running, I set "use_docker" to True. When I set "use_docker" to False, I get results closer to yours.
I was thinking I needed to use the docker executor, but that causes other issues. You might want to try using docker and see if there are any differences. If so, it might be the subject of another video.
I had more consistent results when I set the temperature to 0, and set use_docker to false.
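The two settings mentioned above, written out as a config sketch (model name, work_dir, and key are illustrative placeholders):

```python
# Settings that gave more consistent results in the comment above.
llm_config = {
    "config_list": [{"model": "gpt-3.5-turbo", "api_key": "sk-placeholder"}],
    "temperature": 0,  # more deterministic replies
}
code_execution_config = {
    "work_dir": "coding",
    "use_docker": False,  # set True to run generated code inside Docker instead
}

# usage sketch:
# user_proxy = autogen.UserProxyAgent(
#     "user",
#     human_input_mode="NEVER",
#     code_execution_config=code_execution_config,
# )
```

With use_docker set to False, generated code runs directly on the host; setting it to True routes execution through a container, which can change behavior (and requires Docker running).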
Gotcha, we talked in discord but yeah it's really interesting to have differences like this.
9:35 Process finished with exit code 0. Not the same as the tutorial. End of game; it's not printing anything.
So it's not outputting anything? It seems that it didn't error out, at least. If you want, either join my Discord and share the code so I can see what's going on, or give me some of it here.
@@TylerReedAI I have the same issue
How does one join your Discord?
@jsoutter there should be an invite in the description! Let me know if that doesn’t work for some reason
After like 3 weeks of fiddling around with AI, I think the way to go is to fine-tune the model itself directly to create agents. There's no need for any tool; the AI itself already has it all.
😂😂
How?
Using Llama 3, that is a viable strategy. However, consider that the AgentOptimizer AutoGen workflow from Zhang and Zhang lets you get the same effect while still using top-of-the-line models.
gpt-4-turbo is currently $30 per million tokens. Until the SLM agent swarm gets traction, this is going to be the best option.
Nope