The Design Philosophy and whole architecture for taskgen was a pleasure to watch. Got to learn so much from it.
I would love to contribute to the project. Are there any ideas/features we want to develop next? I have joined your Discord channel as well.
Thanks for the affirmation!
I am intending to build a lot of different memory structures, you can help with that!
Updated video here: th-cam.com/video/F3usuxs2p1Y/w-d-xo.html
Firstly, this explains the inner workings well. (I had jumped directly into the code and could only understand 70-80% of it, but watching this video made it much clearer what's happening overall.)
So, is there any way to track the number of API calls made for a given task, as well as the total number of input tokens sent and the total number of output tokens generated? I looked briefly through the code and could not find it. I would be happy to contribute this functionality if it's wanted.
Indeed, we could have a counter in the llm function to track this. You could also implement this yourself in your own custom llm function.
If you would like to contribute this functionality for the default llm, you can directly modify the chat function in base.py.
Thanks!
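The counter idea above can be sketched without touching the codebase at all, by wrapping whatever llm function you pass in. This is a minimal, hypothetical sketch: it assumes the custom llm callable takes `(system_prompt, user_prompt)` and returns a string, and it counts characters as a rough proxy for tokens (for exact token counts you would read the `usage` field from the provider's API response inside your llm function instead).

```python
class UsageTracker:
    """Wraps any llm function and counts calls plus input/output size.

    Character counts are a rough proxy for tokens; swap in a real
    tokenizer or the API's reported usage for exact numbers.
    """

    def __init__(self, llm_fn):
        self.llm_fn = llm_fn
        self.num_calls = 0
        self.input_chars = 0
        self.output_chars = 0

    def __call__(self, system_prompt: str, user_prompt: str) -> str:
        # Count the call and the prompt size before delegating.
        self.num_calls += 1
        self.input_chars += len(system_prompt) + len(user_prompt)
        response = self.llm_fn(system_prompt, user_prompt)
        self.output_chars += len(response)
        return response
```

You would then pass the `UsageTracker` instance wherever the framework accepts a custom llm function, and read `num_calls` / `input_chars` / `output_chars` off it after the task finishes.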
@@johntanchongmin That's exactly what I thought after a while. One method is to use a counter in the llm function directly, which would not alter the actual codebase.
Really loved your session. I have been looking at agent frameworks, and most of them felt too bloated. But I have high hopes for this one. Is there a way I can contribute? Please let me know.
@@princemathew9034 Sure, you can go to the GitHub repo and make a pull request anytime.
Do chat with me on my discord to find out more about what we are doing next :)
And we have a session tomorrow as well to run through the latest TaskGen paper.
Discord link is here btw: discord.com/invite/bzp87AHJy5
@johntanchongmin Persisting the agent is a great feature to have. Also, if I have a chat interface or API interface, the user and agent can have a conversation. How do you handle that here?
Check out shared variables for persistent memory. We also have a conversational interface in Tutorial 6 that wraps over the agent.
@@johntanchongmin I directly jumped to creating an agent. Got to see it.
Thanks
Part 2 here: th-cam.com/video/OWk7moRfTPE/w-d-xo.html