Excellent explanation. One thing I did previously is make my code validate the generation and return errors to the LLM telling it to try again. My context was using an LLM to write database queries, so, for example, if it created a query without a LIMIT clause, I would retry the generation with an error attached saying queries without LIMIT are not allowed. This was using OpenAI's function-calling process, where the LLM would generate queries that would get called on the database, and I put this gatekeeper (static checks in code) there to stop it from sending "bad" queries.
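The gatekeeper loop described above can be sketched roughly like this (a minimal, self-contained sketch; `llm` is a placeholder for whatever callable wraps the model, and the check names are hypothetical):

```python
import re


def validate_query(sql: str) -> list[str]:
    """Static checks applied before a generated query reaches the database."""
    errors = []
    if not re.search(r"\bLIMIT\b", sql, re.IGNORECASE):
        errors.append("Queries without LIMIT are not allowed.")
    if re.search(r"\b(DROP|DELETE|UPDATE|INSERT)\b", sql, re.IGNORECASE):
        errors.append("Only read-only queries are allowed.")
    return errors


def generate_with_retries(llm, prompt: str, max_retries: int = 3) -> str:
    """Ask the LLM for a query; feed validation errors back and retry."""
    feedback = ""
    for _ in range(max_retries):
        sql = llm(prompt + feedback)
        errors = validate_query(sql)
        if not errors:
            return sql  # passed the gatekeeper, safe to execute
        feedback = "\nPrevious attempt rejected: " + " ".join(errors)
    raise RuntimeError("LLM failed to produce a valid query")
```

The key design choice is that the validator's error messages are written for the model, not just for logs, so each retry prompt tells it exactly what to fix.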
Yup! That's generally the way to handle it. The instructor library leverages Pydantic's validation mechanisms, making for a great dev experience.
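For anyone curious, the Pydantic validators that instructor hooks into look roughly like this (a minimal sketch; the model and field names are hypothetical, and this shows only the plain Pydantic side, not the instructor retry wiring):

```python
from pydantic import BaseModel, field_validator


class SafeQuery(BaseModel):
    """Response model whose validator rejects queries without a LIMIT clause."""
    sql: str

    @field_validator("sql")
    @classmethod
    def must_have_limit(cls, v: str) -> str:
        if "limit" not in v.lower():
            # instructor feeds this message back to the LLM on retry
            raise ValueError("queries without LIMIT are not allowed")
        return v
```

When such a model is passed as the `response_model`, a failed validation becomes an error message the library can send back to the LLM for another attempt.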
nice work Mike!!!
Thanks!
Thank you for this! I was wondering, does this allow for cyclical workflows as well?
Maybe. :) I haven't tried. I was working with the functionality I had available to me. If you provide reference workflows that are cyclic it might be able to infer from that.
great vid! is magpie accessible now? seems really powerful for custom workflows, would love to jump onboard
Yup. I’ve added a link in the video description.
Very interesting! Can you make a more compact version (3-5 minutes, or 10 max) so people can understand the concept quicker?