I am curious why OpenAI GPT models were used. On a different and somewhat related note, does LlamaIndex support calling various LLMs, including Llama and other open-source models?
What are some use cases for having multiple @step functions that accept the same event? Additionally, do these @step functions that accept the same event run in parallel when the event is broadcast?
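The broadcast pattern being asked about can be sketched in plain Python, without the actual LlamaIndex API: two independent async steps accept the same (hypothetical) event type and run concurrently when it is dispatched.

```python
import asyncio
from dataclasses import dataclass

# Hypothetical event type -- stands in for a user-defined workflow event.
@dataclass
class QueryEvent:
    query: str

# Two independent "steps" that accept the same event type.
async def search_web(ev: QueryEvent) -> str:
    await asyncio.sleep(0.01)  # simulate I/O
    return f"web results for {ev.query}"

async def search_docs(ev: QueryEvent) -> str:
    await asyncio.sleep(0.01)  # simulate I/O
    return f"doc results for {ev.query}"

async def broadcast(ev: QueryEvent) -> list[str]:
    # Both steps receive the same event and run concurrently.
    return await asyncio.gather(search_web(ev), search_docs(ev))

results = asyncio.run(broadcast(QueryEvent("llamas")))
print(results)
```

A typical use case is fan-out: one event kicks off several independent retrieval or processing branches whose results are later merged.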
@LlamaIndex, it looks similar to LangGraph. Why should one choose Workflows over LangGraph?
Looks awesome. Can we get a little auto-help with these 27 different imports to remember each time a new llama-index feature comes out?
One suggestion: can we have functionality like collect events that would check not just the type of the events but also ensure that they arrive in the right order?
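The suggested order-checking collector can be sketched in plain Python (a hypothetical `OrderedCollector`, not part of the LlamaIndex API): it buffers incoming events, verifies each one matches the next expected type, and releases the batch only when everything has arrived.

```python
from dataclasses import dataclass

# Hypothetical events emitted by two earlier steps.
@dataclass
class StepAEvent:
    payload: str

@dataclass
class StepBEvent:
    payload: str

class OrderedCollector:
    """Buffers events and releases the batch only once every expected
    type has arrived, verifying arrival order along the way."""

    def __init__(self, expected_order):
        self.expected_order = expected_order
        self.received = []

    def collect(self, ev):
        # The next event must match the next expected slot's type.
        expected_type = self.expected_order[len(self.received)]
        if not isinstance(ev, expected_type):
            raise ValueError(
                f"expected {expected_type.__name__}, got {type(ev).__name__}"
            )
        self.received.append(ev)
        # Return the full batch only when all expected events are in.
        if len(self.received) == len(self.expected_order):
            return self.received
        return None

collector = OrderedCollector([StepAEvent, StepBEvent])
first = collector.collect(StepAEvent("a"))   # still waiting -> None
batch = collector.collect(StepBEvent("b"))   # order satisfied -> full batch
```

This mirrors the buffer-until-complete behavior of collect events, with the extra order check the comment asks for.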
I don't get the point of this. Why would I use this instead of simple, normal conditions and loops? What is the added value?
I guess because the async handling is better.
It's 2024, we are blessed with AI, and people are putting out good videos on complex topics. But people still refuse to use a decent mic or watch what they themselves create.
What is the difference between the utils "draw_all_possible_flows" and the core "draw_all_possible_flows"?
It was originally in core and has been moved to utils.
What exactly are events referring to? StartEvent, StopEvent, etc.?
Events are just events. They trigger some step to execute, and can also hold extra information needed to execute that step.
The user defines all events themselves, except the StartEvent (which signifies the entry point) and the StopEvent (which stops the workflow).
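The pattern described in this answer can be sketched in plain Python (the real StartEvent/StopEvent are provided by the framework; they are redefined here only for the sketch, and the `DraftEvent` step names are hypothetical):

```python
from dataclasses import dataclass

# Built-in sentinels in the real framework; redefined here for the sketch.
@dataclass
class StartEvent:
    topic: str

@dataclass
class StopEvent:
    result: str

# A user-defined event carrying extra data between steps.
@dataclass
class DraftEvent:
    text: str

def write_draft(ev: StartEvent) -> DraftEvent:
    return DraftEvent(text=f"draft about {ev.topic}")

def polish(ev: DraftEvent) -> StopEvent:
    return StopEvent(result=ev.text.upper())

def run(start: StartEvent) -> str:
    # Each emitted event triggers the step registered for its type,
    # until a StopEvent ends the workflow.
    handlers = {StartEvent: write_draft, DraftEvent: polish}
    ev = start
    while not isinstance(ev, StopEvent):
        ev = handlers[type(ev)](ev)
    return ev.result

final = run(StartEvent("llamas"))
```

The event both triggers the next step and carries the data that step needs, which is the core idea.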
How do you use Hugging Face models with this?
Isn't it the same as LangGraph?
Personally, I really like the LangGraph implementation. It's much more readable than this.
I think a better, more realistic example would make more sense.
Lots of examples in the docs instead! This was meant to explain core concepts and highlight general design patterns of workflows.