It was possible earlier to ask a model to return its response in a specific JSON format via the user prompt, but not all models follow this request. For example, Llama 3.1-3.2 produces broken JSON in most cases, while Gemma 2 almost always follows the rule. It would be nice to test this structured-output behaviour across different models.
Hopefully ChatOllama in langchain-ollama gets updated to accept something other than '' or 'json'. As always, great video. I sincerely appreciate your content.
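The model comparison the comment describes could be checked with a quick script: collect each model's reply and see whether it parses as JSON. A minimal stdlib sketch, with made-up replies standing in for actual model output:

```python
import json

def is_valid_json(reply: str) -> bool:
    """Return True if the model's reply parses as JSON."""
    try:
        json.loads(reply)
        return True
    except json.JSONDecodeError:
        return False

# Made-up replies illustrating the failure modes the comment describes.
replies = {
    "well-formed": '{"answer": 42}',
    "truncated": '{"answer": 42',            # broken: missing closing brace
    "chatty": 'Sure! Here is the JSON: {}',  # broken: prose wrapped around it
}
for name, reply in replies.items():
    print(name, is_valid_json(reply))
```

Running this across many prompts per model would give a rough "valid-JSON rate" to compare Llama against Gemma.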
Great video in Spanish, bravo!!!!
Can it work for any open source LLM?
Let's try Pydantic!
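Pydantic is a natural fit here: instead of trusting raw `json.loads`, the reply is validated against a declared schema. A minimal sketch assuming Pydantic v2; the `Movie` model and field names are made up for illustration:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical response shape -- the fields are assumptions, not from the video.
class Movie(BaseModel):
    title: str
    year: int

# Simulate a model reply and validate it against the schema.
raw = '{"title": "Alien", "year": 1979}'
movie = Movie.model_validate_json(raw)
print(movie.year)

# A reply with the wrong types fails loudly instead of slipping through.
try:
    Movie.model_validate_json('{"title": "Alien", "year": "nineteen"}')
except ValidationError:
    print("reply did not match the schema")
```

`Movie.model_json_schema()` also gives you a JSON Schema you could pass along to backends that accept one, so the schema is defined in a single place.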
Is Structured Output the same thing that the SDK uses when using Tools?
Similar, but tools came out first, a year ago, then a new version a few months back. Structured Outputs is a new feature added in the last couple of days.
👋🏽