Gemini LLM JSON Mode: Generate Structured Output from LLM
- Published 14 Oct 2024
- Learn how to harness the power of Gemini LLM's JSON mode to generate consistent and structured JSON outputs from large language models (LLMs). This video demonstrates step-by-step how to configure Gemini LLM, optimize prompts, and ensure accurate JSON responses.
GitHub: github.com/AIA...
Join this channel to get access to perks:
/ @aianytime
To further support the channel, you can contribute via the following methods:
Bitcoin Address: 32zhmo5T9jvu8gJDGW3LTuKBM1KPMHoCsW
UPI: sonu1000raw@ybl
#gemini #ai #llm
Great video! Thanks. You showed two options: (1) specifying the JSON schema in the actual prompt, and (2) passing the JSON schema in the config via response_schema. Which one is more reliable, 1 or 2?
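For what it's worth, a minimal sketch of option 2 (schema in the generation config) with the google-generativeai SDK might look like the following. The model name and schema fields here are illustrative assumptions, not taken from the video:

```python
# Option 2: JSON schema goes in generation_config, not in the prompt text.
# Schema uses the OpenAPI-subset format that Gemini's response_schema accepts.
recipe_schema = {
    "type": "ARRAY",
    "items": {
        "type": "OBJECT",
        "properties": {
            "recipe_name": {"type": "STRING"},
            "ingredients": {"type": "ARRAY", "items": {"type": "STRING"}},
        },
        "required": ["recipe_name"],
    },
}

generation_config = {
    "response_mime_type": "application/json",  # turns on JSON mode
    "response_schema": recipe_schema,          # schema enforced by the API
}

# The actual call needs an API key, so it is shown commented out:
# import google.generativeai as genai
# genai.configure(api_key="YOUR_API_KEY")
# model = genai.GenerativeModel("gemini-1.5-flash",
#                               generation_config=generation_config)
# response = model.generate_content("List two cookie recipes.")
# recipes = __import__("json").loads(response.text)
```

Because the schema is enforced server-side in option 2, the response text should already be plain JSON with no ``` fences, which is one argument for it being the more reliable of the two.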
Thanks for this amazing video ✨❤I was actually looking for a solution to force JSON output and here you are with the video 😅
return json.loads(response.text.strip().removeprefix('```json').removesuffix('```').strip())
Use this to get the output as JSON (make sure to write "return the response in JSON format" in the prompt). Note that str.strip('```json') removes any of those characters from both ends rather than the exact substring, so removeprefix/removesuffix (Python 3.9+) are safer.
You don't need to strip manually, simply add an instruction in the prompt not to include ``` formatting.
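Since prompt instructions are not always obeyed, a small defensive helper that tolerates an optional fence is a reasonable middle ground. This is a sketch, not from the video:

```python
import json

def parse_json_response(text: str) -> object:
    """Parse model output, tolerating an optional ```json ... ``` fence.

    str.strip('```json') strips any of the characters ` j s o n from
    both ends, which can corrupt the payload; removeprefix/removesuffix
    (Python 3.9+) drop only the exact fence markers.
    """
    cleaned = text.strip()
    cleaned = cleaned.removeprefix("```json").removeprefix("```")
    cleaned = cleaned.removesuffix("```")
    return json.loads(cleaned.strip())
```

The helper is a no-op for clean JSON, so it is safe to apply to every response.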
What about LangChain? How do I get JSON structured output from LangChain?
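One common LangChain pattern for this is with_structured_output(), which binds a schema to the chat model. A hedged sketch, assuming the langchain-google-genai integration; the model id and schema fields are illustrative, not from the video:

```python
from typing import TypedDict

class Recipe(TypedDict):
    """Schema the model is asked to fill (illustrative fields)."""
    recipe_name: str
    ingredients: list[str]

# The call needs langchain-google-genai and an API key, so it is
# shown commented out:
# from langchain_google_genai import ChatGoogleGenerativeAI
# llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
# structured_llm = llm.with_structured_output(Recipe)
# result = structured_llm.invoke("Give me one cookie recipe.")
# `result` is a dict with the Recipe keys, no JSON parsing needed.
```

A Pydantic model works in place of the TypedDict if you want validation of the returned fields.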
thanks for this video
google JSON output documentation is lousy
Sir, can you build an application based on this multimodal Gemini LLM: a medical chatbot where the user uploads an image and, based on the image, the model suggests medicine and a food diet? Please upload that video tomorrow, sir.
amazing
Thanks
Source code link is broken.
Updated. Apologies
Your link is broken, could you update it?
Yes done. Apologies