Dynamic Few-shot Prompting with Llama 3 on local Environment | Ollama | Langchain | SQL Agent
- Published Jul 1, 2024
- This video teaches you how to implement dynamic few-shot prompting with open-source LLMs like Llama 3 using LangChain in a local environment.
In this tutorial, we will follow these steps:
1. Import Llama 3: Begin by loading the Llama 3 model through Ollama.
2. Fetch SQL Data: Connect to your SQL database and fetch the data you need. This involves establishing a connection to a SQLite database.
3. Initialize Few-Shot Examples: Set up the few-shot learning approach by initializing a set of examples that will guide the model.
4. Convert Examples to Embeddings: Transform the few-shot examples into embeddings so the most relevant ones can be retrieved for each question.
5. Create Custom Tools: Develop custom tools tailored to your specific needs (here, tools that interact with the SQL database).
6. Create Prompt: Design the prompt that will be used to interact with the model.
7. Create an Agent with ReAct Logic: Build an agent that incorporates ReAct (Reasoning and Acting) logic. This agent uses the prompt and the dynamically selected few-shot examples to perform tasks interactively.
8. Agent Executor: Implement the agent executor, which manages task execution by the agent and handles the flow of information between the agent and the rest of your system, ensuring smooth and efficient operation.
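Step 2 can be sketched with Python's standard-library sqlite3 module. The video uses LangChain's SQLDatabase wrapper over a real database file; the in-memory database and the `employees` table below are hypothetical stand-ins to keep the example self-contained.

```python
import sqlite3

# Minimal sketch of step 2: connect to a SQLite database and fetch data.
# An in-memory database with a hypothetical `employees` table stands in
# for the database file used in the video.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")
cur.executemany(
    "INSERT INTO employees (name, salary) VALUES (?, ?)",
    [("Alice", 90000.0), ("Bob", 75000.0)],
)
conn.commit()

# Fetch the table names, which would later be injected into the agent's prompt.
tables = [row[0] for row in cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
rows = cur.execute("SELECT name FROM employees ORDER BY salary DESC").fetchall()
print(tables)  # ['employees']
print(rows)    # [('Alice',), ('Bob',)]
```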
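The core of steps 3 and 4, picking the stored examples most similar to the incoming question, can be illustrated without a model. In the video this is done with an Ollama embedding model and LangChain's semantic-similarity example selector; the toy bag-of-words "embedding" and the example list below are assumptions made only to show the selection logic.

```python
from collections import Counter
from math import sqrt

# Hypothetical few-shot examples pairing questions with SQL queries (step 3).
examples = [
    {"input": "How many employees are there?",
     "query": "SELECT COUNT(*) FROM employees;"},
    {"input": "What is the average salary?",
     "query": "SELECT AVG(salary) FROM employees;"},
    {"input": "List all table names.",
     "query": "SELECT name FROM sqlite_master WHERE type='table';"},
]

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_examples(question, k=1):
    """Step 4 in miniature: rank stored examples by similarity to the
    question and keep the top-k for the prompt."""
    scored = sorted(examples,
                    key=lambda ex: cosine(embed(question), embed(ex["input"])),
                    reverse=True)
    return scored[:k]

best = select_examples("How many employees do we have?")[0]
print(best["query"])  # SELECT COUNT(*) FROM employees;
```

Swapping the toy `embed` for a real embedding model plus a vector store gives the dynamic behavior the video demonstrates: each new question pulls in different examples.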
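Steps 6 through 8 then wire the selected examples into a prompt that drives the agent. A rough sketch of the prompt assembly is below; the prefix/suffix wording and the example list are illustrative, not the notebook's exact text. In LangChain this corresponds roughly to a few-shot prompt template feeding a ReAct agent run by an agent executor.

```python
# Hypothetical examples as they would arrive from the selection step.
selected = [
    {"input": "How many employees are there?",
     "query": "SELECT COUNT(*) FROM employees;"},
]

# Illustrative prompt scaffolding (step 6); the real prompt also lists
# the custom SQL tools the agent may call.
prefix = ("You are an agent that answers questions about a SQL database.\n"
          "Here are some example question/query pairs:\n")
suffix = "\nQuestion: {question}\nThought:"

def build_prompt(question, examples):
    """Format the dynamically selected examples between prefix and suffix,
    ending at 'Thought:' where the ReAct loop (steps 7-8) takes over."""
    body = "\n".join(
        f"Question: {ex['input']}\nSQLQuery: {ex['query']}" for ex in examples
    )
    return prefix + body + suffix.format(question=question)

prompt = build_prompt("How many employees do we have?", selected)
print(prompt)
```

The agent executor's job is to repeatedly send this prompt to the model, parse the Thought/Action output, run the chosen tool, and append the result until a final answer is produced.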
Code Link : github.com/TheAILearner/Langc...
Ollama Github - github.com/ollama/ollama
SQL Agent with Llama 3(With Ollama Installation in Local) - • Build an SQL Agent wit...
#dynamicfewshotprompting #sqlagent #llama3 #langchain #ollama #customtools #customagent #fewshotprompting #sql #database #machinelearning #nlp - Science & Technology
very nicely explained. You helped me a lot, thank you!
Thanks for such nice tutorial on complex topic
It would be great if this also worked with the cloud.
By the way, thanks for the good AI work!
Great!!!! Thanks for sharing your knowledge! However, I want to ask: isn't the prompt too long for Llama 3's context window?
Not at all. Llama 3 has a context length of 8,192 tokens, while the prompt shown in the video runs only 450 to 500 tokens.
How can I run it in Colab instead of a local environment?
Can you also provide us with the source code?
You can check out this video to run Ollama-based models on Google Colab, after which the dynamic few-shot prompting steps can be easily implemented.
th-cam.com/video/XDvTt_TOewU/w-d-xo.html?si=RkjXX-jO3VSA08Em
Code Link : github.com/TheAILearner/Langchain-Agents/blob/main/Dynamic%20Few-shot%20Prompting%20with%20Llama%203.ipynb