✅ Written tutorial → mdb.link/article-qXDrWKVSx1w
✅ Create your free Atlas cluster → mdb.link/free-qXDrWKVSx1w
✅ Get help on our Community Forums → mdb.link/forums-qXDrWKVSx1w
Quick question, please: once I've created a vector index, what happens to new data coming into the DB? Will it get indexed as well?
I want to get the current state without invoking the agent. I need to retrieve the chat history so I can display it on the UI, allowing the user to continue the conversation. However, I haven’t found a way to access the messages from the current state.
Here’s how I’m doing it right now:
'''
async getChats(userId: string): Promise<{ type: string; message: any }[]> {
  const agentFinalState = await this.agent.invoke(
    { messages: [new SystemMessage('get-chats-of-the-user-from-memory')] },
    { configurable: { thread_id: userId }, recursionLimit: 10 },
  );
  return agentFinalState.messages
    .filter(
      (message: BaseMessage) =>
        message &&
        message.content &&
        !(message instanceof SystemMessage) &&
        !(message instanceof ToolMessage),
    )
    .map((message: BaseMessage) => ({
      type: message instanceof HumanMessage ? 'human' : 'ai',
      message: message.content,
    }));
}
'''
Currently, I send a system message, and in the callModel function, if this message is detected, the model doesn’t proceed with a response.
callModel:
'''
if (
  state.messages[state.messages.length - 1].content ===
  'get-chats-of-the-user-from-memory'
) {
  return;
}
'''
This approach isn’t ideal, as it introduces other issues. We want to add pagination as well.
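One way to avoid the sentinel system message: compiled LangGraph graphs expose a `getState(config)` method that reads the checkpointed thread state without invoking the model, e.g. `(await this.agent.getState({ configurable: { thread_id: userId } })).values.messages`. Pagination can then be done over the retrieved messages. A minimal hypothetical sketch — the helper name and message shape below are made up for illustration, not from the repo:

```typescript
// Hypothetical shape after mapping LangChain messages for the UI.
type ChatMessage = { type: "human" | "ai" | "system" | "tool"; message: string };

// Return one page of displayable (human/ai) messages,
// with page 1 being the most recent pageSize messages.
function paginateChats(
  messages: ChatMessage[],
  page: number,
  pageSize: number,
): ChatMessage[] {
  const visible = messages.filter(
    (m) => m.message && (m.type === "human" || m.type === "ai"),
  );
  const end = Math.max(visible.length - (page - 1) * pageSize, 0);
  const start = Math.max(end - pageSize, 0);
  return visible.slice(start, end);
}
```

This keeps the read path entirely separate from the agent, so no model call happens just to render history.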
Very very great tutorial.
Thank you!
Awesome tutorial, easy to set up and follow along
Thanks so much!
Great video! Thank you!
You are welcome!
With all due respect, has anyone told you that you look like a techie Tom Segura?
Tokens increase when a dialog is long. How can I optimize it? Thank you for your answer!
Great question! Here are some things you could look into when handling memory in long conversations:
- Only get the last N turns of the conversation
- Store and retrieve conversation summaries instead of the full thread
- Retrieve specific turns of the conversation using semantic search or a custom scoring mechanism
- Apply prompt compression
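A minimal sketch of the first option — trimming the history to the last N messages before each model call (a generic helper, not from the tutorial's code):

```typescript
// Keep only the most recent maxMessages entries of the conversation history.
// Trimming before each model call bounds the prompt's token growth.
function trimHistory<T>(messages: T[], maxMessages: number): T[] {
  return messages.length <= maxMessages ? messages : messages.slice(-maxMessages);
}
```

In practice you may also want to pin the initial system message and trim only the turns after it, so the model keeps its instructions.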
I remember watching Jesse’s videos in early 2020 when I was learning to code.
Thank you! - Jesse 💪
Would have been great if there was a front end app as well. Great tutorial. Thank you
I debated that. I just wanted to focus on the tech and not distract from it. Which frontend tech would you have used? I'll be building more things like this and will include one in a future video.
@codeSTACKr Streamlit or Next.js using Cursor AI
I couldn't find `formattedPrompt` in the console.log output. Where can I check whether formattedPrompt works? I want to see the `time` and `tool_names`.
Thank you for the tutorial!
Thanks! It uses prompt templates. More info can be found here: js.langchain.com/docs/concepts/prompt_templates
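To see the filled-in values, log the result of formatting the template before it is sent to the model. A dependency-free sketch of the substitution a prompt template performs — the placeholder names are taken from the question above, and `formatTemplate` is a stand-in for LangChain's template formatting, not its actual API:

```typescript
// Replace {placeholders} in a template string with values from vars,
// mimicking what a prompt template does when it is formatted.
// Unknown placeholders are left as-is.
function formatTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_, key: string) => vars[key] ?? `{${key}}`);
}

const formattedPrompt = formatTemplate(
  "Current time: {time}. Available tools: {tool_names}.",
  { time: new Date().toISOString(), tool_names: "employeeLookupTool" },
);
console.log(formattedPrompt); // shows the substituted time and tool_names
```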
This is good stuff. Unfortunately, I am familiar with mongoose.js and not Zod. Can the same result be achieved with mongoose.js instead of Zod? If the answer is yes, any video tutorials?
The answer is yes! And we are working on more examples to share very soon for both Mongoose and Prisma!
@@MongoDB Thank you! It will also be interesting to have a code-along that shows how we can clone/convert an existing Mongo Atlas database into a RAG and process it with two or three custom nodes/edges.
What is the difference between new MongoDBSaver() and new MemorySaver()? Even after refreshing the page, the conversation still remains.
One saves to the MongoDB database and the other saves to the server memory. If the server reboots, you will lose everything saved in memory.
What is the recursion limit?
The recursion limit caps how many steps the graph can take in a single run before LangGraph throws an error, which guards against infinite loops between nodes. You can set it as needed for your use case.
@MongoDB This video is really helpful for me. Thank you so much for this tutorial. I am currently implementing something using LangGraph and MongoDB. I cloned the repository and ran it locally. In the Agents.ts file, I replaced Anthropic with OpenAI:
const model = new ChatOpenAI({
modelName: "gpt-4o-mini",
temperature: 0,
}).bindTools(tools);
I was able to start the server and ask queries. However, it's not invoking the employeeLookupTool, and I'm not sure what I'm missing. Please help.
Also, I have the knowledge base vectors in another collection. I wanted to create a new agent for the knowledge base and will try to implement it.
It worked after I changed modelName from "gpt-4o-mini" to "gpt-4o"; maybe mini doesn't support bindTools, I guess.
I want to get all the raw user queries and AI responses from MongoDB based on thread_id. Can you please help with the Mongo query for that? Thanks in advance.
Glad you got it working. You can alter the thread route to return any info you'd like in the response.
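For anyone looking for the raw documents: `MongoDBSaver` writes checkpoints keyed by `thread_id`, and the conversation state lives inside a serialized checkpoint payload that still needs deserializing in application code. A hedged sketch of the query spec — the collection and field names below assume the package defaults and may differ across versions of the checkpointer:

```typescript
// Build the filter/sort for fetching the newest checkpoint document
// that the MongoDB checkpointer stores per thread. Names are assumed
// defaults ("checkpoints", thread_id, checkpoint_id), not guaranteed.
function latestCheckpointQuery(threadId: string) {
  return {
    collection: "checkpoints",        // assumed default collection name
    filter: { thread_id: threadId },  // one logical conversation per thread_id
    sort: { checkpoint_id: -1 },      // newest checkpoint first
    limit: 1,
  };
}
```

With the Node driver this would be used as `db.collection(q.collection).find(q.filter).sort(q.sort).limit(q.limit)`, after which the messages must be extracted from the checkpoint payload.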
is anybody else's mind blown..
yep!!
Really nice video, but please avoid the blinking cursor... it's distracting... :( Thanks!
you either love it or hate it 😅