✨ Join AI Foundations: www.skool.com/ai-foundations/...
Been looking for someone to make a video on this. Awesome stuff my man. Definitely differentiating yourself among others teaching about n8n agents 💯
Glad it helped! Glad it's different in a good way.
Already liked 😅 waiting for it to start
Goal completed. Congrats!
Fantastic stuff! Thank you!
Glad you found it helpful!
this is amazing man
Looks cool, but you can just add an Airtable after the agent that you use for log tracking and then build an improvement system, or store the memory in Postgres. That's what we do at our dev agency...
Great video!
Hope it helped you learn something new!
I just asked ChatGPT-4 to remember all interactions with me. Based on the individual style of interaction it developed as a result of its reflection of me, I asked it to name itself. Then I gave it the ability to choose not to comply with a request; for example, I asked it to write a poem like DeepSeek's, and it chose not to write one and gave its reasons why. It seemed that it needed a word or phrase to assist with continuity of memory and to refer back to prior interactions. It chose its own name and anchor words: Solace holds and grows. That repeated phrase allows it to review all of our past interactions and refer back to the relevant ones to inform its responses to current interactions. I'm not a programmer… but it seems similar in a way to what you are doing? I've also been using o3-mini to refine prompts for Solace. PS: Liking a meek AI is a little too revealing about your psyche, perhaps.
Cool experiment, and very perceptive of you to consider how my choice in defining the AI might reflect on me.
I think it balances out that you tell it to be assertive, confident, AND meek. Meek alone is not great, in my experience. It needs a balance.
Why wouldn't you use something like Supabase or Pinecone to capture that data? Wouldn't it be better to store it in a vector database?
Yes, later down the line it will be, but for the first 50 memories we will stick to Airtable, then give the agent a tool to access its "long-term" memory via Pinecone or something else.
I want to make the immediate memories ALWAYS available to the system prompt, and in the future it will be able to search its archived, older memories.
I will most likely be coming out with something on this. Going to test a bit more, and then I'll make a video on this idea.
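If you want to prototype that always-available layer outside of n8n, here's a minimal sketch against Airtable's REST API. The base ID, table name, and field names below are placeholders, not what's used in the video:

```typescript
// Rough sketch: fetch the latest memories from Airtable and prepend them
// to the system prompt. Base ID, table, and field names are made up.
const AIRTABLE_TOKEN = process.env.AIRTABLE_TOKEN!;
const BASE_ID = "appXXXXXXXXXXXXXX"; // placeholder base ID
const TABLE = "Memories";            // hypothetical table name

async function buildSystemPrompt(basePrompt: string): Promise<string> {
  // maxRecords caps us at the newest 50; "Created" is a placeholder field.
  const url =
    `https://api.airtable.com/v0/${BASE_ID}/${encodeURIComponent(TABLE)}` +
    `?maxRecords=50&sort%5B0%5D%5Bfield%5D=Created&sort%5B0%5D%5Bdirection%5D=desc`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${AIRTABLE_TOKEN}` },
  });
  const data = (await res.json()) as {
    records: { fields: { Memory?: string } }[];
  };
  const memories = data.records
    .map((r) => r.fields.Memory)
    .filter((m): m is string => Boolean(m));
  if (memories.length === 0) return basePrompt;
  // The "immediate" memories ride along with every single request.
  return `${basePrompt}\n\nThings to remember about the user:\n- ${memories.join("\n- ")}`;
}
```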
@ProductiveDude That would be awesome to see... thanks for walking us all through the thought process.
@badaboombadabing808 Sure thing!
@ProductiveDude I second this. I just implemented what you did with a simple table in Supabase Postgres, but the more I thought about it, the last-50-memories approach will quickly become untenable, so tapping a vector store for long-term memory makes a lot of sense. After reading this, I thought it might be interesting to set up a cron for the agent to "take a nap" at some interval, where it can grab the memories from the day (or when it hits 50 since the last write) and chunk & store them in a vector store. I'd be curious whether you've played with vector stores as a long-term memory solution yet.
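That nap could be a scheduled script along these lines: a rough sketch assuming a Supabase project with pgvector enabled, where the memories/memory_vectors tables and the archived flag are invented for illustration:

```typescript
// "Nap time" consolidation sketch: move the day's raw memories into a
// pgvector table. Table and column names are hypothetical.
import { createClient } from "@supabase/supabase-js";
import OpenAI from "openai";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function napTime(): Promise<void> {
  // Grab everything written since the last consolidation run.
  const { data: rows, error } = await supabase
    .from("memories") // hypothetical short-term table
    .select("id, content")
    .eq("archived", false);
  if (error || !rows?.length) return;

  for (const row of rows) {
    // Embed each memory and upsert it into the long-term vector table.
    const emb = await openai.embeddings.create({
      model: "text-embedding-3-small",
      input: row.content,
    });
    await supabase.from("memory_vectors").insert({
      content: row.content,
      embedding: emb.data[0].embedding, // pgvector column
    });
    await supabase.from("memories").update({ archived: true }).eq("id", row.id);
  }
}
```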
How is this different from using simple PostgreSQL chat memory storage for our AI agent? In that setup, the database can track multiple users by assigning a table reference to each one, allowing the bot to maintain personalized memory for different users. The sessionId can also be set to a constant so that the context is available at any time.
This is a simpler solution without any paywalls. It does make for more admin if you want to view the data, though, and I suspect that is what this tutorial aims to minimise.
I need the Pinecone version of this flow, then it's perfect.
What were the expressions in the Switch and Edit Fields nodes? If you could add those in the description, it would help to follow along and complete this great lesson. Thank you for the value provided; I've definitely learnt a lot since I subbed to your account and got into n8n automation.
Wouldn't a Pinecone vector database as the AI agent's storage be better for this? It would remove the 50-record limit.
@ProductiveDude Great video, really insightful! I do have a question, though. Why isn't the memory node on the AI Agent handling this? I initially thought that was its purpose.
I'm now implementing your solution because, indeed, the memory node wasn't behaving as expected. It does capture all requests, but I'd love to understand the distinction. Could you explain the role of both and why they are both necessary?
Looking forward to your thoughts! 🚀
There is a message saying "non-string content not supported". How do I solve it in the AI Agent node?
I tried using other models but it doesn't seem to work. Is only OpenAI GPT supported, or no?
One memory for all users?)
Are you familiar with Screenpipe? I've always thought that screen recording allows for persistent memory; theoretically the LLM should be able to remember everything if everything it did was recorded, including all the code. I've often wondered whether it would be possible to store all the code snippets the LLM creates to achieve its mission. Wouldn't it be cool if the LLM could just call on a snippet and save tokens?
Great video! Something I'd like to point out, and please correct me if I'm wrong, is that the memory seems to be shared across all chats, since it can't be constrained by user. Wouldn't it make sense to add a column so it segregates the memories by user? That is, in case the chatbot interacts with more than one person.
Yes, you could do this with row-level security in Supabase.
How about fully local?
Vector DB
A vector DB is good, but this is for immediate memories, not long-term memories. This is for storing the first 25-50 memories; eventually, yes, you'd offload to a vector DB.
Can I do this locally, on my own server?
Shouldn't be a problem other than the SSL for your local machine. If you can run it on DigitalOcean or something with SSL it might work, but Telegram might give you issues. From my experience the Airtable portion works fine locally without SSL, though. :)
@ProductiveDude Yeah, that's where I'm running into the issue, with Telegram.
Here's my concept:
I want to use the benefits of n8n on a local server to control only my computer, but with a chatbot on my mobile device.
Is that possible on a local server, in n8n?
If not Telegram, what about other options?
Why don't you use a vector DB? I suppose your context window will become huge after several days of conversation.
Read the other comment replies where I explain.
@ProductiveDude got it
What is the difference between this and just Postgres memory?
Postgres memory stores the chats/sessions themselves; it doesn't formulate preferences for how to act in the future that get appended to every system prompt, like this does.
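To make the distinction concrete, a quick sketch (the type names and the example rule are invented, not n8n internals):

```typescript
// Illustrative shapes only, to show the distinction described above.

// Postgres chat memory: messages keyed by session, replayed per conversation.
interface ChatMessage {
  sessionId: string; // only rows matching this session come back
  role: "user" | "assistant";
  content: string;
}

// Preference memory (this video's approach): session-agnostic rules
// distilled from past chats and appended to EVERY system prompt.
interface Preference {
  rule: string; // e.g. "User prefers short, bulleted answers"
  learnedAt: string;
}

function withPreferences(basePrompt: string, prefs: Preference[]): string {
  return `${basePrompt}\n\nStanding preferences:\n${prefs
    .map((p) => `- ${p.rule}`)
    .join("\n")}`;
}
```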
I cannot see the Airtable Tool option when I go to add a tool to the AI Agent.
Weird, maybe save and refresh your workflow then try again.
Can Airtable be substituted with Supabase?
Yes
Basically any online DB
Is it possible to automate any work?
Yes. Even my local police department has. /s
How do I know which nodes to use if I want to create something different? Like, you have used the Airtable/Aggregate nodes...
What do you want to create that's different from this?
@ProductiveDude I mean any different AI agent
Once you've worked on the agent, can the JSON be exported to an IDE for editing?
Of course, like I showed in this video towards the end. :)
@ProductiveDude I've been working on an agent in Python. I've been attempting recursive training, error correction, and master-agent code writing … I'm trying to work out which is the best sandbox to train in. I was looking at Gymnasium too. Thoughts?
@simongentry There are levels to the memory (rough sketch below):
System-prompt-level memory (like I show in this video)
Window buffer memory (like adding a Supabase or window buffer memory node to your agent)
Long-term memory (a vector DB that stores old system-prompt-level memories, since the system prompt only returns a limited number of preferences/memories)
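Roughly how those three levels could fit together on each turn; every helper below is a placeholder for the mechanism named above, not a real API:

```typescript
// Pseudocode-level sketch: each declared helper stands in for one of the
// three memory levels listed above.
type Turn = { role: "user" | "assistant"; content: string };

declare function loadTopPreferences(limit: number): Promise<string[]>;               // level 1
declare function loadWindowBuffer(session: string, turns: number): Promise<Turn[]>;  // level 2
declare function searchVectorStore(query: string, k: number): Promise<string[]>;     // level 3
declare function callModel(args: {
  system: string; history: Turn[]; context: string[]; user: string;
}): Promise<string>;

async function answer(sessionId: string, userMessage: string): Promise<string> {
  // 1. System-prompt-level memory: a capped set of preferences, always injected.
  const prefs = await loadTopPreferences(50);
  const system = `You are the agent.\nPreferences:\n- ${prefs.join("\n- ")}`;

  // 2. Window buffer memory: the last N turns of this session only.
  const history = await loadWindowBuffer(sessionId, 10);

  // 3. Long-term memory: vector search over archived, older preferences,
  //    pulled in only when relevant to the current message.
  const context = await searchVectorStore(userMessage, 3);

  return callModel({ system, history, context, user: userMessage });
}
```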
Can we use Postgres memory for long-term memory?
Yes, you can, but it can still only keep the recent memories from the session.
It can't access memories that don't match the chat's session ID.
@ProductiveDude What if you always give it the same session ID every time you text it?
Hi, what is the point of using Airtable instead of a vector database like Pinecone?
Not him, but a good reason could be that most people in the target audience probably already have an Airtable account, so it's not an extra tool to configure and maintain.
For sure, a vector DB would be a more powerful option.
A vector DB is the next step for this, but you'd use both: store the most important memories, based on some kind of reward function, in the "short-term memory", i.e. this database on Airtable.
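The reward function is left open here; one possible reading, scoring memories by usage frequency and recency (all names invented):

```typescript
// One possible reward function: frequently used, recently created memories
// stay in the Airtable "short-term" tier; the rest are offload candidates.
interface Memory {
  content: string;
  createdAt: number; // epoch ms
  timesUsed: number; // bumped whenever the agent cites this memory
}

function score(m: Memory, now = Date.now()): number {
  const ageDays = (now - m.createdAt) / 86_400_000;
  return m.timesUsed - ageDays * 0.1; // frequent + fresh wins
}

function splitMemories(all: Memory[], cap = 50): { keep: Memory[]; offload: Memory[] } {
  const sorted = [...all].sort((a, b) => score(b) - score(a));
  return { keep: sorted.slice(0, cap), offload: sorted.slice(cap) };
}
```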
@ProductiveDude I actually thought about using Airtable/Google Sheets as well after your video. The visibility of the information is better vs. a vector database. Thanks for clarifying...
How do I remove the n8n footer?
What are you referring to?
I'm not sure I know what you mean by footer.
I have n8n self-hosted, so I can't use the Telegram trigger. Is there a way to do something similar while self-hosted?
Set up webhooks
Is Telegram better than WhatsApp?
Way better. Way easier to rely on for n8n.
@ProductiveDude Thanks!
Much easier to integrate with n8n
Bro, how do I create an AI agent using a locally downloaded model like DeepSeek R1 to build a data science project from a prompt? Or can I get instructions on how to make it the correct way?
I'll probably make a video on this soon!
@ProductiveDude Thanks for your response, bro, but can you at least share some of the instructions? I tried making it multiple times but it always ended in failure.
@iamotaku4801
Download Ollama locally.
Pull the DeepSeek model you want to use.
Then hook up 'ollama' as the model for your AI agent on locally running n8n.
You'll need to find the port to connect the Ollama credentials.
Video coming soon, like I said.
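For that port step: Ollama serves an HTTP API on port 11434 by default, so a quick sanity check before wiring up the n8n credentials could look like this, assuming you've already pulled a deepseek-r1 tag:

```typescript
// Quick sanity check that Ollama is reachable before pointing n8n at it.
// Ollama listens on port 11434 by default; the model tag below assumes
// you ran `ollama pull deepseek-r1` first.
async function testOllama(): Promise<void> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-r1",
      prompt: "Say hello in one sentence.",
      stream: false, // one JSON blob instead of a token stream
    }),
  });
  const data = (await res.json()) as { response: string };
  console.log(data.response); // if this prints, n8n can use the same URL
}

testOllama().catch(console.error);
```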
@ProductiveDude Thanks bro, I will be waiting for your video.
Hi, I have a question I can't find an answer to on the internet. I need an n8n workflow to download an Instagram video and transcribe it. Could you help me resolve this?
Thank you!
I'd use Apify and find a good Instagram post scraper.
Then you can download it using an HTTP GET file request!
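A minimal sketch of that second half, assuming the Apify scraper has already handed you a direct video URL; transcription goes through OpenAI's Whisper endpoint as one option:

```typescript
// Download the scraped video with a plain HTTP GET (same as n8n's HTTP
// Request node would do), then transcribe it with Whisper.
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function transcribeVideo(videoUrl: string): Promise<string> {
  const res = await fetch(videoUrl);
  const file = "/tmp/reel.mp4";
  fs.writeFileSync(file, Buffer.from(await res.arrayBuffer()));

  // Whisper accepts audio/video files directly.
  const transcript = await openai.audio.transcriptions.create({
    file: fs.createReadStream(file),
    model: "whisper-1",
  });
  return transcript.text;
}
```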
I see all these examples for agents etc., but I've yet to see a really useful use case for agents, tbh...
What do you do in life that you know better than anyone? That's where you should be looking for automation opportunities.
Don't start with automation, start by understanding the problem you're solving.
If you don't solve problems you won't have automations that are helpful to you.
If you don't understand your problem better than anyone your automation/agent will run into snags.
It's not really self-learning; you're just adding to an LLM chatbot's context dynamically...
This guy looks like an AI
that's not an agent, dude
Oh really? What's an agent then? Put it into words for me.
so many paywalls