I think there should be a microphone in the middle of the audience. You had an insightful discussion with them, amazing audience!
Great idea!
This is so valuable! The idea of embedding enrichment with summarizations will be very useful to one of my projects. Thanks a lot for sharing! Congrats from Brazil!
Glad you found it useful
Wonderful narration
Thanks for sharing these meetings!
Hope you found it useful
you're a great teacher!
Thanks! Let me know if you have any feedback
Maybe S tier would be using reranking (Jina) to also select memories based on the user's intent.
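For anyone curious what that could look like: a minimal sketch of reranking retrieved memories against the user's intent before injecting them into the prompt. The `rerank_score` function here is a crude word-overlap stand-in for illustration only; a real setup would call an actual cross-encoder reranker (e.g., Jina's reranker API) instead.

```python
def rerank_score(query: str, memory: str) -> float:
    # Placeholder scorer: word overlap between query and memory.
    # NOT a real reranker -- swap in a cross-encoder here.
    q_words = set(query.lower().split())
    m_words = set(memory.lower().split())
    return len(q_words & m_words) / max(len(q_words), 1)

def select_memories(query: str, memories: list[str], top_k: int = 2) -> list[str]:
    # Sort memories by relevance to the user's intent, keep the top_k.
    ranked = sorted(memories, key=lambda m: rerank_score(query, m), reverse=True)
    return ranked[:top_k]

memories = [
    "The user likes hiking",
    "The user dislikes cold weather",
    "The user likes jazz music",
]
print(select_memories("what outdoor hiking gear should I buy", memories))
```

The point is just that the retrieval step gets a second, intent-aware pass instead of relying on raw vector similarity alone.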
Can anyone share the Anthropic system prompt difference? I can't find it anywhere
The camera swivel is an L when you get excited / nervous
Yeah it’s a bit too sensitive
Great video!! Thanks for sharing the code! One question though:
Inside the A-tier code's "print_ai_answer" function, you wrote:
for like in extracted_personality["likes"]:
    text_to_embed = f"The user likes {like}"
    current_embeddings = embedding_client.embed_query(text_to_embed)
    dislike_with_metadata = {
        "id": str(uuid.uuid4()),
        "values": current_embeddings,
        "metadata": {"type": "likes", "content": like}
    }
    embeddings.append(dislike_with_metadata)
Was it not supposed to be something like "likes_with_metadata = {...}" and then "embeddings.append(likes_with_metadata)"?
I guess reusing "dislike_with_metadata" doesn't change the code's functionality, but it made the code a bit confusing to follow for a moment.
Thanks!
Good catch!
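For anyone copying this, here's a corrected, self-contained sketch of that loop with the variable renamed. The `embed_query` stub and the sample `extracted_personality` data are placeholders for illustration; in the video's code they come from the real `embedding_client` and the extraction step.

```python
import uuid

# Stub for illustration only -- replace with the real
# embedding_client.embed_query from the video's code.
def embed_query(text: str) -> list[float]:
    return [float(len(text))]  # placeholder vector

# Example data standing in for the real extraction output.
extracted_personality = {"likes": ["hiking", "jazz"]}
embeddings = []

for like in extracted_personality["likes"]:
    text_to_embed = f"The user likes {like}"
    current_embeddings = embed_query(text_to_embed)
    like_with_metadata = {  # renamed from dislike_with_metadata
        "id": str(uuid.uuid4()),
        "values": current_embeddings,
        "metadata": {"type": "likes", "content": like},
    }
    embeddings.append(like_with_metadata)

print(len(embeddings))  # one record per extracted like
```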
My gut feeling tells me that the only way you can actually have long-term memory without the flaws and scaffolding that always accompany RAG is continuous fine-tuning.
Hmmm, fine-tuning does not guarantee the model will remember certain facts or details; it's more suited for performance tuning. At least in my personal experience, fine-tuning has never been a one-and-done thing and requires substantial experimentation, unlike RAG.
Yeah, the hype around RAG makes us believe so. The thing is, what made models learn their initial set of knowledge is training. We just haven't yet discovered good enough recipes for cheap and effective training.
Is the camera automatically tracking your face? I'm new to machine learning.
Yeah, it's the DJI Pocket 3
Great stuff man. What’s S-tier though? Maybe I missed it
I forgot to mention the S tier at the end. It's
No, I know why you want long-term memory: because your AI girlfriend doesn't remember your past conversations. She forgets you, that's why.
T.T