I'd like to build a RAG chatbot with Flowise, but my Supabase database is non-vector (user information). Should I vectorize it using Make, as you presented? So whenever a row is inserted into the non-vector Supabase table, an OpenAI embedding model converts it into another (vector) table in my Supabase?
But why are there two embedding models? Are the OpenAI Embeddings in Flowise and Make different?
For Flowise to retrieve data from your Supabase table, you will need these columns in Supabase:
metadata (JSONB)
embedding (vector; make sure the vector size matches the output dimension of the OpenAI embedding model you use)
Here is a relevant post about this: supabase.com/blog/openai-embeddings-postgres-vector
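As a rough sketch of what a row in that vector table needs to look like: here is a small Python helper that builds and validates a row before insertion. The column names (content, metadata, embedding) follow the Supabase blog post above; the 1536 dimension is the output size of OpenAI's text-embedding-ada-002 model, so adjust it if you configured a different model.

```python
import json

# Output dimension of OpenAI's text-embedding-ada-002 model.
# If you use a different embedding model, change this to its dimension.
EMBEDDING_DIM = 1536

def build_row(content: str, metadata: dict, embedding: list[float]) -> dict:
    """Build a row for the Supabase vector table.

    Raises if the embedding size does not match the pgvector column size,
    which is the most common cause of insert/search failures.
    """
    if len(embedding) != EMBEDDING_DIM:
        raise ValueError(
            f"embedding has {len(embedding)} dimensions, expected {EMBEDDING_DIM}"
        )
    return {
        "content": content,                # the raw text from your source row
        "metadata": json.dumps(metadata),  # stored in the JSONB column
        "embedding": embedding,            # stored in the vector column
    }
```

If the dimension check fails, it usually means the model used in Make and the pgvector column size in Supabase got out of sync.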
The reason why there are two embeddings is this:
1) In Flowise, the embedding is the first node: it turns the user's message into an embedding, which can then be searched against in Supabase
2) In Make, the OpenAI embedding API call turns the data in the row into an embedding that is stored in Supabase, so Flowise can search it to find relevant answers
Both Flowise and Make use the same embedding model (and therefore the same vector size)
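To illustrate why the two calls are not two different models: both Flowise (at query time) and Make (at insert time) send the same kind of request to OpenAI's /v1/embeddings endpoint, just with different input text. A minimal sketch, assuming text-embedding-ada-002 as the shared model name (use whichever model you configured, as long as it is the same in both tools):

```python
# Both Flowise (embedding the user's question) and Make (embedding a row)
# send a request of this shape to POST https://api.openai.com/v1/embeddings.
MODEL = "text-embedding-ada-002"  # example; must match in both tools

def embedding_request(text: str, model: str = MODEL) -> dict:
    """Payload for OpenAI's embeddings endpoint."""
    return {"model": model, "input": text}

# Flowise side: embed the user's chat message at query time.
query_payload = embedding_request("What plan is Jane on?")

# Make side: embed the Supabase row content at insert time.
row_payload = embedding_request("Jane Doe, premium plan")

# Same model => vectors of the same size in the same vector space,
# so similarity search in Supabase compares like with like.
assert query_payload["model"] == row_payload["model"]
```

If the two tools were configured with different models, the stored vectors and the query vector would live in different spaces (possibly even different sizes), and similarity search would return garbage or errors.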
Nice video. Is it possible to do the same using n8n instead of Make?
@@ObiEzeilo Yes, that is possible. Same concepts (APIs), different interface
@@simosme Great. Can you do a video of it using n8n (if it's not too much to ask)? I'm struggling with it.
@@Obinna-ai Got it, I am adding this to my list of videos - it will be some time until I post it. I appreciate you suggesting it!
Great videos, thanks!
I've watched your video 10 times! But if your vector only contains content, can you not chat about other data, like URLs?
The embedding can also contain URLs and other data types
@@simosme I work with n8n, and I cannot get this working. Do you use n8n?
@@nusquama Ah sorry about that - Yes I use n8n