What PC config is needed?
Thank you for this material! Although I don't fully grasp everything yet, it has opened up some great possibilities for me. Could you please recommend how to connect to this system from external applications (which currently do so through Ollama's REST API)? I understand that this method bypasses the embeddings...
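(For context, a minimal sketch of one way an external application could do this: keep using Ollama's REST API to embed the incoming question, then run a pgvector similarity search against the table pgai maintains in Postgres. The table and column names, connection string, and model here are illustrative assumptions, not the exact schema from the video.)

```python
# Sketch only: external app asks Ollama for a query embedding over its REST API,
# then searches the Postgres embeddings table directly. Names are illustrative.
import requests
import psycopg

def embed(text: str) -> list[float]:
    # Ollama's embeddings endpoint; adjust host and model to your setup.
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

def search(question: str, limit: int = 5) -> list[str]:
    query_vec = embed(question)
    # pgvector accepts a '[x,y,z]' text literal cast to vector.
    vec_literal = "[" + ",".join(map(str, query_vec)) + "]"
    with psycopg.connect("postgresql://postgres:password@localhost:5432/postgres") as conn:
        rows = conn.execute(
            # <=> is pgvector's cosine-distance operator; smaller is closer.
            "SELECT chunk FROM blog_embeddings "
            "ORDER BY embedding <=> %s::vector LIMIT %s",
            (vec_literal, limit),
        ).fetchall()
    return [r[0] for r in rows]

if __name__ == "__main__":
    for chunk in search("How do I enable the pgai extension?"):
        print(chunk)
```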
Hi! Thanks for the informative video on using Ollama and Postgres for AI applications. I'm particularly interested in the potential for managing multiple embedding models within the pgai Vectorizer framework.
My question is: Given that PostgreSQL allows for the management of different embedding models, would it be feasible to implement a key system within the embedding table to identify the specific model used to generate each embedding?
For instance, could an additional column in the embedding table store the model name or ID, enabling filtering of embeddings by the desired model during queries?
Would this approach be viable and efficient within the pgai Vectorizer environment? What are the potential advantages and disadvantages in terms of database management and query performance? (A rough sketch of the idea follows below.)
Thanks in advance for your expertise!
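(A rough sketch of what that column-per-model idea might look like, assuming a hand-rolled embedding table; the table, column, and model names here are made up for illustration and are not pgai's generated schema.)

```python
# Illustrative sketch: tag each embedding row with the model that produced it
# and filter on that tag at query time. Not an official pgai feature.
import psycopg

DDL = """
ALTER TABLE document_embeddings
    ADD COLUMN IF NOT EXISTS embedding_model text NOT NULL DEFAULT 'nomic-embed-text';
CREATE INDEX IF NOT EXISTS idx_embeddings_model
    ON document_embeddings (embedding_model);
"""

QUERY = """
SELECT chunk
FROM document_embeddings
WHERE embedding_model = %(model)s          -- only compare vectors from one model
ORDER BY embedding <=> %(vec)s::vector     -- pgvector cosine distance
LIMIT 5;
"""

def search(conn: psycopg.Connection, query_vec: list[float], model: str):
    vec_literal = "[" + ",".join(map(str, query_vec)) + "]"
    return conn.execute(QUERY, {"model": model, "vec": vec_literal}).fetchall()
```

One practical caveat: different models usually produce vectors of different dimensions, and a single `vector(n)` column is fixed to one dimension, so mixing models in one table only works if they match; otherwise one table (or one vectorizer) per model, possibly with a partial index per model, tends to be simpler and keeps ANN index performance predictable.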
It's "free" but local hardware isn't actually free. You need a PC with a good graphics card and a lot of RAM to do anything useful in a reasonable amount of time.
db viewer name?
Thank you for listening to my request on GitHub! Hope they add the pgai extension to Supabase soon!
Thank you! Let us know if you have more feature requests
@TimescaleDB Making it PostgreSQL 15-ready would be a game changer, as right now it can't be used with Supabase, which is what I want. I have a YouTube channel where I talk about Supabase a lot, and I already use the extension for big-data PostgreSQL, but I'd also like to make a tutorial for Supabase.
Thank you for your time making this great tutorial, much appreciated. 🙇♂
Fingers crossed you get a nice beefy RTX machine soon to play with the bigger models too. 💻⌨🖱