Local RAG using Llama 3, Ollama, and PostgreSQL

  • Published Oct 13, 2024

Comments • 6

  • @BertrandDunogier 18 minutes ago

    Thank you, clear and straightforward.

  • @awakenwithoutcoffee 4 days ago +2

Great, looking forward to learning how to utilize Timescale DB for our AI start-up!

    • @TimescaleDB 4 days ago +1

      Awesome! That's great to hear. Please let us know how it goes, or if you have any questions. 😁

  • @nanomartin 1 day ago

    Quite useful tutorial! Thanks for bringing it.
    A few questions that come to my mind:
    1- How could I get rid of the proprietary image in Docker? I bet it is possible to take our existing PG instance, drop in the necessary extensions, and it should work as well, but I just want to confirm.
    2- It looks easy to delegate to the PG extension to communicate with Ollama and get the embeddings, but I see too many round trips in that approach. For a programmatic system that would have to do tens of thousands of runs a day, how would this perform? Is there a more "straight" way to pull embeddings from Ollama and query PG?
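    One possible direct route the question hints at: call Ollama's `/api/embeddings` HTTP endpoint from application code and write the vectors to Postgres yourself, batching inserts to cut round trips. The sketch below assumes a local Ollama instance on its default port and a pgvector-style `vector` column; table and column names are hypothetical.

    ```python
    # Hedged sketch: fetch embeddings straight from Ollama's HTTP API,
    # then insert them into Postgres in batches (table/column names assumed).
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/embeddings"  # default local Ollama port

    def get_embedding(text: str, model: str = "llama3") -> list[float]:
        """Call Ollama's /api/embeddings endpoint and return the raw vector."""
        req = urllib.request.Request(
            OLLAMA_URL,
            data=json.dumps({"model": model, "prompt": text}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["embedding"]

    def to_pgvector_literal(vec: list[float]) -> str:
        """Format a Python list as a pgvector text literal, e.g. '[0.1,0.2]'."""
        return "[" + ",".join(repr(float(x)) for x in vec) + "]"

    # Batched write-side: one embeddings call per document, one executemany
    # per batch against Postgres, instead of a round trip per row through
    # an in-database extension. (Connection string and schema are assumptions.)
    #
    # import psycopg
    # with psycopg.connect("dbname=rag") as conn:
    #     rows = [(t, to_pgvector_literal(get_embedding(t))) for t in texts]
    #     conn.cursor().executemany(
    #         "INSERT INTO docs (body, embedding) VALUES (%s, %s::vector)",
    #         rows,
    #     )
    ```

    Whether this beats the in-database extension depends on where the latency actually sits (Ollama inference vs. SQL round trips), so benchmarking both paths at the expected daily volume would be the safer call.
    
    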

  • @matarye9745 5 days ago +2

    Nice tutorial!