Local LLM-Powered Voice Assistant on a single GPU [Llama-3-70B-Instruct]

  • Published Sep 7, 2024
  • The recent GPT-4o audio demo proved that an LLM-powered voice assistant, done right, can be an awe-inspiring experience. Unfortunately, GPT-4o's audio feature is not publicly available. Instead of busy waiting, we will show you how to build a voice assistant of the same caliber right now, in less than 400 lines of Python, using Picovoice's on-device voice AI and local LLM stacks.
    Why use Picovoice's tech stack instead of OpenAI's? Picovoice is on-device, meaning voice processing and LLM inference are performed locally, without the user's data traveling to a third-party API. Products built with Picovoice are private by design, compliant (GDPR, HIPAA, ...), and real-time, with no unreliable network (API) latency.
    Blog: picovoice.ai/b...
    GitHub: github.com/Pic...
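
    The assistant described above boils down to one loop: wait for a wake word, stream speech to an on-device STT engine until it signals an endpoint, answer with a local LLM, and speak the reply. Below is a minimal, hypothetical sketch of that loop. The four callables are stand-ins; the Picovoice engine names in the comments (Porcupine, Cheetah, picoLLM, Orca) are assumptions based on the vendor's product line, so see the linked blog and GitHub repo for the actual APIs.

    ```python
    def assistant_loop(frames, wake, stt, llm, tts):
        """Drive one wake-word -> STT -> LLM -> TTS cycle per utterance.

        frames: iterable of raw audio frames.
        wake(frame) -> bool      # e.g. a Porcupine-style wake-word detector
        stt(frame) -> (str, bool)  # e.g. Cheetah-style streaming STT:
                                   # partial transcript + endpoint flag
        llm(prompt) -> str       # e.g. picoLLM-style local inference
        tts(text) -> None        # e.g. Orca-style text-to-speech
        """
        awake = False
        transcript = ""
        for frame in frames:
            if not awake:
                # Idle: only scan for the wake word; nothing leaves the device.
                awake = wake(frame)
                continue
            # Awake: accumulate the user's utterance until the endpoint.
            text, endpoint = stt(frame)
            transcript += text
            if endpoint:
                reply = llm(transcript)
                tts(reply)
                # Reset and go back to listening for the wake word.
                awake, transcript = False, ""
    ```

    Everything here runs in-process, which is what makes the design private and free of network latency: the only "API calls" are local function calls into the engines.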

Comments • 3

  • @SyamsQbattar • 2 months ago

    Can it be integrated with LM Studio and AnythingLLM?

    • @picovoice • 23 days ago

      yes

    • @SyamsQbattar • 23 days ago

      @picovoice How do I do that?