Build a Multi-LLM Voice Assistant in 12 Minutes with Next.js

  • Published Feb 8, 2025

Comments • 17

  • @stonedizzleful • 1 year ago +1

    Fantastic video. So many features in one package!!

  • @mjohnson9104 • 3 months ago +1

    Wow…this is goal setting.

  • @hellagood9 • 1 year ago +1

    Thanks!!!

  • @Amplify-AI • 8 months ago +1

    For conversational memory, would you recommend storing that in the client state and using context stuffing when sending the prompt?

    • @Amplify-AI • 8 months ago +1

      I was thinking of using the OpenAI Agents API, which supports threads, but each vendor's SDK would be very different and hard to unify. So perhaps it's just easier to use the Vercel AI SDK.

    • @DevelopersDigest • 8 months ago

      @Amplify-AI Good question. The nice thing about the Vercel AI SDK is that you can keep the chat state continuous between the front end and back end, and it is vendor-agnostic. LangChain also has a nice template if you are looking for something in Next.js:
      github.com/langchain-ai/langchain-nextjs-template/
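The "context stuffing" approach raised in this thread can be sketched without any vendor SDK: keep the message history in client state and, before each request, trim the oldest turns so the prompt stays within a budget. This is a minimal illustration (the `stuffContext` helper and its character-count budget are hypothetical stand-ins for real token counting):

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

// Keep system messages, then walk the conversation backwards so the most
// recent turns survive, dropping the oldest once the budget is exceeded.
function stuffContext(history: Message[], budget: number): Message[] {
  const system = history.filter((m) => m.role === "system");
  const turns = history.filter((m) => m.role !== "system");
  const kept: Message[] = [];
  let used = system.reduce((n, m) => n + m.content.length, 0);
  for (let i = turns.length - 1; i >= 0; i--) {
    if (used + turns[i].content.length > budget) break;
    used += turns[i].content.length;
    kept.unshift(turns[i]); // preserve chronological order
  }
  return [...system, ...kept];
}
```

The trimmed array would then be sent as the `messages` payload on every request, which is why the approach works identically across providers.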

  • @phinneasmctipper7518 • 1 year ago +1

    This is so cool!

  • @pineappleeeeeeee2920 • 6 months ago

    Hi, can we integrate OpenAI TTS / Deepgram here instead of webkit?
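Swapping the browser's built-in speech synthesis for a hosted TTS service is straightforward in principle: proxy the text through a Next.js route handler to OpenAI's `/v1/audio/speech` endpoint and stream the audio back. A minimal sketch, assuming the `app/api/tts` route path and the `buildTtsRequest` helper are our own inventions, not part of the video:

```typescript
const TTS_URL = "https://api.openai.com/v1/audio/speech";

// Build the fetch options for OpenAI's text-to-speech endpoint.
function buildTtsRequest(text: string, apiKey: string): RequestInit {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "tts-1", voice: "alloy", input: text }),
  };
}

// app/api/tts/route.ts — the client plays the returned MP3 instead of
// calling window.speechSynthesis.
export async function POST(req: Request) {
  const { text } = await req.json();
  const res = await fetch(TTS_URL, buildTtsRequest(text, process.env.OPENAI_API_KEY!));
  return new Response(res.body, { headers: { "Content-Type": "audio/mpeg" } });
}
```

Deepgram would slot into the same route shape with its own endpoint and auth header; only the request builder changes.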

  • @prashlovessamosa • 1 year ago +1

    Thank you.

  • @Amplify-AI • 8 months ago

    What an amazing demo and valuable collection of resources. Thank you for sharing this.
    Would this be possible to run using the Google Vertex AI SDK for speech-to-text, text-to-speech, and multimodal vision capabilities?

    • @DevelopersDigest • 8 months ago +1

      AI Device: github.com/developersdigest/ai-devices
      Gemini Flash Multimodal example: github.com/developersdigest/gemini-flash-api

  • @mrquicky • 1 year ago +1

    The remote execution of LLMs through a licensed third party in the cloud. Saves electricity and hardware costs at home!