Run your own GitHub Copilot assistant for Free - Local LLM with Tabby

  • Published Nov 25, 2024

Comments • 10

  • @m0rrls • 9 months ago • +2

    Great explanation, good job including the potential problems somebody could encounter (getting CUDA, cloning the repo, even installing git). Audio is great and the same goes for the presentation ;) Would you mind showing how to use these models effectively? Like how to phrase prompts to get the most out of them, or how to provide context from multiple files if that is possible (e.g. writing new tests in an existing test class for a bug fix, or the other way around: writing the code fix after new unit tests were added).

    • @codedeck • 9 months ago

      Thank you so much! Yes, in the future I will make a more detailed video on making use of various models (not with Tabby specifically, but in general). I am still experimenting with different models for querying files, etc.

  • @nurulabrar3561 • 6 months ago

    The way you explained that was nice.

  • @notcorrect • 9 months ago • +2

    I couldn't add new models by cloning; it would crash on the gguf file.
    To fix this, don't clone the repository. Just specify the model you want and it will download it for you.
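    For example, something along these lines should do it (a rough sketch assuming the Docker setup shown in the video; StarCoder-1B is just an illustrative model name, swap in whichever model you want):

        # Don't clone the model repo manually; pass --model and Tabby
        # downloads the weights itself on first start.
        docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
          tabbyml/tabby serve --model StarCoder-1B --device cuda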

  • @MainBeta • 9 months ago • +2

    Thanks for the video. I'm using Ollama + the Continue extension, which talks to the Ollama API endpoint directly. Is there any reason to use TabbyML instead?

    • @codedeck • 9 months ago • +1

      Honestly, it comes down to preference. I just like how Tabby has its own extension and is more streamlined!

    • @АлексейКиреев-н7н • 8 months ago • +1

      Tabby is faster on the same models. No idea why, maybe because of caching.

  • @GeorgeVasil • several months ago

    Is it possible to run this stuff on a CPU (e.g. an i5 8500) instead of CUDA?