mistral 7b dominates llama-2 on node.js

  • Published 7 Nov 2024

Comments • 17

  • @freecelpip • 10 months ago +3

    Part of your screen is not captured in the video. Otherwise extremely helpful. Love this content. Wish for more

  • @romanstingler435 • 1 year ago +2

    Great, a walkthrough in Rust would be nice too.

    • @chrishayuk • 1 year ago +5

      I plan to do a comparison between node-llama, rs-llama, ollama, and vllm in some upcoming vids.

    • @romanstingler435 • 1 year ago +1

      @chrishayuk I would absolutely love that

    • @chrishayuk • 1 year ago +1

      Hopefully, I'll get that one done pretty soon.

    • @yashwanth9549 • 10 months ago

      @chrishayuk we will be waiting for you

  • @hanslanger4399 • 8 months ago

    Hi, thank you for sharing.
    llama-cpp uses the GPU by default on Mac. How do you activate GPU usage on Windows? I installed CUDA and set gpuLayers: 16 (I have 32), but it's still only using the CPU. I can't find the right info.
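
    For what it's worth, a minimal sketch of where gpuLayers goes, assuming the node-llama-cpp v2 API (the model path is illustrative). Note that gpuLayers only takes effect if the underlying llama.cpp binary was built with GPU support, so on Windows the bindings typically need to be rebuilt with CUDA enabled first (see the library's docs):

    // sketch, assuming node-llama-cpp v2 with ESM; requires a CUDA-enabled build
    import {LlamaModel, LlamaContext, LlamaChatSession} from "node-llama-cpp";

    const model = new LlamaModel({
        modelPath: "../models/llama-2-7b.Q5_K_M.gguf", // illustrative path
        gpuLayers: 16 // ignored if the native build has no GPU support
    });
    const context = new LlamaContext({model});
    const session = new LlamaChatSession({context});

    console.log(await session.prompt("Say hello")); // top-level await needs ESM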

  • @luisa6511 • 6 months ago

    I ran the npx ipull command and it created a .gguf file, but with HTML inside. Is that correct?
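
    If there's HTML inside, the download almost certainly fetched a web page (for example a redirect or login page) instead of the raw model file; a valid GGUF file starts with the ASCII magic "GGUF". A quick sanity check, as a sketch (the path is illustrative):

    // sketch: inspect the first four bytes of the file; path is illustrative
    import {openSync, readSync, closeSync} from "node:fs";

    const fd = openSync("../models/model.gguf", "r");
    const magic = Buffer.alloc(4);
    readSync(fd, magic, 0, 4, 0); // read 4 bytes from the start of the file
    closeSync(fd);

    console.log(magic.toString("ascii") === "GGUF"
        ? "valid GGUF header"
        : `not a GGUF file (first bytes: ${magic.toString("ascii")})`);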

  • @kasper369 • 9 months ago +1

    Can you create a video for beginners? For example, I don't even know what llama.cpp is or what GGUF files are.

    • @chrishayuk • 9 months ago

      You’re right, it went straight in. Check out the video I released about Hugging Face; it’s really focused on fundamentals. Sorry I jumped right in on this one.

  • @hitesh1134 • 1 year ago +1

    Thanks for the video. I get a segmentation fault when I run the npx command. Any ideas?
    npx --no node-llama-cpp chat --model ../models/llama-2-7b.Q5_K_M.gguf
    ggml_metal_init: allocating
    ggml_metal_init: found device: AMD Radeon Pro 5500M
    ggml_metal_init: found device: Intel(R) UHD Graphics 630
    ggml_metal_init: picking default device: AMD Radeon Pro 5500M
    ggml_metal_init: default.metallib not found, loading from source
    ggml_metal_init: loading '/Users/cycle/test/llmtest1/node_modules/node-llama-cpp/llama/build/Release/ggml-metal.metal'
    zsh: segmentation fault npx --no node-llama-cpp chat --model ../models/llama-2-7b.Q5_K_M.gguf

    • @otal-web • 1 year ago +2

      I have the same issue. Any help?

    • @styner83 • 11 months ago

      Same here
      EDIT: I was able to fix this by rebuilding node-llama-cpp locally using their documentation on the no-metal option.
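
      For reference, a sketch of that rebuild, assuming the node-llama-cpp v2 CLI (verify the exact flag against the docs for your installed version):

      # rebuild the bundled llama.cpp without Metal, then retry the chat
      npx --no node-llama-cpp download --no-metal
      npx --no node-llama-cpp chat --model ../models/llama-2-7b.Q5_K_M.gguf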

  • @Bayzon08 • 6 months ago +1

    Great video but with all due respect, having a transparent screen showing you in the background with that blue lighting is an unnecessary punch in the eyes.

    • @chrishayuk • 5 months ago +1

      The style doesn't really suit everyone, apologies, but it's the style I'm sticking with.

    • @Bayzon08 • 5 months ago +1

      @chrishayuk Completely understandable, and I 100% respect your choice.

    • @chrishayuk • 5 months ago

      Appreciate it. I love the feedback, but I think it’s a stylistic choice that makes the channel a little more identifiable: sometimes a good thing, sometimes a bad thing. I like the style and the vibe it brings; sorry it hits your eyes. I do try to tweak it per video.