Part of your screen is not captured in the video. Otherwise extremely helpful. Love this content; wishing for more.
Great, a walkthrough in Rust would be nice too
i plan to do a comparison between node-llama, rs-llama, ollama, and vllm in some upcoming vids
@chrishayuk I would absolutely love that
hopefully, I'll get that one done pretty soon
@chrishayuk we will be waiting for you
Hi, thank you for sharing.
llama-cpp uses the GPU by default on Mac; how do you activate GPU usage on Windows? I installed CUDA and set gpuLayers: 16 (I have 32), but it's still using only the CPU. I can't find the right info.
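In case it helps, here's a minimal sketch of what worked for me, assuming the v2 node-llama-cpp API: gpuLayers only takes effect once the native binding is rebuilt with CUDA support, and the --cuda flag below is the one in the project's docs at the time, so double-check it against your version:

npx --no node-llama-cpp download --cuda

Then load the model with layers offloaded (the model path is just the one from this thread; swap in your own):

import {LlamaModel, LlamaContext, LlamaChatSession} from "node-llama-cpp";

// offload 16 of the model's 32 layers to the GPU
const model = new LlamaModel({
    modelPath: "../models/llama-2-7b.Q5_K_M.gguf",
    gpuLayers: 16
});
const context = new LlamaContext({model});
const session = new LlamaChatSession({context});

console.log(await session.prompt("Say hi"));

If it still runs CPU-only after that, it's usually because the prebuilt binary got used instead of the CUDA rebuild.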
I ran the npx ipull command and it created a .gguf file, but with HTML inside. Is that correct?
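That doesn't sound right; HTML inside the file usually means the download fetched a web page (a redirect or error page) instead of the actual model binary, so grab the direct file link and try again. A quick sanity check, assuming Node.js (a valid GGUF file starts with the 4-byte ASCII magic "GGUF"; the path below is hypothetical):

import {openSync, readSync, closeSync} from "node:fs";

// read the first 4 bytes of the file and compare them against the GGUF magic
const fd = openSync("./my-model.gguf", "r");
const magic = Buffer.alloc(4);
readSync(fd, magic, 0, 4, 0);
closeSync(fd);

console.log(magic.toString("ascii") === "GGUF" ? "looks like a real GGUF file" : "not GGUF; re-download it");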
Can you create a video for beginners? For example, I don't even know what llama.cpp is or what GGUF files are.
You’re right, it was a little straight in. Check out the video I released about Hugging Face; it’s really on fundamentals. Sorry I jumped right in on this one
Thanks for the video. I get a segmentation fault when I run the npx command. Any ideas?
npx --no node-llama-cpp chat --model ../models/llama-2-7b.Q5_K_M.gguf
ggml_metal_init: allocating
ggml_metal_init: found device: AMD Radeon Pro 5500M
ggml_metal_init: found device: Intel(R) UHD Graphics 630
ggml_metal_init: picking default device: AMD Radeon Pro 5500M
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: loading '/Users/cycle/test/llmtest1/node_modules/node-llama-cpp/llama/build/Release/ggml-metal.metal'
zsh: segmentation fault npx --no node-llama-cpp chat --model ../models/llama-2-7b.Q5_K_M.gguf
I have the same issue. Any help?
Same here
EDIT: I was able to fix this by rebuilding node-llama-cpp locally using their documentation on the no-metal option.
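For anyone else hitting this, the rebuild was just the download command with Metal disabled; if I'm reading their docs right, the flag is:

npx --no node-llama-cpp download --no-metal

After that, the chat command above ran on the CPU without segfaulting.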
Great video but with all due respect, having a transparent screen showing you in the background with that blue lighting is an unnecessary punch in the eyes.
the style doesn't really suit everyone, apologies, but it's a style I'm sticking with
@chrishayuk completely understandable and I 100% respect your choice.
Appreciate it, I love the feedback, but I think it’s a stylistic choice that makes the channel a little more identifiable (sometimes a good thing, sometimes a bad thing). I like the style and the vibe it brings; sorry it hits your eyes. I do try and tweak it per video