Self-host StableLM-2-Zephyr-1.6B. Portable across GPUs, CPUs, and OSes

  • Published Jan 30, 2024
  • www.secondstate.io/articles/s...
    “Small” LLMs are models with 1-2B parameters (instead of 7-200B), yet they are still trained on trillions of words. The idea is to push the envelope on “information compression” and develop models that are much faster and much smaller for specialized use cases, such as serving as a “pre-processor” for larger models on the edge.
    StableLM-2-Zephyr-1.6B is one such model. The video shows a LlamaEdge app running this model at real-time speed on a MacBook. With the LlamaEdge cross-platform runtime, you can customize the app on a MacBook and deploy it on a Raspberry Pi or Jetson Nano device! See the short client sketch after this list.
  • Science & Technology
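LlamaEdge also ships an OpenAI-compatible API server, so once the model is self-hosted, talking to it from any machine can be as simple as the sketch below. This is a minimal illustration under assumptions, not code from the video: the port (8080), endpoint path, and model name are placeholder LlamaEdge-style defaults and may need adjusting for your own setup.

```python
import json
import urllib.request

# Assumes a LlamaEdge API server is already running locally with the
# StableLM-2-Zephyr-1.6B GGUF model loaded. The URL and model name are
# placeholders -- adjust them to match your deployment.
API_URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "stablelm-2-zephyr-1.6b",  # hypothetical model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What are small LLMs good for?"},
    ],
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The server follows the OpenAI chat-completions response shape.
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["choices"][0]["message"]["content"])
```

Because the cross-platform runtime sits under every target, the same client works unchanged whether the server runs on the MacBook, the Raspberry Pi, or the Jetson Nano; only the server-side model file and hardware change.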
