Run Berkeley's Starling 7B LLM across devices without Python hassles

  • Published on Oct 2, 2024
  • As we run out of human words to train the next-gen LLMs, many believe that the future is for LLMs to train on synthetic text (think AlphaGo vs. AlphaZero). The #Starling7B model from @Berkeley_EECS
    is a great attempt at this. It matches GPT-4 performance in subjects such as humanities and writing.
    With WasmEdge, you can run this model on any device with a single-binary app and zero Python dependencies. It runs at 20 tokens per second on an M1 Mac 👇 (sample commands below)
    secondstate.io...
    Model details: starling.cs.be...
    We asked it to write a humanities essay. Try running it on your Mac too.
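    A minimal sketch of the single-binary workflow, assuming a LlamaEdge-style llama-chat.wasm app and a GGUF build of Starling 7B. The model file name, download paths, and the -p openchat prompt-template flag are assumptions; follow the secondstate.io link above for the exact, up-to-date steps.

        # Install WasmEdge with the GGML (llama.cpp) backend plugin
        curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugin wasi_nn-ggml

        # Download a GGUF build of Starling 7B and the portable chat app (file names assumed)
        curl -LO https://huggingface.co/second-state/Starling-LM-7B-alpha-GGUF/resolve/main/starling-lm-7b-alpha.Q5_K_M.gguf
        curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-chat.wasm

        # Run the chat app -- no Python, no per-device native build
        wasmedge --dir .:. --nn-preload default:GGML:AUTO:starling-lm-7b-alpha.Q5_K_M.gguf llama-chat.wasm -p openchat

    The same llama-chat.wasm file should run unchanged across macOS, Linux, and Windows, since the hardware-specific inference work is handled by the WasmEdge GGML plugin installed on each device.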
