Why is llama-3-8B 8 billion parameters instead of 7?

  • Published 18 Oct 2024
  • llama-3 has ditched its old tokenizer and instead adopted the same tokenizer as GPT-4 (tiktoken, created by OpenAI); it even uses the same first 100K tokens of the vocabulary.
    In this video Chris walks through why Meta switched tokenizers and the implications for model size, the embedding layer, and multilingual tokenization (a rough parameter sketch follows the repo links below).
    He also runs his tokenizer benchmark and shows how the new tokenizer is more efficient in languages such as Japanese (an illustrative token-count comparison also follows below).
    repos
    ------
    github.com/chr...
    github.com/chr...
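
To see roughly where the extra parameters come from, here is a back-of-the-envelope sketch (Python, not from the video). It assumes Llama 2 7B's 32,000-token vocabulary and Llama 3 8B's 128,256-token vocabulary, a hidden size of 4,096 for both, and an untied input embedding and output head; the figures and function names are my own illustration.

```python
# Rough estimate of how the larger vocabulary grows the model.
# Assumed figures: Llama 2 7B vocab = 32,000, Llama 3 8B vocab = 128,256,
# hidden size = 4,096, input embedding and output head untied.

HIDDEN = 4096

def vocab_params(vocab_size: int, hidden: int = HIDDEN) -> int:
    """Parameters in the input embedding matrix plus the output (lm_head) matrix."""
    return 2 * vocab_size * hidden

llama2_vocab = 32_000
llama3_vocab = 128_256

old = vocab_params(llama2_vocab)   # ~0.26B parameters
new = vocab_params(llama3_vocab)   # ~1.05B parameters

print(f"Llama 2 vocab params: {old / 1e9:.2f}B")
print(f"Llama 3 vocab params: {new / 1e9:.2f}B")
print(f"Difference:           {(new - old) / 1e9:.2f}B")
# The larger vocabulary alone accounts for roughly 0.8B extra parameters,
# a large share of the jump from a "7B"-class model to an "8B" one.
```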
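As a small illustration of the multilingual point (not Chris's actual benchmark), the sketch below uses OpenAI's tiktoken package to count tokens for one Japanese sentence under the cl100k_base encoding (the GPT-4 vocabulary Llama 3 builds on) versus an older GPT-2 era encoding; the sentence and the choice of baseline are my own assumptions.

```python
# Sketch: compare how many tokens a Japanese sentence costs under the
# GPT-4 / Llama 3 style vocabulary (cl100k_base) versus an older GPT-2 era
# vocabulary (r50k_base). The sentence is an arbitrary example.
import tiktoken

text = "今日はとても良い天気ですね。"  # "The weather is very nice today."

old_enc = tiktoken.get_encoding("r50k_base")    # GPT-2 era vocabulary
new_enc = tiktoken.get_encoding("cl100k_base")  # vocabulary Llama 3 builds on

print(f"r50k_base  : {len(old_enc.encode(text))} tokens")
print(f"cl100k_base: {len(new_enc.encode(text))} tokens")
# Fewer tokens per sentence means more text fits in the context window and
# generation is cheaper for languages such as Japanese.
```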
