Install Reflection-70B with Ollama Locally and Test Reflection-Tuning

  • Published 16 Sep 2024
  • This video shows how to locally install Reflection Llama-3.1 70B with Ollama, test it on various benchmarks, and understand how reflection tuning works (a minimal install sketch follows the description).
    🔥 Buy Me a Coffee to support the channel: ko-fi.com/fahd...
    🔥 Get 50% Discount on any A6000 or A5000 GPU rental, use following link and coupon:
    bit.ly/fahd-mirza
    Coupon code: FahdMirza
    ▶ Become a Patron 🔥 - / fahdmirza
    #reflection70b #reflectiontuning
    PLEASE FOLLOW ME:
    ▶ LinkedIn: / fahdmirza
    ▶ YouTube: / @fahdmirza
    ▶ Blog: www.fahdmirza.com
    RELATED VIDEOS:
    ▶ Resource huggingface.co...
    All rights reserved © 2021 Fahd Mirza
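
    A minimal install sketch, assuming the community build is published under the "reflection" tag in the Ollama library (the exact tag and quantization may differ from what is shown in the video):

      # pull the 70B model (even a quantized build needs tens of GB of disk and RAM/VRAM)
      ollama pull reflection:70b
      # start an interactive chat session
      ollama run reflection:70b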

Comments • 17

  • @olekbuzunov · 9 days ago

    They gave this specific example in the demo, and it worked :-))

  • @Sanguen666 · 9 days ago · +1

    Reflection succeeded in their goal: millions in funding for a dataset that was trained on the benchmarks.
    And even the benchmark results are not reproducible.
    Oh, and it's not even a full fine-tune (FFT), it's a LoRA.

  • @TolgaOzisik · 10 days ago · +1

    I turned on all notifications!

    • @fahdmirza · 9 days ago

      cheers

  • @aa-xn5hc · 10 days ago

    Love it when you give your opinions.

    • @fahdmirza · 9 days ago

      cheers

  • @TimothyMusson · 9 days ago

    Nice one - thanks for the great channel :) BTW, I was playing with Gemma2:27B while listening to this video, so I tried the "10 sentences ending with the word 'beauty'" test. It succeeded. It failed the "how many 'r's in 'strawberry'" test, but it could count the letters correctly if I first asked it to list each letter of the word line by line as a numbered list. Anyway, just fun :)

  • @vd3fyn · 9 days ago

    Hi Fahd, well said about the hype :)

    • @fahdmirza · 7 days ago

      thanks.

  • @jessedbrown1980 · 9 days ago

    I smell progress.

  • @thoughtslibrary · 9 days ago

    Are you using Proxmox for virtualization?

  • @ROKKor-hs8tg · 6 days ago

    It won't create with a Modelfile... why?

    • @ROKKor-hs8tg · 6 days ago

      It works with llama-cpp-python, but not with an Ollama Modelfile.
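
      One possible cause is the Modelfile itself. A minimal sketch, assuming a local GGUF download (the file name is hypothetical, and the SYSTEM text is the prompt recommended on the model card, so verify it there):

        # save the lines below as "Modelfile"
        FROM ./reflection-llama-3.1-70b.Q4_K_M.gguf
        SYSTEM """You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags."""
        PARAMETER temperature 0.7

        # then build and run the local model
        ollama create reflection-local -f Modelfile
        ollama run reflection-local

      If the build still misbehaves, a missing or wrong TEMPLATE for the Llama 3.1 chat format is a common culprit; llama-cpp-python setups usually apply that formatting in their own code.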

  • @justinbenavidez1985 · 9 days ago

    Why is mine running so slow?

  • @tirtir1401 · 9 days ago

    A simple prompt fails:
    "Write a script that implements the “tree -L 2” functionality, but in bash without using tree. Make it a one-line script, without arguments, for the current directory."
    Any other LLM can do this more or less correctly, except Reflection (70b-q8_0 tested). Reflection's code just does something useless. (A hand-written reference one-liner is sketched after this thread.)

    • @fahdmirza · 9 days ago

      thanks for sharing.
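
    For reference, one hand-written bash approximation of “tree -L 2” (not model output; it indents find's listing by path depth and skips hidden entries):

      find . -maxdepth 2 -not -path '*/.*' | sort | sed 's|[^/]*/|  |g'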

  • @teske122 · 9 days ago

    Stopped watching 5 minutes in, but I think it would help if you actually used the system prompt that the model card says you should use.
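
    The recommended system prompt can also be set interactively in the Ollama REPL without rebuilding the model (a small sketch; paste the exact prompt from the model card in place of the placeholder):

      ollama run reflection:70b
      >>> /set system <Reflection system prompt from the model card>
      >>> How many 'r's are in 'strawberry'?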