Run Qwen2VL Model with Llama.CPP Locally

  • Published on Dec 17, 2024

Comments • 7

  • @user-wp8yx
    @user-wp8yx 2 days ago +1

    I just want to take a moment to thank you for the thumbnail art. The amazed women give me a chuckle every time.

    • @fahdmirza
      @fahdmirza  2 days ago +1

      cheers

  • @fontenbleau
    @fontenbleau 2 days ago

    Finally llama.cpp support, so it will be easier to use in launchers soon. The main use for this is generating better image descriptions for training better image generation. I dream of training a perfect architecture generator, but for that the training data must be precisely described, and for that it's better to use very large VL models that can measure every detail, maybe every pixel.

    • @fahdmirza
      @fahdmirza  2 days ago

      good feedback
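
      For reference, the llama.cpp support mentioned above can be exercised from the command line to caption images. Below is a minimal sketch, assuming the Qwen2-VL example binary (llama-qwen2vl-cli) present in llama.cpp builds from this period; the GGUF file names and the image path are placeholders, not exact release artifacts.

          # Build llama.cpp from source (CPU build; add GPU options as needed)
          git clone https://github.com/ggerganov/llama.cpp
          cd llama.cpp
          cmake -B build && cmake --build build --config Release

          # Caption an image: -m is the language-model GGUF, --mmproj is the
          # vision-projector GGUF, --image is the input picture, -p is the prompt.
          ./build/bin/llama-qwen2vl-cli \
            -m Qwen2-VL-7B-Instruct-Q4_K_M.gguf \
            --mmproj qwen2vl-vision-mmproj-f16.gguf \
            --image house.jpg \
            -p "Describe this image in precise architectural detail."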

  • @user-mdrc57cbnjjd
    @user-mdrc57cbnjjd 2 days ago +1

    I am glad you are on Ubuntu, Fahd. Too many Mac AI bros out there.

    • @fahdmirza
      @fahdmirza  2 days ago

      thanks

    • @timothywcrane
      @timothywcrane 2 days ago

      @fahdmirza Can I convert you to the Debian cult? Mature & never Snapd! Hard to find both nowadays ;) The netinstall is a small ISO, but it feels so good downloading it... you know where to find it... the URL is well known. 👹