Florence 2 - The Best Small VLM Out There?

  • Published 30 Sep 2024

Comments • 43

  • @danielmz99
    @danielmz99 3 months ago +12

    Thanks for the great content. A video going through the fine-tuning process on this one would be amazing. I am not sure how this could scale to a video implementation (probably by passing one frame at a time; see the sketch after this thread).

    • @coolmcdude
      @coolmcdude 3 months ago +1

      I also would love a video/notebook for a Florence 2 fine-tune.

    • @shangonghowe
      @shangonghowe 2 months ago

      Another video which I appreciate a lot! Thank you for sharing.
      I would also like it if you could do another one going through the fine-tuning process :)
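
For the video idea in the top comment, the simplest route is exactly that: sample frames and run the image pipeline on each one. A minimal sketch, assuming the public microsoft/Florence-2-base checkpoint and its <CAPTION> task prompt; the video path and one-frame-per-second sampling are made up for illustration.

```python
import cv2
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-base"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

def caption(image: Image.Image) -> str:
    """Run the <CAPTION> task on a single PIL image."""
    task = "<CAPTION>"
    inputs = processor(text=task, images=image, return_tensors="pt")
    ids = model.generate(input_ids=inputs["input_ids"],
                         pixel_values=inputs["pixel_values"],
                         max_new_tokens=128, num_beams=3)
    raw = processor.batch_decode(ids, skip_special_tokens=False)[0]
    parsed = processor.post_process_generation(raw, task=task,
                                               image_size=(image.width, image.height))
    return parsed[task]

cap = cv2.VideoCapture("clip.mp4")            # hypothetical input video
step = int(cap.get(cv2.CAP_PROP_FPS) or 30)   # roughly one frame per second
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        pil = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        print(frame_idx, caption(pil))
    frame_idx += 1
cap.release()
```

There is no temporal reasoning here, of course; each frame is captioned independently, which is the trade-off the comment anticipates.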

  • @parkerspitzer
    @parkerspitzer 3 months ago +10

    Thanks for your work on sharing this information. Much easier to watch your content than keep my ear to the ground all day trying to keep up. Much appreciated, sir.

  • @IsxaaqAcademy
    @IsxaaqAcademy 3 months ago +4

    It's also good at OCR for handwritten documents.

  • @mukkeshmckenzie7386
    @mukkeshmckenzie7386 3 months ago +6

    A VQA tutorial would be nice!

  • @srk5702
    @srk5702 3 months ago +1

    We'd request that you do fine-tuning for object detection, because all LLMs are only useful for generating text output. Thanks in advance.

  • @xl000
    @xl000 3 days ago

    Is it really a good idea to use data created by another model to train your model? 1:54
    Isn't it going to replicate the errors from the other models?

  • @RishabhMathur06
    @RishabhMathur06 2 months ago

    @samwitteveenai Please make a fine-tuning video about VLMs such as LLaVA and Florence-2, and if possible try to use Ollama so that we can run inference on a local device.

  • @ShravanKumar147
    @ShravanKumar147 3 months ago

    What would you pick for fine-tuning?
    Any specific application ideas?

  • @ranu9376
    @ranu9376 3 months ago

    I've tried this model; describing the image is great. I've also tried DocVQA, but it gives only one-word answers and doesn't get even the simplest questions right. I had hoped to do some classification and compare with other models.
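
For anyone hitting the same wall with document VQA: the task is driven by a task token with the question appended as plain text, and the model is trained to give short, extractive answers, which matches the one-word behaviour described above. A minimal sketch; the <DocVQA> token and the fine-tuned checkpoint name follow the commonly used DocVQA fine-tunes and should be treated as assumptions, and the image path is hypothetical.

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

# A checkpoint fine-tuned for DocVQA (assumed name); the base model's task list
# does not include a question-answering prompt.
model_id = "HuggingFaceM4/Florence-2-DocVQA"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("invoice.png").convert("RGB")   # hypothetical document image
task = "<DocVQA>"                                  # assumed task token, varies by checkpoint
question = "What is the invoice total?"

inputs = processor(text=task + question, images=image, return_tensors="pt")
ids = model.generate(input_ids=inputs["input_ids"],
                     pixel_values=inputs["pixel_values"],
                     max_new_tokens=64, num_beams=3)
raw = processor.batch_decode(ids, skip_special_tokens=False)[0]
answer = processor.post_process_generation(raw, task=task,
                                            image_size=(image.width, image.height))
print(answer)
```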

  • @pandian1537
    @pandian1537 2 months ago

    Is it possible to train the OCR task prompt on a custom dataset, and if we train Florence-2 for the OCR task, will it affect the performance of the model?

  • @sohitshivhare1541
    @sohitshivhare1541 3 months ago +1

    Thanks for the information, this is great.
    Can I fine-tune it for certain specific images, like few-shot learning? A tutorial on that would be greatly appreciated.

  • @yassinebouchoucha
    @yassinebouchoucha 2 months ago

    When will you release a demo on how to fine-tune such a model?

  • @IanScrivener
    @IanScrivener 3 months ago +1

    Thanks Sam!!
    Please keep up the great work...

  • @toadlguy
    @toadlguy 3 months ago +2

    Would be interested in how much memory is required to run these models; they seem pretty small even unquantized. Maybe I will try it later on my 8GB M1 Mini. One thing I am curious about: at 3:38, the description for the image is wrong in ways that seem odd. The title is described as being on top with the "20 Years of ..." underneath, and Ron's tie is described as red and his hair blonde. I wonder if this is just the vagaries of the model (placement data would be strange) or over-reliance on training data. Or a straight-up mistake in 'creating' the paper (which would probably be the most disturbing😉).
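
On the memory question: Florence-2 base is roughly 0.23B parameters and large roughly 0.77B, so fp16 weights alone are on the order of 0.5-1.5 GB and should fit on an 8GB machine. A quick way to check the static footprint (a sketch; actual runtime memory also depends on image resolution and beam search):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-large",        # swap in Florence-2-base for the smaller variant
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
print(sum(p.numel() for p in model.parameters()) / 1e6, "M parameters")
print(f"{model.get_memory_footprint() / 1e9:.2f} GB for weights and buffers")
```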

  • @ariramkilowan8051
    @ariramkilowan8051 3 months ago +1

    I think fine-tuning for OCR would be a good demo. OCR in the real world with images of documents is much harder than OCR on electronic documents, so it would be cool to see how a small model like this does as an alternative to Claude/GPT-4.

    • @MH-ke2wi
      @MH-ke2wi 3 months ago +1

      I tried the OCR and OCR-with-region tasks on images converted (not scanned) from PDF pages. Nothing fancy, standard text with some titles, sections, lists... it is absolutely unusable. When it detects something, it usually gets it right, but it could only see around 25% of the text.

    • @ariramkilowan8051
      @ariramkilowan8051 3 months ago

      @MH-ke2wi Yeah, I've also been struggling to get decent results with OCR.
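
For anyone reproducing the OCR experiments above, the relevant task prompts in the public release are <OCR> (plain text) and <OCR_WITH_REGION> (text plus quad boxes). A minimal sketch with an illustrative page image; for dense documents, running the model on higher-resolution crops of the page may recover more text than a single full-page pass.

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-large"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("page.png").convert("RGB")   # hypothetical page rendered from a PDF

for task in ("<OCR>", "<OCR_WITH_REGION>"):
    inputs = processor(text=task, images=image, return_tensors="pt")
    ids = model.generate(input_ids=inputs["input_ids"],
                         pixel_values=inputs["pixel_values"],
                         max_new_tokens=1024, num_beams=3)
    raw = processor.batch_decode(ids, skip_special_tokens=False)[0]
    parsed = processor.post_process_generation(raw, task=task,
                                               image_size=(image.width, image.height))
    print(task, parsed[task])
```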

  • @richardobiri2642
    @richardobiri2642 2 months ago

    Thanks a lot for this. I wish you could consider covering the process for identifying authentic and fake certificates 🙏🙏🙏

  • @jefframpe5075
    @jefframpe5075 3 months ago

    Thanks, Sam! I always appreciate your videos.
    I would love your take on how Florence-2 compares with Apple's 4M-21.

  • @tonyrungeetech
    @tonyrungeetech 3 months ago

    Hi Sam. Thank you for the videos. I've been playing around with some of the smaller vision models and trying to implement batched inference with little success. If you were trying to run multiple VQA-style questions against the same image quickly, how would you go about that? Is batching even the right direction to be looking in?
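
On the batching question: one approach worth trying is to pad the question prompts as a text batch and repeat the image features along the batch dimension, since the image is identical for every question. A rough sketch; the <VQA> token stands in for whichever question-answering task prompt your checkpoint actually supports, and whether Florence-2's custom processor and generate handle padded batches cleanly is something to verify rather than assume.

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-base"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("photo.jpg").convert("RGB")     # hypothetical image
questions = ["What colour is the car?", "How many people are visible?"]
task = "<VQA>"                                     # assumed task token, checkpoint-dependent

# Tokenise all prompts together (padded), but encode the image only once.
prompts = [task + q for q in questions]
text_batch = processor.tokenizer(prompts, return_tensors="pt", padding=True)
pixel_values = processor(text=task, images=image, return_tensors="pt")["pixel_values"]
pixel_values = pixel_values.repeat(len(prompts), 1, 1, 1)   # same image for every question

ids = model.generate(input_ids=text_batch["input_ids"],
                     attention_mask=text_batch["attention_mask"],
                     pixel_values=pixel_values,
                     max_new_tokens=64)
for q, out in zip(questions, processor.batch_decode(ids, skip_special_tokens=True)):
    print(q, "->", out.strip())
```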

  • @SaiManojPrakhya-mp4oe
    @SaiManojPrakhya-mp4oe 3 months ago

    It would be great if you could show a fine-tuning example!

  • @jeremybristol4374
    @jeremybristol4374 3 months ago

    I'm enthusiastic about these smaller models. Thanks for covering this!

  • @JustEmbraceTheChallenge
    @JustEmbraceTheChallenge 3 months ago

    Please do fine-tuning for object detection.

  • @aa-xn5hc
    @aa-xn5hc 3 months ago

    Great, yes, a fine-tune would be very interesting.

  • @mshonle
    @mshonle 3 months ago +2

    I wonder how much performance would be affected when something so distilled then gets quantized.
    Also, it seems amazing that it can handle segmentation for an unspecified set size! With Phi-3 Vision you would need to provide a token to represent, say, each giraffe you want to identify.

    • @samwitteveenai
      @samwitteveenai 3 months ago +3

      Quantization is a good question! I would expect it to suffer more than a big model would. Might give it a test tomorrow.
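
For anyone who wants to run that quantization test, a minimal 4-bit sketch with bitsandbytes; whether every Florence-2 submodule (especially the vision tower) quantizes cleanly is worth checking, and the settings here are just one reasonable starting point rather than anything the video prescribes.

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, BitsAndBytesConfig

model_id = "microsoft/Florence-2-large"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Compare against the fp16 footprint to see the memory saving.
print(f"{model.get_memory_footprint() / 1e9:.2f} GB")
```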

  • @SinanAkkoyun
    @SinanAkkoyun 3 months ago

    Where is the dataset? I couldn't find the release.

  • @micbab-vg2mu
    @micbab-vg2mu 3 months ago

    Thank you - it looks interesting :)

  • @AbhishekKotecha
    @AbhishekKotecha 3 months ago

    Hi Sam, thanks for the video. What do you think about how it compares with Phi-3 Vision? My take is that this is more raw and better for fine-tuning; do you think so too?

    • @Walczyk
      @Walczyk 3 months ago

      This is completely better and more advanced than Phi-3 Vision's crap image detection.

  • @Dodomiaolegemi
    @Dodomiaolegemi 2 months ago

    Thank you so much!!

  • @GiovaniFerreiraS
    @GiovaniFerreiraS 3 months ago

    I'd love to see a fine-tuning video, especially if it's not question answering, just so it's a different use case from the documentation. Maybe with a quick intro about possible scenarios where fine-tuning would be especially helpful.

    • @samwitteveenai
      @samwitteveenai 3 months ago

      Noted!

    • @marcoscipioni132
      @marcoscipioni132 3 months ago

      Yes, I'm trying to use it for table extraction from scanned PDFs with little success so far. Would love to see how you would implement that.

  • @unclecode
    @unclecode 3 months ago

    This is what people should call "small": anything below 1B! Thanks for your video. By the way, I played around with the quantized version, and the results are unbelievably good! I shared a post on Twitter, mentioned you, and shared the Colab. Take a look at it. I tried 8-bit and 4-bit. It's odd how 4-bit is almost the same as the base model!

    • @samwitteveenai
      @samwitteveenai 3 months ago +1

      I saw your tweet and retweeted it, very cool stuff. I will check it out. Just been knee-deep in Gemma stuff for the last few days.

    • @unclecode
      @unclecode 3 months ago

      @samwitteveenai Thanks, and yes, it's Gemma 2's turn. Waiting for your YouTube notification about the Gemma video!

  • @ALEXPREMIUMGAME
    @ALEXPREMIUMGAME 3 months ago

    Awesome, thanks!