Flux Tools For Low VRAM GPU's | Introduction to Inpainting & Outpainting

  • Published 31 Dec 2024

Comments • 81

  • @MonzonMedia
    @MonzonMedia  several months ago +10

    By the way, there is Flux Tools support for SwarmUI and SDNext. Fingers crossed that Forge adds this when the updates get done soon!

    • @havemoney
      @havemoney several months ago

      This is what JackDainzh answered, I don't know who he is.
      It's not a controlnet model, it's an entirely separate model that is designed to be used as a model to generate, not to guide. The issue is that, as of now, with the current img2img implementation of Flux, there is no way to guide the model, say, with pix2pix instruct's Image CFG Scale slider, because it doesn't affect anything at the moment (I forced it visible in the UI when using Flux); because Flux's conditioning is different from that of regular SD models, it gets skipped.
      Implementing the guidance from the img2img tab means rewriting the whole backend engine, and I have no idea how long that will take: maybe months, or maybe 1 day.

  • @SouthbayJay_com
    @SouthbayJay_com several months ago +1

    These are so cool! Can't wait to dive into these! Thanks for sharing the info! 🙌🙌

    • @MonzonMedia
      @MonzonMedia  several months ago

      @@SouthbayJay_com appreciate it bro! Have fun! 🙌🏼

  • @AimanFatani
    @AimanFatani several months ago +1

    Been waiting for such things .. thanks for sharing ❤❤

    • @MonzonMedia
      @MonzonMedia  several months ago +1

      You’re welcome 😊

  • @onurerdagli
    @onurerdagli several months ago +1

    Thank you, I just tried outpainting and inpainting, truly amazing quality

    • @MonzonMedia
      @MonzonMedia  several months ago

      Indeed! So far I haven't seen any major issues yet but still testing. Impressive so far, especially with outpainting. 👍

  • @vVinchi
    @vVinchi several months ago +1

    This will be a good series of videos

    • @MonzonMedia
      @MonzonMedia  several months ago

      Indeed! Already working on the next one. Good to hear from ya bud!

  • @Elwaves2925
    @Elwaves2925 several months ago +1

    I finally got to do the inpainting I needed from Flux. On 12GB VRAM with the full 'fill' model it was a lot quicker than I expected. That's the only one I've tried so far, but with how well it worked I'm looking forward to the others.

    • @MonzonMedia
      @MonzonMedia  several months ago +1

      @@Elwaves2925 good to know it can run on 12GB VRAM. Have you tried the FP8, and if you do, is it any faster?

    • @Elwaves2925
      @Elwaves2925 several months ago +1

      @@MonzonMedia Haven't had a chance to try the fp8 version as I didn't know it existed until your video. I will be trying it later.

  • @FranktheSquirell
    @FranktheSquirell several months ago +1

    ya did it again, great job as usual 😊😊
    only trouble is I've been using Fooocus for in/out painting and now you've made me want to try it in ComfyUI grrrrr lol
    I gave up on Comfy cause the update broke the LLM generator I was using in it, come to mention it I can't even remember the name of the generator now .. damn🤣🤣🤣🤣

    • @MonzonMedia
      @MonzonMedia  several months ago +1

      😊 It does take some time to get used to, I have a love/hate relationship with ComfyUI hehehe. But it is worth knowing, especially since it gets all the latest features quickly. At the very least just learn how to use drag-and-drop workflows and install any missing nodes. That's pretty much all most people need to know.

  • @havemoney
    @havemoney several months ago +9

    We are waiting for lllyasviel to attach it to Forge

    • @MonzonMedia
      @MonzonMedia  several months ago +2

      🙏😊 I do see some action on the github page and no other "delays" posted. Fingers crossed my friend!

    • @mik3lang3lo
      @mik3lang3lo several months ago +1

      ❤ we are all waiting ❤

  • @TheColonelJJ
    @TheColonelJJ several months ago +1

    As always, your videos are a welcome view. Favor to ask. As things come so fast to Comfy could you add a sound bite when things aren't quite ready for Forge?

    • @MonzonMedia
      @MonzonMedia  several months ago

      Yeah I normally do but forgot this time although I did post a pinned comment that there is support for other platforms like SDNext and SwarmUI.

  • @hotlineoperator
    @hotlineoperator several months ago +1

    Some people keep several functions or "workflows" on one desktop, which they turn on and off as needed. Others keep separate workflows on completely different desktops or use them one-by-one. Is there a convenient function in ComfyUI that allows you to switch between different workflows, as if you had several desktops open and could choose the one that suits what you are doing?

    • @MonzonMedia
      @MonzonMedia  several months ago +1

      The new ComfyUI has a workflow panel on the left that allows you to select your saved or recently used workflows. Alternatively, there is a fairly new tool I've been trying out called Flow that has several pre-designed workflows. The downside is that you can't save custom workflows yet, but I hear that option will come soon. I'll be doing a video on it soon. Other than that, yeah, it really is a personal thing on what works best for you.

  • @contrarian8870
    @contrarian8870 several months ago +3

    Thanks. Do the Redux next, then Depth, then Canny

    • @MonzonMedia
      @MonzonMedia  several months ago +2

      Welcome! Redux is pretty cool! Will likely do it next, then combine the 2 controlnets in another video.

  • @skrotov
    @skrotov several months ago +1

    great, thanks. by the way, you can hide the noodles just by pressing the eye icon at the bottom right of your screen

    • @MonzonMedia
      @MonzonMedia  several months ago +1

      Indeed! I do like to use the straight ones, though I switch to spline when I need to remember where everything is connected. 😊 👍

  • @Scn64
    @Scn64 several months ago +1

    When painting the mask, what effect do the different colors (black, white, negative) and opacity have on the outcome? Does the resulting inpaint change at all depending on which color/opacity you choose?

    • @MonzonMedia
      @MonzonMedia  several months ago

      It's just for visual preference; it has no effect on the outcome.

  • @generalawareness101
    @generalawareness101 several months ago +1

    How do I get flux to inpaint text? I have tried everything when all I want is to take an image and have flux add the text it generates over it.

    • @MonzonMedia
      @MonzonMedia  several months ago

      Same way you would prompt for it, just state in your prompt something like "text saying _________" and inpaint the area you want it to show up.

    • @generalawareness101
      @generalawareness101 several months ago

      @@MonzonMedia Tried doing that for a few days it just never worked. I could say a lake, or an army, or whatever and that it would do, but never the text. Stumped.

  •  several months ago +1

    Thanks for the vid! And do you know if the flux.1-fill-dev (23Gb) version is an extended version of the original flux.1-dev? or a whole new thing, and you have to install both?

    • @MonzonMedia
      @MonzonMedia  several months ago +1

      Welcome! Typically in-outpaint models are just trained differently but should be based on the original model.

    •  several months ago +1

      @@MonzonMedia Got it! thanks!

  • @LucaSerafiniLukeZerfini
    @LucaSerafiniLukeZerfini several months ago +1

    Can't wait. I found Flux less effective for design than SDXL.

  • @cekuhnen
    @cekuhnen several months ago +1

    Redux will be fun for MJ to deal with.

    • @MonzonMedia
      @MonzonMedia  several months ago

      Hey my friend! Nice to see you here! I haven't used MJ in a while but there is a lot you can do locally compared to MJ's features, plus way more models to choose from. Hope all is well with you. 👍

    • @cekuhnen
      @cekuhnen several months ago +1

      @@MonzonMedia My MJ sub will end this year and I won't go back. Vizcom became so powerful and Rubbrband is also shaping up really well.

  • @Maylin-ze6qx
    @Maylin-ze6qx several months ago +2

    ❤❤❤❤

    • @MonzonMedia
      @MonzonMedia  several months ago

      Thank you! 😊

  • @skrotov
    @skrotov several months ago +1

    And what I don't like in this new fill model is that it seems to work on the actual pixels without enlarging the painted area, as we did in Automatic1111. As a result we get low-detail, crappy quality if the masked object was not so big

    • @MonzonMedia
      @MonzonMedia  several months ago +1

      That has more to do with the platform you are using; for example, Fooocus and Invoke AI have methods where, when inpainting is used, the inpainted areas are generated at the model's native resolution. I can't recall if there is a node on ComfyUI that does that, but I'm pretty sure there is. Might make a good video topic. 👍
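The native-resolution inpainting trick described in this reply can be sketched roughly as: crop around the mask with some context padding, inpaint that crop at the model's native resolution, then stitch it back. The helper names below are hypothetical illustrations, not an actual ComfyUI node API:

```python
# Sketch of "crop and stitch" native-resolution inpainting:
# 1. find the mask's bounding box and expand it with context padding,
# 2. (not shown) upscale the crop, inpaint it at native resolution,
# 3. paste the result back, replacing only the masked pixels.
import numpy as np

def crop_region(mask: np.ndarray, pad: int = 32) -> tuple:
    """Bounding box (top, left, bottom, right) of the mask, padded for context."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    return (max(0, ys.min() - pad), max(0, xs.min() - pad),
            min(h, ys.max() + 1 + pad), min(w, xs.max() + 1 + pad))

def stitch(image: np.ndarray, patch: np.ndarray, box: tuple,
           mask: np.ndarray) -> np.ndarray:
    """Paste an inpainted patch back, blending only where the mask is set."""
    top, left, bottom, right = box
    out = image.copy()
    region_mask = mask[top:bottom, left:right, None].astype(float)
    out[top:bottom, left:right] = (
        region_mask * patch
        + (1 - region_mask) * out[top:bottom, left:right]
    )
    return out
```

Because the inpainting model only ever sees the upscaled crop, small masked objects get the model's full resolution instead of a handful of pixels.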

  • @ProvenFlawless
    @ProvenFlawless several months ago +1

    Huh. What is the difference between the XLabs and Shakker-Labs canny/depth controlnets? Why is this one special? We already have two of them. Someone please explain.

    • @MonzonMedia
      @MonzonMedia  several months ago +1

      From what I recall they have two: the Union Pro controlnet (6GB), which is an all-in-one with multiple controlnets. It's pretty decent but still needs more training. They also have a separate depth model that is 3GB; this one is only 1.2GB. I've yet to do side-by-side comparisons though. It was the same with SDXL: we will get other controlnets from the community until one is trained better. Keep in mind that controlnet for Flux is still very new.

  • @RiftWarth
    @RiftWarth several months ago +1

    Could you please do a video on crop and stitch with Flux tool inpainting?

    • @MonzonMedia
      @MonzonMedia  several months ago

      Yes of course! Will be doing it on my next inpainting video 👍

    • @RiftWarth
      @RiftWarth several months ago +1

      @MonzonMedia Thank you so much. Your tutorials are really good and easy to follow.

  • @bause6182
    @bause6182 several months ago +3

    Thanks for the guide, is it possible to run Redux with low VRAM?

    • @MonzonMedia
      @MonzonMedia  several months ago +3

      How low? The Redux model itself is very small, only 129MB, so if you have a low-VRAM GPU just use the GGUF Flux models and you should be good to go! Runs great on my 3060 Ti 8GB VRAM with the Q8 GGUF model.
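For rough context on why GGUF quantization is the lever here: the Flux dev transformer is on the order of 12B parameters, so the weight footprint scales with bits per parameter (ballpark arithmetic only; text encoders, VAE, and activations add overhead on top):

```python
# Back-of-envelope VRAM for the transformer weights alone at different
# precisions. 12e9 parameters is an approximate figure for Flux dev.
PARAMS = 12e9

def weight_gb(bits_per_param: float) -> float:
    """Weight footprint in (decimal) gigabytes at the given precision."""
    return PARAMS * bits_per_param / 8 / 1e9

print(weight_gb(16))  # fp16/bf16: ~24 GB, beyond most consumer cards
print(weight_gb(8))   # fp8 / Q8 GGUF: ~12 GB, workable with offloading
print(weight_gb(4))   # Q4 GGUF: ~6 GB
```

This is why an 8GB card can run the Q8 model at all: ComfyUI offloads layers that don't fit, trading speed for capacity.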

    • @bause6182
      @bause6182 several months ago +1

      @@MonzonMedia thank you, do we need another workflow for GGUF models?

  • @eledah9098
    @eledah9098 several months ago +1

    Is there a way to include LoRA models for inpainting?

    • @MonzonMedia
      @MonzonMedia  several months ago

      Not sure what you mean? Do you want to use a lora to inpaint? It doesn't work that way.

  • @baheth3elmy16
    @baheth3elmy16 several months ago +2

    With inpainting, it disturbs the composition of the image and the results are not that good. The same goes for outpainting: using the fp8 model, the final images are distorted at the edges and lose details.

    • @MonzonMedia
      @MonzonMedia  several months ago

      I've had a good experience so far with both inpainting and outpainting. Make sure you are increasing the flux guidance. There are other methods to doing inpainting that should help with the original composition which I will cover soon.

  • @Xenon0000000
    @Xenon0000000 several months ago +1

    When I try the outpainting workflow, the pictures come out all pixelated, especially the added part. What am I doing wrong? I'm using the same parameters, denoise is already at 1.
    Thank you for your videos by the way, you should have way more subs!

    • @MonzonMedia
      @MonzonMedia  several months ago +1

      @@Xenon0000000 appreciate the support and kind words. Are you using a high flux guidance? 20-30 works for me.

    • @Xenon0000000
      @Xenon0000000 several months ago +1

      @@MonzonMedia I left it at 30, I'll try changing that parameter too, thank you.

    • @MonzonMedia
      @MonzonMedia  several months ago +1

      I have a much better workflow that I'll be sharing with you all soon that gives better results. Hope to post it some time tomorrow (Wed).

  • @RikkTheGaijin
    @RikkTheGaijin several months ago +1

    SwarmUI tutorial please

    • @MonzonMedia
      @MonzonMedia  several months ago

      Working on it! 😊

  • @FranktheSquirell
    @FranktheSquirell several months ago +1

    me again lol 😊
    Have you tried the "DMD2" SDXL models yet? Not that many about, but wow, are they impressive. Prompt adherence is about the same as Flux Schnell, but the image quality is really good. They say 4-8 steps, but a 12-step DMD2 image gives better results imo.
    Then again I am getting old now and my eyes aren't as good as they used to be .. that's my excuse 🤣🤣

    • @MonzonMedia
      @MonzonMedia  several months ago +1

      Not yet but I remember reading about it on Reddit. Thanks for the reminder!

  • @LucaSerafiniLukeZerfini
    @LucaSerafiniLukeZerfini several months ago

    Great to follow. Updated Comfy but it's returning this:
    RuntimeError: Error(s) in loading state_dict for Flux:
    size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]).
    by the way, depth and canny would be the best to see

    • @MonzonMedia
      @MonzonMedia  several months ago

      Context? What were you doing? What are your system specs?

    • @LucaSerafiniLukeZerfini
      @LucaSerafiniLukeZerfini several months ago +1

      I managed to pull a ComfyUI update and now it works. Still having outlines visible on the outpainting. Thanks for the reply. I'm on Windows with an RTX 4090.

    • @MonzonMedia
      @MonzonMedia  several months ago

      Cool! Yeah, always update when new features come out. If you're seeing seams when outpainting, try increasing the feathering or do 1-2 sides at a time. Results can vary.
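The feathering suggestion above amounts to softening the hard outpaint mask edge, so new and original pixels blend over a band instead of meeting at a visible seam. A minimal numpy sketch of the idea (not the actual ComfyUI node implementation; real nodes typically use a Gaussian blur):

```python
# Soften a binary outpaint mask so the transition from original image (0)
# to generated content (1) ramps over roughly `feather` pixels.
import numpy as np

def feather_mask(mask: np.ndarray, feather: int = 16) -> np.ndarray:
    """Repeatedly average each pixel with its 4 neighbours (edge-padded)."""
    soft = mask.astype(float)
    for _ in range(feather):
        p = np.pad(soft, 1, mode="edge")
        soft = (p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
                + p[1:-1, :-2] + p[1:-1, 2:]) / 5.0
    return soft
```

When this soft mask is used as the blend weight, the seam becomes a gradient; a larger `feather` widens the blend band.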

    • @LucaSerafiniLukeZerfini
      @LucaSerafiniLukeZerfini several months ago +1

      Yes, maybe side by side works better. Another point: I'm trying to manage background switching for a car, but the results are still awful with Flux.

    • @MonzonMedia
      @MonzonMedia  several months ago

      @@LucaSerafiniLukeZerfini the crop and stitch inpaint node might be better for that, but the Redux model can also do it. I'll be posting a video on Redux soon.

  • @sokphea-h5q
    @sokphea-h5q several months ago

    Error(s) in loading state_dict for Flux: size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]). What can I do?

    • @benkamphuis5614
      @benkamphuis5614 several months ago

      same here!

    • @MonzonMedia
      @MonzonMedia  several months ago

      Did you do an update?

    • @_O_o_
      @_O_o_ several months ago +1

      I had the same problem. My DualCLIPLoader Type was set to "Sdxl" not "Flux" ... maybe that helps haha
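For anyone hitting the same img_in.weight size-mismatch error: updating ComfyUI fixed it for one commenter above, and another found their DualCLIPLoader type was set to "sdxl" instead of "flux". In ComfyUI's API-format workflow JSON that node looks roughly like this (the encoder filenames are illustrative examples, not required names):

```python
# API-format node for the two Flux text encoders. The "type" field selects
# the conditioning layout; per the comment above, "sdxl" here can produce
# the size-mismatch error, so it must be "flux" for the fill model.
dual_clip_loader = {
    "class_type": "DualCLIPLoader",
    "inputs": {
        "clip_name1": "clip_l.safetensors",            # CLIP-L encoder
        "clip_name2": "t5xxl_fp8_e4m3fn.safetensors",  # T5-XXL encoder
        "type": "flux",                                # not "sdxl"
    },
}
```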

  • @rogersnelson7483
    @rogersnelson7483 several months ago +1

    I tried both the big model and the FP8. Nothing but really BAD results. I don't know why. I'm using an 8GB VRAM card.
    All I get is random noise around the outpaint areas, and the original image is changed mostly to noise.
    Also, should it take 6 to 10 minutes for 1 image?

    • @MonzonMedia
      @MonzonMedia  several months ago

      I'm going to do a follow up video on inpainting. What is shown here is very basic and sometimes not the best results. There are a couple other nodes that will help to get better results. Stay tuned!

    • @rogersnelson7483
      @rogersnelson7483 several months ago +1

      @@MonzonMedia Thanks for your reply. I'll be watching. Keep up the good work as usual.
      Man, I started watching you at Easy Diffusion.

    • @MonzonMedia
      @MonzonMedia  several months ago

      Whoa! That's awesome! 😁 I appreciate the support since then and now.

  • @havemoney
    @havemoney several months ago +2

    I'll go play Project Zomboid, I recommend it

    • @MonzonMedia
      @MonzonMedia  several months ago +1

      Ooohhh, will check it out! I finally played Final Fantasy VII Remake! 😬😊 Loved it!