BrushNet - The Best InPainting Method Yet? FREE Local Install!

  • Published on 2 Nov 2024

Comments • 66

  • @ducttapebattleship
    @ducttapebattleship 6 months ago +10

    Thanks for still explaining everything in detail (conda, git, etc.) even after you've explained everything a dozen times already. This really helps.

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Glad it was helpful!

  • @loubakalouba
    @loubakalouba 7 months ago +7

    I see a new Nerdy Rodent video, I click instantly. It's never failed me. Thank you!

  • @MrSporf
    @MrSporf 6 months ago +2

    I thought Fooocus was good, but this thing is next level!

  • @forfreeiran8749
    @forfreeiran8749 7 months ago +4

    You got the best vids man

  • @rudeasmith5098
    @rudeasmith5098 6 months ago +1

    That shining suit of armor is suss! Love your videos though. 👍

    • @NerdyRodent
      @NerdyRodent  6 months ago

      I like the way it does the reflections too!

  • @Lamson777
    @Lamson777 7 months ago +3

    What a funny intro 😂

  • @MUZIXHAV3R
    @MUZIXHAV3R 7 months ago +1

    Excellent work as always, mate. One quick question though: where is the default output "dir" after generating the images?

  • @knightride9635
    @knightride9635 7 months ago +1

    This is really good, thanks !

  • @tonywhite4476
    @tonywhite4476 7 months ago +3

    BrushNet or Invoke?

  • @Elwaves2925
    @Elwaves2925 7 months ago +2

    Careful with those puns, Sebastian Kamph will become jealous. 🙂
    The results look great, but the one time I tried Anaconda it didn't work and seemed to be more of a mess. Then there's the hassle with the models, so I think I'll wait until it hits Forge, which shouldn't be too long. Cheers NR.

    • @NerdyRodent
      @NerdyRodent  7 months ago +2

      Anaconda is the way forward and has never let me down in over four years!

    • @Elwaves2925
      @Elwaves2925 7 months ago

      @@NerdyRodent That's cool, it may have been something on my end, like a misunderstanding on my part. I also realised afterwards that it might have been Miniconda, not Anaconda, that I was using.

  • @MarcSpctr
    @MarcSpctr 7 months ago +1

    Hey, can you do a video on AnyDoor, please?
    I read their GitHub and it seems like such a cool tool to change/inject object locations and stuff, but sadly the Gradio demo only covers injecting/transferring from one image to another.
    Can you make a tutorial on how to use all the other tools that AnyDoor provides?

  • @vi6ddarkking
    @vi6ddarkking 7 months ago +3

    Ok so I am going to ask the obvious question.
    How well does it fix hands and feet?

    • @JonnyCrackers
      @JonnyCrackers 7 months ago

      It's Stable Diffusion, so probably not very well at all.

    • @zacharyshort384
      @zacharyshort384 6 months ago

      @@JonnyCrackers ? You can *fix* hands and feet quite easily in SD with inpainting. People do it all the time...

  • @DivinityIsPurity
    @DivinityIsPurity 7 months ago +1

    Great tut

  • @bobbyboe
    @bobbyboe 6 months ago

    I wonder which is more powerful: IOPaint (which I noticed through your video last month) or this BrushNet. Judging after having seen both of your videos, I would tend to use IOPaint. But I am sure you must have compared both, or at least have much better insight?

    • @NerdyRodent
      @NerdyRodent  6 months ago +2

      There's a BrushNet-inspired PowerPaint for IOPaint now - best of both worlds!

    • @bobbyboe
      @bobbyboe 6 months ago

      @@NerdyRodent Maybe we can expect a video from you about how to use this "PowerPaint" feature you described? A comparison of the different models and how you use them in IOPaint would also be very interesting. I can't find much material on how IOPaint can be power-used...

  • @industrialvectors
    @industrialvectors 7 months ago

    What file explorer and image viewer are you using?
    Your setup looks like a mix of Linux and Windows in a good way.

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      I’m using Caja with the default image viewer 😉

  • @worldwidewebcap
    @worldwidewebcap 6 months ago

    I wasn't able to get it running. I keep getting a ton of errors, like "OSError: Error no file named diffusion_pytorch_model.bin found in directory data/ckpt/segmentation_mask_brushnet_ckpt."
    But that file ends in .safetensors, not .bin.

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      You can download the required files from their Google Drive; a quick sanity check is sketched below.
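
      For reference, that particular OSError is typically what diffusers raises when it can't load the .safetensors weights (because the file is missing or misplaced, or the safetensors package isn't installed) and falls back to looking for a .bin file. A minimal sanity check, assuming the directory layout named in the error message:

          # sanity check for the BrushNet checkpoint folder (path taken from the error message)
          import os

          ckpt_dir = "data/ckpt/segmentation_mask_brushnet_ckpt"
          print("files:", os.listdir(ckpt_dir))   # expect diffusion_pytorch_model.safetensors + config.json
          try:
              import safetensors                  # diffusers needs this to read .safetensors weights
              print("safetensors OK:", safetensors.__version__)
          except ImportError:
              print("safetensors missing: pip install safetensors")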

  • @lockos
    @lockos 7 months ago

    @NerdyRodent I completed every step of the installation without any issue, but in the end the local URL is unavailable; it fails to connect. Any idea why?

    • @NerdyRodent
      @NerdyRodent  6 months ago

      If you’ve got extra security set up, you may need to modify it to allow local connectivity

    • @lockos
      @lockos 6 months ago

      @@NerdyRodent What do you mean by extra security? I tried with the firewall temporarily disabled, checked my browser settings and flushed my DNS; that changed nothing whether I use Chrome, Firefox or Edge. Maybe there is a security setting within BrushNet that I'm not aware of; if that is the case, could you please tell me?

    • @NerdyRodent
      @NerdyRodent  6 months ago

      If you've not got any extra security running, it could be that you're using Microsoft Windows, which will fail on basic routing tasks. If that's the case, you'll need to use localhost as the address instead (a launch sketch follows below)!
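
      A minimal sketch of that localhost fallback, assuming the BrushNet demo is a standard Gradio app (the variable name "demo" and the port are assumptions):

          # at the bottom of the Gradio demo script (e.g. app_brushnet.py):
          # bind to 127.0.0.1 and browse to http://localhost:7860 instead of the
          # 0.0.0.0 address Gradio may print, which some Windows setups cannot route
          demo.launch(server_name="127.0.0.1", server_port=7860)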

  • @joe-5D
    @joe-5D 3 months ago

    Does the python method give higher quality/resolution results?

    • @NerdyRodent
      @NerdyRodent  3 months ago +1

      Vs which non-python method?

    • @joe-5D
      @joe-5D 3 months ago

      @@NerdyRodent Or I should say, running it locally vs the website/Hugging Face.

    • @NerdyRodent
      @NerdyRodent  3 months ago +1

      It's the same whether you run the program on your computer or on someone else's 😀

    • @joe-5D
      @joe-5D 3 months ago

      @@NerdyRodent Alright, just making sure. Thanks, man!

  • @arashsohi8551
    @arashsohi8551 6 months ago

    Thank you so much!

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      You're welcome!

    • @arashsohi8551
      @arashsohi8551 6 months ago

      ​@@NerdyRodent Could you help me with this if you can please?
      I tried to find out what the problem is, but found nothing :(
      I have the whole data structure and the checkpoints, including "sam_vit_h_4b8939.pth", but I don't know why this happens when I run app_brushnet.py.
      Traceback (most recent call last):
        File "D:\AI-Programs\BrushNet\examples\brushnet\app_brushnet.py", line 13, in <module>
          mobile_sam = sam_model_registry['vit_h'](checkpoint='data/ckpt/sam_vit_h_4b8939.pth').to("cuda")
        File "C:\ProgramData\miniconda3\Lib\site-packages\segment_anything\build_sam.py", line 15, in build_sam_vit_h
          return _build_sam(
        File "C:\ProgramData\miniconda3\Lib\site-packages\segment_anything\build_sam.py", line 104, in _build_sam
          with open(checkpoint, "rb") as f:
      FileNotFoundError: [Errno 2] No such file or directory: 'data/ckpt/sam_vit_h_4b8939.pth'
      [process exited with code 1 (0x00000001)]

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Looks like you forgot to download that model, or the script isn't being run from the folder that contains data/ckpt! (A quick sketch follows below.)
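
      The checkpoint path in that traceback is relative, so app_brushnet.py has to be launched from the folder that contains data/ckpt. A minimal check-and-download sketch; the URL is the official SAM ViT-H checkpoint from the segment-anything README, the rest is an assumption about the expected layout:

          # check_sam_ckpt.py (hypothetical helper): verify or fetch the SAM checkpoint
          import os
          import urllib.request

          CKPT = "data/ckpt/sam_vit_h_4b8939.pth"   # relative to the current working directory
          URL = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth"

          os.makedirs(os.path.dirname(CKPT), exist_ok=True)
          if not os.path.exists(CKPT):
              print(f"{CKPT} not found, downloading (large file, a few GB)...")
              urllib.request.urlretrieve(URL, CKPT)
          print("Checkpoint ready:", os.path.abspath(CKPT))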

  • @MilesBellas
    @MilesBellas 7 months ago

    Automatic1111 isn't deprecated?

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      Does kinda seem deprecated at this point, yes 🫤

    • @zacharyshort384
      @zacharyshort384 6 months ago +1

      I'm using Forge which is just another implementation of it.

  • @Shadowman0
    @Shadowman0 6 months ago

    I think your differential diffusion workflow doesn't use its full blending potential. Ideally, after segmenting the bear you could expand the mask and then blur the expanded area so you get a gradient around the bear (or even blur without expanding, to give it a bit of the bear to change). Depending on the blur parameters and the added padding, your results should be much better. Differential diffusion allows you to use non-binary masks and lets the grey level define the allowed changes, afaik. (A sketch of this follows below.)

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      Yup, I’ve got a node in there for you to set the blur amount though I find the depth maps are often pretty good without extra blur!
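
      A minimal sketch of the expand-and-blur idea from the comment above, using OpenCV; the kernel sizes are assumptions to tune per image:

          import cv2
          import numpy as np

          # 8-bit binary mask (255 = repaint, 0 = keep), e.g. the segmented bear
          mask = cv2.imread("bear_mask.png", cv2.IMREAD_GRAYSCALE)

          # 1) expand the mask so a band around the subject also becomes editable
          kernel = np.ones((25, 25), np.uint8)          # padding amount
          expanded = cv2.dilate(mask, kernel)

          # 2) blur so the edge becomes a gradient; with differential diffusion the
          #    grey level sets how much change is allowed at each pixel
          soft_mask = cv2.GaussianBlur(expanded, (51, 51), 0)

          cv2.imwrite("bear_mask_soft.png", soft_mask)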

  • @eammon7144
    @eammon7144 6 months ago +1

    Are all the output images low resolution?

  • @secretsather
    @secretsather 5 months ago

    Seems like the checkpoints in Google Drive are hosed!

    • @NerdyRodent
      @NerdyRodent  5 months ago

      Google drive still working here!

  • @testales
    @testales 6 months ago

    The standard models are not made for inpainting! With SD 1.5 models you can do an on-the-fly merge by basically subtracting the regular 1.5 base model from the 1.5 inpainting checkpoint and adding that difference to the checkpoint of your liking. Then you get way better results when inpainting stuff. This can probably be done with SDXL too, but since I still use SD 1.5 checkpoints most of the time, I can't tell for sure. (A sketch of this merge follows below.)

    • @zacharyshort384
      @zacharyshort384 6 months ago

      Hmm. Interesting. Do you have a vid link or tutorial you can point me to? :)
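
      A minimal sketch of that "add difference" merge, i.e. result = inpainting + (custom - base), which is the same as custom + (inpainting - base); the file names are placeholders:

          import torch

          base    = torch.load("v1-5-pruned-emaonly.ckpt", map_location="cpu")["state_dict"]
          inpaint = torch.load("sd-v1-5-inpainting.ckpt", map_location="cpu")["state_dict"]
          custom  = torch.load("my_favourite_model.ckpt", map_location="cpu")["state_dict"]

          merged = {}
          for k, w in inpaint.items():
              if k in custom and k in base and custom[k].shape == w.shape:
                  merged[k] = w + (custom[k] - base[k])
              else:
                  # e.g. the UNet's first conv has extra mask/latent input channels,
                  # which only exist in the inpainting model, so keep those weights as-is
                  merged[k] = w

          torch.save({"state_dict": merged}, "my_favourite_model-inpainting.ckpt")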

  • @LouisGedo
    @LouisGedo 7 months ago

    👋

  • @Pauluz_The_Web_Gnome
    @Pauluz_The_Web_Gnome 7 months ago

    I noticed that the generated images are really low quality...

  • @relaxandlearn7996
    @relaxandlearn7996 7 months ago

    Don't see any difference between this and normal inpainting models.