ComfyUI Fundamentals - Masking - Inpainting

  • Published Sep 6, 2024
  • A series of tutorials about fundamental ComfyUI skills.
    This tutorial covers masking, inpainting and image manipulation.
    Discord:
    Join the community; friendly people, advice, and even 1-on-1 tutoring are available.
    / discord
    Workflow: drive.google.c...

Comments • 152

  • @Puckerization · 1 year ago · +6

    Excellent tutorial, thank you! I've learned a lot from this series. "Set Latent Noise mask" is a revelation. I would never have thought to use that instead of the default.

  • @DarkPhantomchannel · 2 months ago

    Quick thing I found out: "IMAGE" inputs/outputs only have R, G, B channels, while "MASK" only has a single channel (the alpha).
    So "Load Image" has two outputs: "Image" (RGB) and "Mask" (A); it splits the channels that way. That's why, if you try to convert the "Load Image" image output to a mask using the alpha channel, you get an error.
    Following that reasoning, nodes like "Mix Color By Mask" take "image" as input (not mask), which gives us more freedom. They also only have r, g, b options and no alpha, because "image" data in ComfyUI doesn't carry alpha; on the other hand, if they used "mask" they would need an already-processed single-channel mask.
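
    For readers who want to see that distinction concretely, here is a minimal numpy/PIL sketch of the split described above. It is illustrative only, not ComfyUI's actual Load Image code (ComfyUI works with torch tensors), and the alpha inversion reflects my understanding of how Load Image builds its mask output.

      import numpy as np
      from PIL import Image

      def load_image_like_comfy(path):
          """Split an RGBA file into an RGB 'IMAGE' array and a single-channel 'MASK'."""
          rgba = np.array(Image.open(path).convert("RGBA"), dtype=np.float32) / 255.0
          image = rgba[..., :3]   # IMAGE: H x W x 3 -- R, G, B only, no alpha
          alpha = rgba[..., 3]    # the single channel that becomes the mask
          mask = 1.0 - alpha      # Load Image treats transparent pixels as the masked area, as far as I know
          return image, mask      # MASK: H x W, one channel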

  • @human-error · 8 months ago · +2

    I love the way you explain: you start from the basics and add complexity as you go into detail. The neatness of the nodes is also noteworthy. THANK YOU!

  • @crobinso2010 · 1 year ago · +4

    Wow would never have figured that out myself. Thanks!

  • @travotravo6190 · 8 months ago · +1

    Despite the nodes for some reason being a nightmare to install today (they don't want to install automatically through the ComfyUI Manager, and the written instructions aren't great), once I finally got it loading this is absolutely amazing and trounces other ways of masking, positioning, and inpainting. Thanks a lot for the heads up about these very cool nodes!

  • @PRLLC · 1 year ago · +6

    At the 17:04 mark, you were about to explain how to paste characters from different renders into one picture. I wish you would have shown examples! If you ever find the time, a demo of that would be great. Thank you!

    • @ferniclestix · 1 year ago · +6

      I'll be making a tutorial on 'compositing' for that process, I think. However, that doesn't help you right now.
      Grab the Masquerade nodes pack and use Crop By Mask and Paste By Mask; there are other tools that can be used in conjunction with these to direct where the masked content gets pasted.
      Hopefully that's enough to work with for you to figure it out.

    • @PRLLC · 1 year ago

      @ferniclestix Thank you; I'll experiment with that for now! I'm eagerly anticipating the new tutorial. I truly love the content you've been releasing. You're among the few teachers who can be technical without being overly complicated. Kudos for consistently releasing tutorials, especially given the fast pace of updates; it must be challenging to keep up. So, thanks once more! 🤝

  • @linkmanplays · 8 months ago

    You are a lifesaver. I kept trying and trying and messing around with the mask, and it turns out what I was missing was using a second Load Image for the mask. That set me back so much for so long.

    • @ferniclestix · 8 months ago · +1

      Happy to help; once you master masking you can do almost anything in ComfyUI.
      There's a cool technique I recently found that's not in this tutorial.
      If you make an image out of three colors, red, green and blue, you can use that single image to make 3 different masks using the Image To Mask node and setting the appropriate channels. :D
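
      A small Python sketch of that trick: one RGB image carrying three masks, one per channel. It mimics using Image To Mask three times with the channel set to red, green and blue; the code and filename are illustrative, not the node's actual implementation.

        import numpy as np
        from PIL import Image

        # hypothetical file painted with pure red, green and blue regions
        rgb = np.array(Image.open("three_color_masks.png").convert("RGB"), dtype=np.float32) / 255.0

        mask_red   = rgb[..., 0]   # areas painted red   -> first mask
        mask_green = rgb[..., 1]   # areas painted green -> second mask
        mask_blue  = rgb[..., 2]   # areas painted blue  -> third mask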

  • @Enricii · 1 year ago · +4

    That different node to set a latent noise mask is a gem. I wasn't happy with ComfyUI inpainting, but that should improve the results.
    However, I still think inpainting in A1111 is better.

    • @ferniclestix · 1 year ago

      I'd love it if there were more tools in the ComfyUI inpainting window, like brush blur, transparency and so on. I love EasyDiffusion for that reason.

    • @nickb9342 · 1 year ago

      I agree, so I use ComfyUI for faster generation and then A1111 for inpainting.

  • @randomscandinavian6094 · 7 months ago · +1

    This is the one! Seen a few tutorials here and there and read a lot of Reddit posts that didn’t make a whole lot of sense but this is solid! Thank you!

    • @ferniclestix · 7 months ago

      Happy to help :D

  • @Ranoka · 6 months ago · +1

    Thanks for the video, you really helped me understand the different approaches and the pros/cons! I'm going to watch the Masquerade video now

    • @ferniclestix · 6 months ago

      Glad it was helpful :D

  • @donkeyplay · 1 year ago · +1

    I just realized that "pipes" like kpipeloader, which I just saw on the Impact Pack custom nodes tutorial page, might run your bus a little more efficiently. And thanks for these vids, they help a lot!

    • @ferniclestix · 1 year ago · +2

      Yes, Impact Pack's pipes are cool! However, I find using a split-up bus more flexible for my purposes... plus it's easier to make a tutorial where people can see everything I'm doing. If I fill my workflow with custom nodes it creates a barrier to learning, so I generally avoid too many custom nodes if I can :) Thanks for the advice though! There are some amazing nodes in Impact Pack.

    • @donkeyplay · 1 year ago · +1

      @ferniclestix I figured you had a special reason, lol. I'm still very new obviously, but now I can inpaint in Comfy! Thanks again! 🖌

  • @calvinkao7612 · 8 months ago · +1

    The VAE inpaint was screwing me up. Thank you for showing us the right way!

  • @Satscape · 1 year ago · +1

    Well yes, I made that mistake too; then I set up a ControlNet for inpainting, and that didn't work; then I found your video!
    Thank you, liked and subscribed.

  • @AL_Xmst · 6 months ago · +1

    Thank you!
    Very useful!
    Also thanks for including the json file!

  • @EH21UTB · 1 year ago · +3

    You should be able to Ctrl+Shift+V and paste with the wiring intact for your bus.

    • @ferniclestix · 1 year ago

      That doesn't work on reroute nodes, unless there has been a recent update I'm unaware of.

    • @ferniclestix · 1 year ago

      Ah, you're right, they patched it. Nice!

  • @tuurblaffe · 1 year ago

    I love how we went from "here's some text, go figure it out" to planning and customizing processes used to optimize the whole pipeline. One could almost say it feels like Factorio from the future, which I love! The idea behind it is already a good base to build on: stuff like native Linux support, a modular design, and people being able to mod the software to their own liking. It builds community. Where we were once sharing Factorio blueprints, we're now sharing PNGs to make images. What a wonderful time to be alive!

    • @ferniclestix · 1 year ago · +1

      only until the corps find a way of ruining it. screeeeeeee, capitalism screeee! :P

  • @arnaudcaplier7909 · 11 months ago · +1

    The best tutorial on this topic. Bravo and thank you!

  • @AIAngelGallery · 1 year ago · +1

    Wow, thx for the correct method for inpainting in comfyui

  • @AnnisNaeemOfficial · 6 months ago · +1

    Thank you for this. This was a great tutorial.

  • @Dingle.Donger · 10 months ago · +1

    This is exactly what I was looking for. Thank you!

  • @tobiasmuller4840 · 1 year ago · +1

    Finally someone who addresses these topics! Thank you! For some reason this inpainting process gives me the same issues you had with the VAEEncodeForInpaint. In your example the VAEEncodeForInpaint works surprisingly well anyway. I still get inpainted areas where the subject of the new prompt isn't even visible with the Latent Noise Mask, at least when the new subject is a very different size (like mouse vs. elephant). I feel like I'd have to crop & upscale the masked area first and then put it back into position (with something like the WAS nodes). However, I haven't figured out yet how to do this with latents only.

    • @ferniclestix · 1 year ago · +1

      With the latent noise mask, if you lower the denoise amount your result should conform more to the original and is more likely to fit within the masked area.
      If you use VAEEncodeForInpaint, it gets greyer and greyer the lower the denoise.
      Latents are bad for image-type processes like cropping and such; it's better to switch to images for that.

  • @marjolein_pas · 1 year ago

    Thanks again. Your ComfyUI tutorial series is very informative and valuable.

  • @Chad-xd3vr · 6 months ago · +1

    Great tutorial; CLIPSeg looks like a useful node, thank you.

    • @ferniclestix · 6 months ago

      I'll be doing an updated tutorial using something better than clipseg soon :D

    • @Chad-xd3vr · 6 months ago

      @ferniclestix I look forward to it!

  • @ted328 · 4 months ago

    Another life-saver! Thanks so much!

  • @schrodinger5091 · 7 months ago · +1

    I'm trying to figure out how to use the 'Mix Color by Mask' node, but I'm having some trouble locating it. I've searched in the manager tab, but can't find anything with that name. Any guidance on where I can find this node or how to use it would be greatly appreciated!

    • @ferniclestix · 7 months ago

      That node belongs to the Masquerade node pack; you'd need to install that pack in order to use it.

  • @shiftyjesusfish · 1 year ago

    This was literally the best tutorial. Thank you

  • @VIKclips · 7 months ago · +1

    What if I want to edit an image I already inpainted, how do I do that? For example, with the image you have at the end, what if I want to change that one, maybe change the nest and keep the dragon? How can I do this? It doesn't work for me.

    • @ferniclestix · 7 months ago

      You would want to use image load nodes to re-run it, or add some more samplers and masks; you can pretty much extend the workflow by building it out and connecting the different outputs.
      It helps if you build the workflow yourself, but basically you can bring in a finished image using 'image load', then do a VAE encode and send that to your samplers instead of 'empty latent'.
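
      To make that wiring concrete, here is a rough sketch written as a fragment of a ComfyUI API-format prompt (a Python dict). The node IDs, filename and sampler settings are made up, and nodes "1"-"3" (checkpoint loader and the two text prompts) are omitted; the class names are the built-in ones to the best of my knowledge, so verify them against your own install.

        prompt_fragment = {
            "4": {"class_type": "LoadImage",                       # brings in the finished image
                  "inputs": {"image": "finished_render.png"}},
            "5": {"class_type": "VAEEncode",                       # image -> latent, replaces Empty Latent
                  "inputs": {"pixels": ["4", 0], "vae": ["1", 2]}},
            "6": {"class_type": "SetLatentNoiseMask",              # mask taken from Load Image's MASK output
                  "inputs": {"samples": ["5", 0], "mask": ["4", 1]}},
            "7": {"class_type": "KSampler",
                  "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                             "latent_image": ["6", 0],
                             "seed": 42, "steps": 20, "cfg": 7.0,
                             "sampler_name": "euler", "scheduler": "normal",
                             "denoise": 0.6}},                     # below 1.0 so the result stays close to the original
        }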

  • @eucharistenjoyer · 8 months ago · +1

    Very instructive video! Does ComfyUI have anything similar to "Inpaint Conditional Mask Strength"?

    • @ferniclestix · 8 months ago

      I'm unfamiliar with what you're referring to.
      For most things in A1111 there are similar things in ComfyUI; however, ComfyUI's inpainting works differently than A1111's, so they don't really behave the same way.

  • 2 months ago

    thank you

  • @excido7107 · 9 months ago · +1

    Thank you, your tutorials are actually the best. I'm trying to put something into an image but it doesn't seem to be working. I'd love a tutorial on an img2img masking sort of thing, like putting a dragon in my backyard for example 🤣

    • @ferniclestix · 9 months ago

      I cover img2img masking in the compositing video and artist inpainting video :D hope they can help.

    • @excido7107 · 9 months ago · +1

      Yes thank you I watched it and achieved what I wanted :)

  • @virtualpantherx · 10 months ago

    3:25 Because it's not a mask in a pixel image format (blue). It's only the values of the alpha channel (green), I think.

    • @ferniclestix · 10 months ago

      Alpha channels are still stored as a black-and-white pixel image; the difference is that it's a single channel with no RGB.

  • @FranckSitbon · 22 days ago

    nice tutorial thank you

  • @silkypixel · 1 month ago

    Thank you so much!!

  • @ns_the_one · 6 months ago · +1

    Great video. Thank you a lot

  • @ArielTavori · 1 year ago · +1

    Thank you so much, this is incredibly helpful. I don't understand why they did not take the open-source node code from Blender, with years of polish behind it and tons of functionality and extensions available for it. Instead there is this absurdly unconventional interaction (Ctrl to select and Shift to drag? Really? Wow guys!..). I'm amazed at the utter lack of documentation in this age of writing documentation...
    There's so much I admire and respect about StabilityAI, and yet I still can't see the full file name of a file on Hugging Face on mobile? Come on! There are paid professionals with budgets and timelines behind these tools, right? I don't understand why I can't even find a document explaining how to code a node for ComfyUI. Honestly quite absurd.
    I get that this is early development, but these seem like very strange priorities.

    • @ferniclestix · 1 year ago

      I don't do coding, but I believe I've seen someone reference a node template, if that helps; no idea what it is though. Someone was using ChatGPT to make nodes.

    • @swextr · 1 year ago · +1

      As for the Blender Nodes - ComfyUI is mainly made in Python - not C (as blender is), so it'd be difficult to take and/or use code from Blender itself. But I do agree Blender's nodes are better, and ComfyUI could definitely have taken more inspiration from them - because everything in Comfy just feels messy. Reroute nodes are awful, IMO.

  • @RyanAdorben · 4 months ago

    Wow great video very helpful!

  • @TheArghnono · 1 year ago · +1

    Very useful! Thank you.

  • @trobinou47 · 1 year ago · +1

    Really useful! Thanks.

  • @zoemorn · 6 months ago · +1

    I keep trying to incorporate this into an img2img workflow (so no starting KSampler; an image loader and a mask loader instead), but the results aren't coming out at all. I can tell it's trying to affect the masked area, but it's doing so in undesirable ways, not seeming to take the big picture into account, which you talk about and provide a solution for in a non-img2img way, so I'm not sure why it doesn't work in img2img (or I've probably got something wrong in my flow).

    • @ferniclestix · 6 months ago · +1

      Make sure you are masking correctly and that everything is plugged in correctly.
      For the most part img2img is no different from standard, but you have to make sure you treat it as the first step in the workflow (the loaded image should replace the starting sampler at 1.0 denoise).

    • @zoemorn · 6 months ago

      @ferniclestix So blessed to get a reply! So far it's been interesting to compare VAE inpainting vs. SetLatentNoiseMask. I've got a flow that runs both options in parallel with the same input image and mask to compare; sometimes they're very close and sometimes they're not, and so far I don't see a pattern, it just depends on the seed and other variables I guess. I need to review your vids some more around cleaning up images; sometimes the masking lines are noticeable even though the generation is good and just needs cleaning up, and I believe you mention being able to do that by sending it back through another KSampler or similar. I just need more time to dive into the videos and play around.

    • @ferniclestix · 6 months ago · +1

      I probably should do an in-depth video on getting good results from inpainting... which is kind of a skill you have to learn. But it's really dependent on your method of approach, and with so many different ways of doing it... it's kinda hard to tailor a good tutorial for it.

    • @zoemorn · 6 months ago

      @@ferniclestix Understandable, we don't have time to do everything. Thanks for what you can give time to.

  • @Commodore_1979 · 7 months ago

    Excellent tutorial, thanks! Sadly the "Image Blend by Mask" node is missing in my ComfyUI... I have searched for it on the web and in the manager with no luck... Where could I find it? Thanks again!

    • @ferniclestix · 7 months ago

      I'm fairly sure it is a WAS suite node.

  • @johnriperti3127 · 2 months ago

    We miss your videos

  • @trongnguyenquoc2940 · 5 months ago · +1

    i love you

  • @BrandosLounge · 10 months ago

    I'm trying to do a group photo using Roop, and I was able to draw my friend's face, but I am having trouble drawing a long beard on one guy without it targeting the wrong person. Any advice on how I could fix that?

    • @ferniclestix · 10 months ago

      ReActor lets you pick which face to replace; it's 'Roop-like'. I think I cover it in the face restoration tutorial.

  • @gb183 · 11 months ago · +1

    Many thanks!!😀

  • @user-ot6mg1tu3e · 11 months ago · +1

    Thank you very much for this tutorial :)

  • @martinniesyto1656 · 9 months ago

    Very helpful, thanks! Unfortunately I cannot find the CLIPSeg node in the search window after installing it with the manager, which shows that it is installed. Do you happen to have any clue why?

    • @ferniclestix · 9 months ago · +1

      Unfortunately I cannot help with debugging nodes.
      My advice: head to the GitHub page of the node in question and make a bug report.
      I would love to help, but there are simply too many possible setups for a ComfyUI install, and as a result I'm not really able to devote time to this on top of my work and tutorials.
      About CLIPSeg though: like some other nodes it relies on external modules to do its magic, and when those break they often break the node. Recently I think the CLIPSeg and BLIP implementations have been a little glitchy.
      A second possible issue: make sure you restart the server after an install and reload your ComfyUI tab, or it won't pick up new nodes.

    • @martinniesyto1656 · 9 months ago

      @ferniclestix Thanks for your suggestions! Today, after uninstalling and reinstalling it several times over a few days, CLIPSeg seems to show up in the search bar... perhaps they fixed something.

    • @ferniclestix · 9 months ago

      Yes, there are people working behind the scenes all over the place to get these kinds of things sorted out. It's a good idea to keep an eye on the GitHub pages of nodes you use, or better yet find one of the places where all of us AI artists hang out and chat; the ComfyUI subreddit is a great place to get info.

  • @gb183 · 11 months ago

    I have downloaded your workflow and it's very useful to me, many thanks. Will you make a tutorial on fixing hands? It's very hard for me to fix hands. Please put together a fix-hands tutorial, thank you again.

    • @ferniclestix · 11 months ago · +1

      Try the face restore tutorial; one of the nodes there can do hands, although you may have to use CLIPSeg to find the hands.

  • @AkshayTravelFilms2 · 10 months ago · +1

    Thanks for the workflow.

  • @nothingrhymeswithferg3744 · 9 months ago

    Great tutorial, but I can't find the CLIPSeg node. Is this a custom install? Where can I find it?

    • @nothingrhymeswithferg3744 · 9 months ago

      Oops, I just got to the end of the video and found my answer... great tutorial, thank you!

    • @ferniclestix · 9 months ago

      :D Must have missed that issue; I try to mention important nodes near the start. Eh, I'll do better in future :)

  • @0A01amir · 1 year ago

    Very well done, thank you.

  • @mstew8386 · 1 year ago

    I am having trouble finding where to put the masking.json. I usually use the png images to load workflows.

    • @ferniclestix · 1 year ago · +1

      Click Load in the ComfyUI interface and go find the masking.json.

    • @mstew8386 · 1 year ago

      @ferniclestix Thank you!

  • @LeKhang98 · 11 months ago

    Wow, how did you discover such a useful and important trick with the Set Latent Noise Mask node? I have a few questions:
    - How did you make the KSampler node show preview images like that? I can only use Save/Preview Image nodes to see the final image.
    - How do I invert a mask? For example, I want to keep subject 1 and change everything else.
    Thank you very much for sharing.

    • @ferniclestix · 11 months ago · +1

      There is an invert node in... WAS suite? It lets you invert an image; you can convert a mask to an image, invert it, and plug it back in, and it will do the opposite of what was masked.
      Alternatively, using the word "background" in CLIPSeg can be successful.
      How to do live previews: th-cam.com/video/hdWQhb98M2s/w-d-xo.html
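
      For what it's worth, inverting a mask is just one minus the mask; a tiny numpy sketch of the idea (the built-in Invert Mask node mentioned in the reply below does effectively this):

        import numpy as np

        mask = np.zeros((512, 512), dtype=np.float32)   # example values: 1.0 = masked, 0.0 = keep
        inverted = 1.0 - mask                           # now everything *except* the original selection is masked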

    • @LeKhang98 · 11 months ago

      @ferniclestix Nice, thank you again, I should try them ASAP.

    • @LeKhang98 · 11 months ago · +1

      @ferniclestix I just found out that there are an Invert Image node and an Invert Mask node and they are working great (I think they are default nodes of ComfyUI). Thank you very much.

  • @carstenli · 1 year ago

    Great tutorial, thank you.

  • @Mootai1 · 6 months ago · +1

    You're pretty bad at baby dragons :D ... but your tutorial is very well explained and interesting. Thanks a lot!

  • @juanchogarzonmiranda · 11 months ago · +1

    Thanks a Lot!

  • @jhPampoo · 5 months ago

    Is there any tutorial on inpainting that covers blending the mask so the transition between the masked area and the background is seamless? I find the transition between the mask and the outside is harsh.

    • @ferniclestix · 4 months ago

      I've found fixes for this kind of thing, but usually it's related to the model/inpaint method being used. You can do things like blurring the mask to try to smooth the edges, but this can be unreliable with certain inpaint methods.
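
      A quick sketch of that mask-blurring idea: feather the mask edge before inpainting so the transition is softer. This is illustrative Python/PIL, not a specific ComfyUI node (inside ComfyUI you would reach for a mask blur/feather node instead), and the filename and radius are made up.

        import numpy as np
        from PIL import Image, ImageFilter

        mask_img = Image.open("mask.png").convert("L")                    # single-channel mask, white = inpaint
        feathered = mask_img.filter(ImageFilter.GaussianBlur(radius=8))   # soften the hard edge
        mask = np.array(feathered, dtype=np.float32) / 255.0              # back to 0..1 for use as a mask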

    • @jhPampoo · 4 months ago

      @ferniclestix Thank you.
      Differential inpainting, which has just been released, may do a great job; I'm going to try it.

  • @froztbytesyoutubealt3201 · 1 year ago · +1

    How do you inpaint at full resolution?

    • @ferniclestix · 1 year ago · +1

      Hopefully the reply on Reddit makes sense. I'm working on an example image at the moment to show how it might be achieved.

  • @olexryba · 10 months ago

    This works very nicely! But it is also quite slow for larger images, even when the mask is very small. I suspect that it denoises the full image and then leaves only the mask. Is there a way to constrain denoising only to the masked area, plus some padding for additional info (like it can be done in a1111)? I imagine facedetailer nodes do something like that, because they operate much faster with smaller masks.

    • @ferniclestix · 10 months ago

      I think I use the Impact pack detailer in my inpainting for artists video which just denoises a selected area.

  • @mikealbert728 · 9 months ago

    Thanks for this. Can you explain the setup in comfyui for inpainting models?

    • @ferniclestix · 9 months ago

      Inpainting model setups really just use the inpainting model in the checkpoint loader. You can pair it with VAE Encode (for Inpainting) or Set Latent Noise Mask; it's really up to you. Generally I find inpainting models less useful than normal ones, but it depends what I'm doing.

  • @carll.2330 · 1 year ago

    This is great but why can't I find a single tutorial/workflow for outpainting? Literally can find 0 discussions on ComfyUI + outpainting. Is it just as simple as adding extra whitespace to the image in Paint and painting a mask over that area? Because I tried that and it did nothing. Please help.

    • @ferniclestix · 1 year ago

      I mean, you would place your image inside a larger image, then mask the white space and sample it using inpainting... basically. This assumes you don't have an outpainting node or special workflow.
      I may cover this in my compositing tutorial if I have time; currently it's going to be over 20 minutes.
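
      A sketch of that manual outpainting setup: paste the image onto a larger canvas and build a mask covering only the new border, then inpaint that area. Sizes and filenames are placeholders; I believe ComfyUI also ships a pad-image-for-outpainting style node that does this for you.

        import numpy as np
        from PIL import Image

        img = Image.open("original.png").convert("RGB")
        pad = 128                                                # pixels to outpaint on each side
        w, h = img.size

        canvas = Image.new("RGB", (w + 2 * pad, h + 2 * pad), "white")
        canvas.paste(img, (pad, pad))                            # original sits in the middle

        mask = np.ones((h + 2 * pad, w + 2 * pad), np.float32)   # 1.0 = area to inpaint (the new border)
        mask[pad:pad + h, pad:pad + w] = 0.0                     # keep the original pixels untouched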

  • @gaulllum8802 · 1 year ago · +1

    Which KSampler do you use in order to see output while generating?

    • @ferniclestix · 1 year ago · +2

      It's a command-line argument: open the .bat file that starts ComfyUI and add --preview-method auto to the end of your command line. Restart the server and your samplers will have a preview.

    • @gaulllum8802 · 1 year ago

      @ferniclestix Thanks!

  • @SuperFunHappyTobi · 1 year ago

    I've downloaded the workflow and I am following the tutorial, but I can't seem to get the workflow to actually inpaint; it just renders a new image every time. No clue why.

    • @ferniclestix · 1 year ago

      Check the mode below the seed on your sampler; you'll want to set the one you are inpainting on to 'fixed' so the mask doesn't have to be re-built every time.

    • @royimiron · 1 year ago

      @ferniclestix First of all, thanks for the tutorial. I have the same problem: the KSampler is set to fixed and it still renders a new image every time.

    • @ferniclestix · 1 year ago

      Huh, that's strange. Are you sure it's plugged into the right places? I may take a while to respond; YouTube doesn't like showing comments in the depths of other comments :P If you have more questions, post a fresh comment and I'm more likely to spot it.

  • @w_chadly · 1 year ago

    Could you do an in-depth tutorial on CLIPSeg? Every time I use it with cut/paste-by-mask, it creates this fuzzy mask that pastes with non-100% opacity, making the thing I cut out with CLIPSeg look "ghostly" when I don't want that, and I'm struggling to figure out how to fix it.

    • @ferniclestix · 1 year ago

      I mean, with CLIPSeg there's not a huge amount there: put in a keyword, set the sensitivity settings, and output a mask... My advice: pull preview nodes from all your CLIPSeg outputs and see if they look unusual, off-colored and so on.
      It could also be a downstream issue somewhere and not related to CLIPSeg.
      If you want to get in touch via Reddit, I'll take a look at your workflow.

  • @Resmarax · 1 year ago

    Thanks for addressing this. But how can I inpaint using the latent noise mask on a PNG I created earlier?

    • @ferniclestix · 1 year ago

      Image Load to load the PNG, VAE Encode to get a latent, Set Latent Noise Mask, then the sampler. Easy.

    • @Resmarax · 1 year ago

      @ferniclestix Alright, thanks!

  • @hleet · 1 year ago

    An inpainting video for hands, please. How can I use the "Set Latent Noise Mask" if it needs a "samples" input? I mean, the normal way would be to feed an image with a drawn mask on it straight into a VAE Encode (for Inpainting), which accepts pixels, VAE, and mask inputs. Do I have to redo the whole image process with a fixed seed and route it out to a Set Latent Noise Mask node?

    • @ferniclestix · 1 year ago

      I'd completely remove the 'VAE Encode (for Inpainting)' node because it's not needed and not fit for purpose. Instead, just use the Set Latent Noise Mask node.

  • @TrungCaha · 1 year ago

    I can't find "Mix Color By Mask". Where can I find it? Thanks for making this great video series.

    • @ferniclestix · 1 year ago

      It belongs to the Masquerade node pack.

    • @TrungCaha · 1 year ago

      @ferniclestix Thanks.

  • @RuliKoto · 1 year ago

    Great vids you have here, they help a lot, thanks!
    Just wondering if you can do a video about Roop / face swap in ComfyUI.

    • @ferniclestix · 1 year ago

      This will be the topic of my next video, although I haven't tried Roop specifically, so I'll have to do some more research.
      It will probably take me a couple of days.

    • @ferniclestix · 1 year ago

      Annnnd just finished it, th-cam.com/video/FShlpMxbU0E/w-d-xo.html

    • @RuliKoto · 1 year ago

      Been watching it, thank you very much, really appreciate it. Will wait for more great videos 🤟🤟🤟

  • @ioio7408 · 11 months ago

    Looks good, but in the workflow you shared I don't get any image in my KSampler like in your video.

    • @ferniclestix · 11 months ago

      I show how to set that up in this tutorial: th-cam.com/video/hdWQhb98M2s/w-d-xo.html

  • @mstew8386 · 1 year ago

    How are you able to see the rendering going on in the KSampler? Mine doesn't do that at all. I am running on super low VRAM; could this be the reason?

    • @ferniclestix · 1 year ago · +1

      th-cam.com/video/hdWQhb98M2s/w-d-xo.html from the basic introduction tutorial

    • @mstew8386 · 1 year ago

      @ferniclestix You are amazing, THANK YOU SO MUCH!!! It works perfectly.

  • @musicandhappinessbyjo795 · 1 year ago

    The tutorial is just so awesome.
    What kind of change would you recommend if I wanted to inpaint using control net?

    • @ferniclestix · 1 year ago · +1

      I haven't made much use of ControlNet so far in ComfyUI; I used it plenty in A1111 though.
      I think because ControlNet acts on the conditioning of the image, you should be able to use it in conjunction with this without any problem. Just make sure it's applied to the KSampler and it will probably influence the output correctly.

    • @musicandhappinessbyjo795 · 1 year ago · +1

      @ferniclestix Hello, here is an issue I am facing. This workflow works when you feed the latent from a KSampler directly into the Set Latent Noise Mask node.
      But if I use a pre-existing image and plug it in using VAE Encode, it doesn't work at all; it just gives me the same image back. What might be the cause?

    • @ferniclestix · 1 year ago · +1

      www.reddit.com/r/comfyui/comments/15ldwds/can_anyone_help_me_with_this/ I was helping someone with similar issues; have a look in there and you may find a solution.

    • @musicandhappinessbyjo795 · 1 year ago

      @ferniclestix 😂😂 I am actually the guy named darkmeme 9; I accidentally replied to your post. Also, I have no issue with image-to-image, but the moment I use inpainting with it is when the issue happens.

  • @brmawe · 1 year ago

    Wow!... all that for inpainting? Insane, but nonetheless a great tutorial; keep up the good work.

  • @Nevalopo · 1 year ago

    What GPU are you using? Seems like fast prompts

    • @ferniclestix · 1 year ago

      A 2080 with 8 GB, and 42 GB of system RAM. The graphics card is dedicated to SD, as there is no monitor plugged in and it isn't being used by Windows; this makes it quite fast. Additionally, these are 512x512 images and there is no upscaling.

  • @dougmaisner · 1 year ago

    informative!

  • @USBEN. · 1 year ago

    Dude, your audio is too low even on full volume without headphones. Please up them decibels so I can hear it on speakers.

    • @ferniclestix · 1 year ago · +1

      I'll see what I can do; I've got a microphone on order. I'd like to add, it's as loud as it goes lol.

  • @spierdlajify · 7 months ago

    How do I load my own photo into this workflow?

    • @ferniclestix · 6 months ago · +1

      With an Image Load node; replace the first sampler, basically.

  • @8561 · 7 months ago

    Do you know how to avoid artifacts being created around a masked-out subject, let's say when inpainting a background? For me, VAE Encode for Inpaint is working but adding artifacts, while SetLatentNoiseMask is not really inpainting much of the background. Maybe too little noise?

    • @ferniclestix · 7 months ago

      For me it's usually a matter of using Set Latent Noise Mask and setting the denoise lower. Also, making use of greyscale masking can really help; unfortunately ComfyUI uses binary masking, which is what causes the artifacting issues.

    • @8561 · 7 months ago

      @ferniclestix I'll look into greyscale masking. Right now I am trying to generate backgrounds around a masked-out subject. I think SetLatentNoiseMask is struggling to have enough noise to inpaint the larger area; I also tried injecting more noise but still not much luck. VAE Encode for Inpaint has enough noise for larger areas but definitely too many artifacts. Would you suggest A1111 for this type of inpainting? What are the fundamental differences?

    • @ferniclestix · 7 months ago

      ComfyUI is fine.
      The thing is, in ComfyUI you do have to do some extra steps after inpainting to fix up the inpainted area; this usually involves a low-level denoising pass.
      I've seen people try denoising the area around the inpainted region and all kinds of things, and you can get very complex, but generally speaking a simple 0.10 denoise pass tends to fix it.