Easy Face Swaps in ComfyUI with Reactor

  • Published Nov 2, 2024

Comments • 46

  • @wronglatitude
    @wronglatitude 9 months ago +1

    Ctrl+Shift+V will paste with noodles; you can just use regular Ctrl+C to copy

    • @PromptingPixels
      @PromptingPixels 9 months ago

      Lol - I am pinning this comment! Thanks so much man!

  • @ОлегЧернышев-н5ь
    @ОлегЧернышев-н5ь 9 months ago +1

    Nice channel! Love to see SD videos, especially ones showing how it works on Mac.

    • @PromptingPixels
      @PromptingPixels 9 months ago

      Glad you like them - hope to have more on the way!

  • @ThELUzZs
    @ThELUzZs 9 months ago

    Hey, didn't know there was a Reactor node for Comfy - this is going to save my life on a new project I'm working on. THANKS!

    • @PromptingPixels
      @PromptingPixels 9 months ago

      Awesome - hope ya got some cool stuff cooking - best of luck on the project!

  • @hurrayrahqureshi2440
    @hurrayrahqureshi2440 7 months ago +1

    "install failed: ReActor Node for ComfyUI" - this error pops up every time I install the ReActor node. Please make a complete step-by-step video on how to install this without any errors.

    • @camilovallejo5024
      @camilovallejo5024 7 months ago

      I'm getting the same one...

    • @PromptingPixels
      @PromptingPixels 7 months ago

      Hey, sorry I am late getting back on this - perhaps check out the GitHub issues page: github.com/Gourieff/comfyui-reactor-node/issues. I know insightface can be a bit tricky for some reason. Hope this helps!
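      In case it helps anyone else, a common culprit is insightface missing from the Python environment ComfyUI runs in (just an assumption - the issues page above covers other causes). A rough sketch of the usual fix:
      pip install insightface onnxruntime
      # portable Windows builds bundle their own interpreter, so use that instead:
      # python_embeded\python.exe -m pip install insightface onnxruntime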

  • @andrewholdun7910
    @andrewholdun7910 4 months ago +1

    Getting my feet wet and trying not to drown with all this well-presented material. Are you using a Mac M1 as well as an Nvidia setup? It seems some of your tutorials are Mac and some Nvidia - could you go through which setup would be best, or whether one should just use RunDiffusion's online setups?

    • @PromptingPixels
      @PromptingPixels 4 months ago +1

      Hey there - happy to hear the videos are helpful.
      My personal setup is a local PC with an RTX 3060 accessed over the local network. When I boot ComfyUI or Automatic1111, I add the --listen flag to open up LAN access. This allows me to generate from my MBP - or any other device.
      Some earlier tutorials were done exclusively on a MBP.
      As far as RunDiffusion, Think Diffusion, etc. - I need to do some videos on these. They are very beginner friendly, but can become a bit pricey and have some downsides that aren't really apparent at first (e.g. you can't access a file manager without booting up an instance). I think generally the best route is to scope out a project locally and offload the heavy processing to them, as that helps reduce total usage time.
      Hope this answers some of your questions - if anything else comes up, feel free to ask!
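      For reference, a rough sketch of that setup from the command line (assuming a standard git-clone install; the ports below are just the defaults):
      # ComfyUI - expose the server to the local network (listens on 0.0.0.0, default port 8188)
      python main.py --listen
      # Automatic1111 - add --listen to COMMANDLINE_ARGS in webui-user.bat / webui-user.sh
      # Then browse to http://<desktop-LAN-IP>:8188 (ComfyUI) or http://<desktop-LAN-IP>:7860 (A1111) from the MacBook.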

  • @zengrath
    @zengrath 3 months ago

    I'm stuck installing the ReActor node for ComfyUI - I had this problem in the past too. I don't have the ReActor node, and when I check the Manager again I see it says "import failed" with no additional info. I also tried updating ComfyUI, and no luck.

  • @loquacious1956
    @loquacious1956 3 months ago

    Hi, great video. I had a look at your workflows and I have no idea how to use them. My workflows are generally a .json file; looking at yours I just seem to get a lot of "code".

  • @rudyNok
    @rudyNok 2 months ago

    How does it work that whenever you hit queue the generation doesn't start from scratch?
    Seems like you always continue from the last generated image, how?

    • @PromptingPixels
      @PromptingPixels 2 months ago +1

      Is this a general ComfyUI question? If so, if there are no changes in the workflow then it won't render a new image/repeat processes. To prevent redundant processing, I always use a fixed seed value rather than random. Hopefully this response answers your question 😅

  • @gainsum
    @gainsum 7 months ago

    Hey sir, do you know which custom node or model interferes with ReActor? I had to reinstall ComfyUI, and it takes so long to get set up that I kind of don't want to go one by one and generate an image to find out. My log says:
    Error occurred when executing ReActorFaceSwap:
    D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded.
    Basically, it's saying I don't have the right CUDA or cuDNN installed, but that's some BS lol, because I already downloaded all those files from Nvidia. I've chalked it up to a node or a model installed from those nodes.
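    For anyone hitting the same error, a quick sanity check is to ask onnxruntime which execution providers it can actually see (a rough sketch, assuming the same Python env ComfyUI runs in):
    python -c "import onnxruntime as ort; print(ort.get_available_providers())"
    # If CUDAExecutionProvider is missing, a common cause is having both onnxruntime and onnxruntime-gpu installed, or a CUDA/cuDNN version mismatch:
    pip uninstall onnxruntime onnxruntime-gpu
    pip install onnxruntime-gpu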

  • @noonesbiznass5389
    @noonesbiznass5389 6 months ago

    Any recommendation on a workflow to add some expression to the face AFTER the face swap? ReActor/Inswapper is great, but it kills a lot of the expression that I'm going for in the face.

    • @PromptingPixels
      @PromptingPixels 5 months ago

      LoRAs are going to be better for that. Another option is inpainting.
      There are a few threads on Reddit that may be worth trying here:
      www.reddit.com/r/StableDiffusion/comments/17b2yuc/found_a_cool_little_trick_to_change_expressions/
      www.reddit.com/r/StableDiffusion/comments/1b18r89/is_it_possible_to_use_reactor_and_controlnet/

    • @noonesbiznass5389
      @noonesbiznass5389 5 months ago

      @@PromptingPixels Thanks much for the links! Greatly appreciated and will check them out!

  • @NurtureNestTube
    @NurtureNestTube 7 months ago

    Thank you! Would be nice if you could do one on InstantID on M-series machines. I seem to be hitting a snag with the ONNX runtime...

  • @ezdeezytube
    @ezdeezytube 6 months ago

    Is ReActor supposed to use the CUDA cores in the GPU? Because even when opening ComfyUI via the GPU batch file, it seems like only the CPU is being used for ReActor.

  • @hytalegermany1095
    @hytalegermany1095 5 months ago

    I have the face restore model installed, but nevertheless the result is blurred/pixelated. Does anyone in the comments have the same issue?

  • @KineticGas
    @KineticGas 7 months ago

    How can I restore multiple faces using more than one source image?

  • @hitmanehsan
    @hitmanehsan 8 months ago +1

    I don't have those folders: facedetection and facerestore. Why?!

    • @PromptingPixels
      @PromptingPixels 8 months ago +1

      Honestly, this is hard to help with without any sort of readout of the error. However, you might just need to install onnxruntime on your machine. To do that, run the following command (assuming you are using conda):
      conda install onnxruntime -c conda-forge
      If this doesn't work, make sure Python 3.10 or earlier is the default Python version on your machine.
      If all else fails, check the repo for similar issues: github.com/Gourieff/comfyui-reactor-node/issues
      Please post back with either the error or whether you were able to fix it (and what commands you used) so it can help others having the same issue.
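      A rough end-to-end sketch in conda terms (the environment name "comfyui" is just a placeholder):
      conda create -n comfyui python=3.10
      conda activate comfyui
      conda install onnxruntime -c conda-forge
      python --version   # should report 3.10.x or earlier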

    • @hitmanehsan
      @hitmanehsan 8 months ago +1

      @@PromptingPixels fixed bro thanks😍

    • @PromptingPixels
      @PromptingPixels 8 months ago

      Awesome!

  • @goodie2shoes
    @goodie2shoes 8 months ago

    I'm always fighting with ComfyUI and setting up custom nodes. Some are plug and play, but others need a lot of troubleshooting. I will try this one out. Hopefully I won't be tweaking the install all night and can instead do some actual face swaps.

    • @PromptingPixels
      @PromptingPixels 8 months ago

      Yeah, conflicts are never fun. If memory serves, you'll need insightface installed on this one. If you are running Windows, you may need to check out this section of the Readme for troubleshooting: github.com/Gourieff/comfyui-reactor-node?tab=readme-ov-file#troubleshooting

  • @benoitguitard2887
    @benoitguitard2887 8 months ago

    Hi, ReActor in ComfyUI delivers an image of a much younger person with very smooth skin. Do you know how to trick ReActor into keeping or adding details, like with a LoRA or an IPAdapter? Every time I try to resample enough to see wrinkles, I lose the similarity with the model. The best I have done to add some contrast and color is to run an "extreme" face detailer after ReActor and then pass it through ReActor a second time. Not very good, but it fixes the expressions and eyes, and if you make a prompt with "large eyebrows, lipstick, eyeliner, ..." for the face detailing, the final result is better. Has anybody found other tricks?

    • @PromptingPixels
      @PromptingPixels 8 months ago

      Reactor definitely has limitations when it comes to final composition. A trained LoRA will provide superior results.
      This comment on Reddit was good for getting more natural looking results: "Put codeformer weight at 1 and restore face visibility at 0.2. Use a high resolution source image that shows pores, wrinkles, freckles etc"
      Source - www.reddit.com/r/StableDiffusion/comments/18gfj0v/tricks_to_make_face_look_less_aiy_using_txt2img/
      Minor inpainting is another approach, but best to do that in a separate app like Auto1111 or Forge.

  • @HuTrzy
    @HuTrzy 6 months ago

    Thx for tutorial

  • @croxyg6
    @croxyg6 7 months ago

    Does this work well with an animated face as the source_image? I.e., a cartoon or game character?

    • @PromptingPixels
      @PromptingPixels 7 months ago +1

      Would have to try it to know for certain - I’d imagine human faces should be fine - but a face of say a dog probably not so much

  • @MikevomMars
    @MikevomMars 9 months ago

    ComfyUI Manager says "Import failed" when trying to install the ReActor node 😐

    • @PromptingPixels
      @PromptingPixels 9 months ago

      Mind pasting in here what it says in your terminal/command prompt? Might be able to help diagnose. You can also send me a DM in the Discord (link in description) if you prefer.

  • @GrantLylick
    @GrantLylick 7 months ago

    If you want to do the AnimateDiff face swap, you will need ffmpeg as well.
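    For anyone who needs it, a rough sketch of installing ffmpeg (standard package names; adjust for your OS):
    brew install ffmpeg        # macOS (Homebrew)
    sudo apt install ffmpeg    # Ubuntu/Debian
    # Windows: download a build from ffmpeg.org and add its bin folder to PATH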

  • @nitishnaik8747
    @nitishnaik8747 9 months ago

    Which model of MacBook do you use?

    • @PromptingPixels
      @PromptingPixels 9 months ago

      M1 MBP (this demo was using a remote Windows GPU)

  • @zinheader6882
    @zinheader6882 6 months ago

    For videos, why does the process take so long?

  • @GrantLylick
    @GrantLylick 7 months ago

    The models will auto-download with installation. They did for me anyway.

  • @hiurylurian6539
    @hiurylurian6539 1 month ago

    Why are you deleting my comment?

  • @electricseaturtle
    @electricseaturtle 6 months ago

    So silly that you're demonstrating a face swap using 2 faces that are almost identical...

    • @PromptingPixels
      @PromptingPixels 6 months ago

      Definitely a valid comment - looking back I should have made a more contrasting change. Thank you for the feedback.

    • @electricseaturtle
      @electricseaturtle 6 months ago

      @@PromptingPixels Sorry if my comment sounded snippy. Sometimes I'm just in a mood... More importantly, I do appreciate all of your informative content!

    • @PromptingPixels
      @PromptingPixels 6 months ago

      Oh, no worries man! When reading what people have to say, I try to take an objective view of the comments to see what is going well and where the videos are coming up short.
      Ultimately, I am just hoping that the few minutes people take out of their day to watch a video here and learn a few things is something of value rather than a waste of time. I'll never hit the mark 100% of the time, but I hope to be a net positive to the community.
      All the best!