EASY Outpainting in ComfyUI: 3 Simple Ways with Auto-Tagging (Joytag) | Creative Workflow Tutorial

  • Published Oct 4, 2024
  • In this video I will illustrate three ways of outpainting in ComfyUI. I've been wanting to do this for a while, I hope you enjoy it!
    ** Links from the Video Tutorial **
    ComfyUI Inpaint Nodes: github.com/Acl...
    Comfy Fit Size: github.com/bro...
    ComfyUI-N-Suite: github.com/Nuk...
    JoyTag: github.com/fpg...
    Moondream: github.com/vik...
    Rob Adams's video: • A method of Out Painti...
    Workflow: / tutorial-19-99178213
    ** Let me be EXTREMELY clear: I don't want you to feel obligated to join my Patreon just to access this workflow. My Patreon is there for those who genuinely want to support my work. If you're interested in the workflow, feel free to watch the video - it's not that long, I promise! 🙏
    ❤️❤️❤️Support Links❤️❤️❤️
    Patreon: / dreamingaichannel
    Buy Me a Coffee ☕: ko-fi.com/C0C0...
    Thanks to @Jay Guthrie for the support ❤️
  • Howto & Style

Comments • 20

  • @leftclot
    @leftclot months ago

    Thank you so much for turning Rob Adams's workflow into a node. I was struggling to recreate it the way he did. Appreciate it!!!

  • @businessefficiencyforshort2155
    @businessefficiencyforshort2155 4 months ago +1

    Excellent. Exactly what I've been looking for. Option 3 is working the best for me in nearly every case. The Image Pad for Outpainting Advanced node seems to work really well depending on the KSampler settings. I have several versions of this workflow now where I'm experimenting with ControlNets, but my go-to is a 'streamlined' flow based on the third option from the video. The GPT Text Sampler is pretty much indispensable. Really nice work.

  • @lance3301
    @lance3301 7 months ago +3

    Excellent content as always. Thanks for sharing!

  • @testales
    @testales 7 months ago +1

    You could try to "stabilize" the image of the 3rd method with ControlNets, e.g. OpenPose or a weak tile net (0.35) that is only active up to 0.4-0.5 (40%-50%) of the steps. LineArt works too if you remove the background first, so the line art is only a sketch of the subject. Adding limbs with low weights (foot:0.5 etc.) to the negative prompt might also help reduce the chance of stray body parts lying around. I really have to try this new "manly" padding node myself. :)
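
    As a rough illustration of that schedule outside ComfyUI, here is a minimal sketch using the diffusers library; the model names, prompt, and file names are assumptions, not the exact setup from the video:

    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

    # Weak tile ControlNet, as suggested above
    tile = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=tile,
        torch_dtype=torch.float16).to("cuda")

    canvas = Image.open("padded_canvas.png").convert("RGB")  # image already padded for outpainting
    out = pipe(
        prompt="full body photo, street scene",
        negative_prompt="foot, hand, extra limbs",  # diffusers needs compel for (foot:0.5)-style weights
        image=canvas,
        control_image=canvas,                # the tile ControlNet is conditioned on the image itself
        strength=0.9,
        controlnet_conditioning_scale=0.35,  # "weak tile net (0.35)"
        control_guidance_end=0.4,            # only active for the first 40% of the steps
    ).images[0]
    out.save("stabilized.png")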

  • @sdafsdf9628
    @sdafsdf9628 3 months ago

    Thank you very much for the exciting experiments. I tested with an AI image of a narrow idyllic alley in an Italian village: there are cobblestones, windows, doors and flowers. Unfortunately, all this creativity is lost in the outpaint. Fooocus handles it a little better, but its images come out too dark. The hard test is to enlarge an image, reduce it back to the original size in a graphics program, and then enlarge it again via outpainting. Repeating this 5 times (visually we move backwards) exposes all the weaknesses. How can we use the creativity that is in the AI during outpainting? Even with the original prompt there is no improvement. It is also impossible to infer the enlargement from the original alone; the user has to say (via text prompt) how the world should change, even if only slightly. If light comes in from the right, then a lamp must come into view at some point. If there is a shadow, there must be a person standing there at some point...
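
    That back-and-forth stress test is easy to script. A minimal sketch in Python, where outpaint() is a placeholder for whichever of the three workflows you actually run:

    from PIL import Image

    def outpaint(img: Image.Image, pad: int = 128) -> Image.Image:
        # placeholder: run one of the three outpainting workflows here
        raise NotImplementedError

    img = Image.open("alley.png")
    w, h = img.size
    for i in range(5):
        img = outpaint(img)                      # enlarge by outpainting
        img = img.resize((w, h), Image.LANCZOS)  # reduce back to the original size
        img.save(f"round_{i + 1}.png")           # compare rounds to see the detail that is lost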

  • @ViratxDoodle
    @ViratxDoodle 7 months ago +1

    Hey, if someone is currently not able to join your Patreon, are they supposed to recreate your workflow manually by watching the tutorial?

    • @DreamingAIChannel
      @DreamingAIChannel  7 months ago +1

      Hi! As I wrote in the description, even if you have the option to download it, I would rather people watch the whole video and go step by step instead of grabbing the final workflow from Patreon, so that you really learn what you are doing! 😋 I put it behind Patreon because some people don't want to, or can't, build it themselves for one reason or another, but that is not the main reason behind the Patreon.

  • @Bikini_Beats
    @Bikini_Beats 6 months ago +2

    Amazing tutorial, thank you. My question is about the Face Detailer: it's fairly easy to detail 1 person, but if you have 4 people and want to detail them individually, is there a way to do that? Have you covered that in any previous video? Thanks

    • @DreamingAIChannel
      @DreamingAIChannel  6 months ago

      Hi! I haven't covered this topic yet, but I think this may help you: github.com/ltdrdata/ComfyUI-Impact-Pack/issues/148
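
      Outside ComfyUI, the same idea can be sketched as "detect every face, then run a detailer pass on each crop separately". A minimal illustration with OpenCV's bundled face detector, where detail_face() is a placeholder for the per-face img2img/inpaint pass:

      import cv2

      def detail_face(crop):
          # placeholder: your per-face img2img / inpaint pass goes here
          return crop

      img = cv2.imread("group_photo.png")
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
          m = w // 4                                          # keep some context around each face
          y0, y1 = max(0, y - m), y + h + m
          x0, x1 = max(0, x - m), x + w + m
          img[y0:y1, x0:x1] = detail_face(img[y0:y1, x0:x1])  # detail each person separately

      cv2.imwrite("group_photo_detailed.png", img)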

  • @leftclot
    @leftclot months ago

    Error occurred when executing GPT Sampler [n-suite]:
    The expanded size of the tensor (749) must match the existing size (750) at non-singleton dimension 1. Target sizes: [1, 749]. Tensor sizes: [1, 750]
    Any solutions?

  • @boroborable
    @boroborable 7 months ago

    Really nice video! There is also comfyui-inpaint-nodes, which has outpainting and, I think, uses a different inpaint/outpaint method (Fooocus); I'd like to see it in a comparison. Also, what is the name of the custom node for the progress bar at the top?

    • @DreamingAIChannel
      @DreamingAIChannel  7 months ago +1

      Hi! I didn't know that node, I'll check it out! The name of the progress bar custom node is rgthree-comfy (github.com/rgthree/rgthree-comfy)

  • @Cingku
    @Cingku 7 months ago

    JoyTag is good, but when I used Moondream, after it finished downloading the model, this error happened:
    Error occurred when executing GPT Loader Simple [n-suite]:
    Unknown model (vit_so400m_patch14_siglip_384)
    File "D:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-N-Nodes\py\gptcpp_node.py", line 368, in load_gpt_checkpoint
    llm = MODEL_LOAD_FUNCTIONS[ckpt_name](ckpt_path,cpu)
    File "D:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-N-Nodes\py\gptcpp_node.py", line 128, in load_moondream
    moondream = Moondream.from_pretrained(os.path.join(models_base_path,"moondream")).to(device=device, dtype=dtype)
    File "D:\ComfyUI\ComfyUI\python_embeded\lib\site-packages\transformers\modeling_utils.py", line 3594, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
    File "D:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-N-Nodes\libs\moondream_repo\moondream\moondream.py", line 16, in __init__
    self.vision_encoder = VisionEncoder()
    File "D:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-N-Nodes\libs\moondream_repo\moondream\vision_encoder.py", line 98, in __init__
    VisualHolder(timm.create_model("vit_so400m_patch14_siglip_384"))
    File "D:\ComfyUI\ComfyUI\python_embeded\lib\site-packages\timm\models\factory.py", line 67, in create_model

    • @DreamingAIChannel
      @DreamingAIChannel  7 months ago

      I think it's a "timm"-related problem. If you reboot ComfyUI, do you get any error regarding the "timm" installation?

    • @Cingku
      @Cingku 7 months ago +1

      @@DreamingAIChannel Thanks, it's working now after rebooting ComfyUI. :D

    • @DreamingAIChannel
      @DreamingAIChannel  7 months ago

      @@Cingku perfect! 👍
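
      If the error ever comes back after a reboot, "Unknown model (vit_so400m_patch14_siglip_384)" usually means the installed timm predates the SigLIP ViTs. A quick check, run inside ComfyUI's Python environment (the exact timm version required is an assumption):

      import timm

      print(timm.__version__)                 # SigLIP ViTs need a fairly recent timm
      print(timm.list_models("vit_so400m*"))  # should include vit_so400m_patch14_siglip_384
      # If the list is empty: pip install -U timm, then restart ComfyUI.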

  • @rbbdzz
    @rbbdzz 7 months ago

    Hi,
    thanks for the tutorial.
    When I use denoise less than 1, I always get gray borders. What is wrong with my workflow?

    • @DreamingAIChannel
      @DreamingAIChannel  7 months ago

      Hi! Hmm, is it possible that you've attached the wrong latent to the KSampler? Do you have this problem in all three flows?

    • @rbbdzz
      @rbbdzz 7 months ago

      @@DreamingAIChannel No, the third one with the Advanced outpainting node works OK.
      I double-checked and all nodes are connected correctly, but I just noticed that if I increase denoise to around 0.8-0.85 it starts working.
      The only difference I can see is that I'm using different checkpoints (not Anything V3), but I'm not sure if that could be the reason.

    • @DreamingAIChannel
      @DreamingAIChannel  7 months ago

      @@rbbdzz Well, if all nodes are connected correctly, it can only be that! PS: In the end I used MeinaMixV11
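
      A plausible reason for the gray borders: at denoise < 1 the KSampler starts from a partially noised copy of the input latent, so a flat padded border is never fully replaced, while at denoise near 1 (or with the Advanced pad node, which prepares the border differently) it is. A toy illustration in torch, not the actual scheduler math:

      import torch

      image = torch.randn(1, 4, 64, 64)         # latent of the existing picture
      pad = torch.zeros(1, 4, 64, 32)           # flat "gray" outpaint border
      latent = torch.cat([image, pad], dim=-1)  # padded canvas fed to the KSampler
      noise = torch.randn_like(latent)

      for d in (0.5, 0.8, 1.0):
          start = (1 - d) * latent + d * noise  # rough stand-in for starting at denoise=d
          print(f"denoise={d}: border std {start[..., 64:].std():.2f}, "
                f"image std {start[..., :64].std():.2f}")
      # At d < 1 the border keeps a lower variance (the flat padding survives);
      # only at d = 1.0 do both regions start from pure noise.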