ComfyUI Architectural design plan workflow

  • Published on Aug 27, 2024
  • ComfyUI interior design ControlNet IPadapter workflow
    From an architectural design plan to endless design possibilities:
    you can upload an existing plan, a prompt, and example images, and receive endless variations adapted to the dimensions of the furniture and spaces you have created.
    #comfyui #stablediffusion #ipadapter #controlnet #depthanything #img2img
    Follow me: @PixelEasel
    Workflow:
    *updated workflow version (compatible with IP-Adapter V2)
    drive.google.c...
    old version
    drive.google.c...
    DepthAnyThing demo:
    huggingface.co...
    DepthAnyThing Model
    huggingface.co...
    LineArt (and other ControlNet) models
    huggingface.co...
    WidStudio
    www.wid.co.il/...

Comments • 56

  • @FlorinGN • 20 days ago

    Amazing work!

    • @PixelEasel • 20 days ago +1

      thanks 😊

  • @ysy69 • 6 months ago +1

    very cool. So the first step is to use one of the available virtual decoration tools and then bring the basic decoration into ComfyUI to generate multiple versions with different aesthetics.

    • @PixelEasel • 6 months ago +1

      yep! very useful for getting ideas

  • @VRVR-mk6cy • 2 months ago

    Good job, thank you very much.

    • @PixelEasel • a month ago

      thx!!!

  • @60tiantian • 5 months ago +1

    good job bro, I will follow your steps

    • @PixelEasel • 5 months ago

      good luck!!

  • @user-ef4df8xp8p • 6 months ago +1

    Nice...Thank you...

    • @PixelEasel • 6 months ago

      more than welcome!

  • @artiovisual6528 • 4 months ago +1

    😍 Thank you very much for your video! Very good content, honestly shared. I have a question about the LoRA: is it important? I don't have a LoRA like yours. Thank you very much, wish you all the best!

    • @PixelEasel • 4 months ago

      thanks!! it's just a LoRA for LCM, so it will be a bit faster. you can use the same workflow without it, just adjust the KSampler accordingly

  • @vojtechpiroch6461 • 4 months ago +2

    Hi, thanks for the video. What type of LoRA are you using? My results are blurry; I think it's because I'm using the wrong LoRA. Thanks

    • @PixelEasel • 4 months ago +1

      it's the LCM LoRA, just to make it a bit faster
      you can also bypass it (just change your KSampler settings to match)
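A note on that KSampler adjustment: LCM-distilled sampling runs with far fewer steps and a much lower CFG than a standard SD 1.5 pass. The values below are typical starting points, not the video's exact settings, offered here as an assumption to illustrate what "adjust the KSampler accordingly" means:

```python
# Typical KSampler starting points (illustrative, not the video's exact values).
LCM_SETTINGS = {
    "sampler_name": "lcm",  # requires the LCM LoRA to be loaded
    "steps": 8,             # LCM converges in roughly 4-8 steps
    "cfg": 1.5,             # LCM needs a very low CFG scale
}

STANDARD_SETTINGS = {
    "sampler_name": "euler",
    "steps": 25,            # typical step count without distillation
    "cfg": 7.0,             # typical CFG for SD 1.5
}

def settings_for(lcm_lora_loaded: bool) -> dict:
    """Pick sampler settings depending on whether the LCM LoRA is active."""
    return LCM_SETTINGS if lcm_lora_loaded else STANDARD_SETTINGS
```

Bypassing the LoRA and forgetting to raise the steps and CFG back up is a common cause of blurry results like the ones described above.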

  • @TheMondriam • 5 months ago +2

    Thanks for the tut. Everything works if I bypass the IP adapter group node. But if I activate it I get this:
    Prompt outputs failed validation
    IPAdapterModelLoader:
    - Value not in list: ipadapter_file: 'ip-adapter-plus_sd15.safetensors' not in []
    CLIPVisionLoader:
    - Value not in list: clip_name: 'model.safetensors' not in []
    By the way, I downloaded the 3 IP-Adapter files from Hugging Face and placed them in the "d-webui-controlnet\models" folder.
    Any guidance will be appreciated.

    • @PixelEasel • 5 months ago

      it seems that you need to download the CLIP Vision model and put it in the right directory, as mentioned in the message you got

    • @brianedgin • 5 months ago +2

      "model.safetensors" is so very generic that it seems impossible to find the correct file. There are many projects that generate files with that name. Can you provide a link to the specific one you use? @@PixelEasel

    • @TheMondriam • 5 months ago

      @@PixelEasel You were right, that was missing, but then I placed everything where it's supposed to be and it gave me this error:
      Error occurred when executing IPAdapterApply:
      Error(s) in loading state_dict for Resampler:
      size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).

    • @TheMondriam • 5 months ago

      @@brianedgin After a lot of hassle I finally made it work, but not for the sd_xl_base 1.0 model, which was my goal. I've tried lots of combinations of VAE, IPAdapter, and CLIPVision, but I just can't make it work for that XL model. Do you happen to know the recommended (VAE, IPAdapter, CLIPVision) checkpoints to work with that model, or am I missing something else? Thanks for any clue...
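The "size mismatch for proj_in.weight" errors in this thread usually mean the loaded CLIP Vision encoder doesn't match the IP-Adapter file: the SD 1.5 "plus" adapters expect ViT-H/14 image embeddings (1280-dim), while ViT-bigG/14 outputs 1664 and ViT-L/14 outputs 1024. A minimal compatibility check; the model-to-dimension table reflects commonly published values and is stated here as an assumption:

```python
# Hidden sizes of the CLIP Vision encoders commonly paired with IP-Adapter
# (commonly published values; verify against your own files).
CLIP_VISION_DIMS = {
    "ViT-L/14": 1024,
    "ViT-H/14": 1280,     # what the SD 1.5 "plus" IP-Adapter models expect
    "ViT-bigG/14": 1664,  # used by some SDXL IP-Adapter variants
}

def check_ipadapter_pairing(expected_dim: int, clip_vision_model: str) -> bool:
    """True if the encoder's embedding width matches what the IP-Adapter's
    proj_in layer expects; a mismatch surfaces as the
    'size mismatch for proj_in.weight' error quoted above."""
    return CLIP_VISION_DIMS[clip_vision_model] == expected_dim
```

The error "shape torch.Size([768, 1280]) ... current model is torch.Size([768, 1664])" corresponds to loading a ViT-bigG encoder where ViT-H was expected.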

  • @pavangandhi7808 • 3 months ago +1

    Warning: Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding... I am getting this warning, what do I do?

    • @PixelEasel • 3 months ago

      it seems you ran out of memory...
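That warning is not fatal: ComfyUI falls back to decoding the latent in tiles so that peak memory stays bounded. The idea can be sketched with a plain array transform standing in for the VAE decode (the tile size is a hypothetical value, and real tiled decoding also overlaps and blends tiles to hide seams, which is omitted here):

```python
import numpy as np

def decode_tiled(latent: np.ndarray, decode_fn, tile: int = 64) -> np.ndarray:
    """Apply decode_fn to fixed-size tiles of a 2D latent and stitch the
    results, so only one tile's worth of activations is resident at a time."""
    h, w = latent.shape
    out = np.zeros_like(latent)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # numpy slicing past the array edge simply clips, so ragged
            # border tiles are handled without special cases
            out[y:y + tile, x:x + tile] = decode_fn(latent[y:y + tile, x:x + tile])
    return out
```

If even the tiled fallback fails, reducing the output resolution is usually the only remaining option.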

  • @aipamagica1 • 6 months ago +1

    This is awesome. After loading DepthAnything, I see it in the Load Advanced ControlNet Model node, but it doesn't show up in the AIO Aux Preprocessor. Is something missing?

    • @PixelEasel • 6 months ago

      thx. did you try updating Comfy?

    • @aipamagica1 • 6 months ago +1

      I did, thanks. After that it showed up. It's also worth noting that there was a custom node that needed to be updated and was causing an error. But I got it worked out and this is an amazing workflow. Thanks for sharing :)

    • @PixelEasel • 6 months ago

      i'm glad it's working

  • @jindongzhu8459 • 5 months ago +1

    Hello! Let me ask you: how do I install Depth-Anything? Thanks!

    • @PixelEasel • 5 months ago

      I show how to install it at the end of this video. you also have the link to the model page in the description

    • @jindongzhu8459 • 5 months ago

      @@PixelEasel Thanks

  • @mechanickun • 4 months ago

    I got it working after many attempts. With the saved values, only the AIO Aux Preprocessor part did not work as set, so I set both of its selectors to none. The output image matches the inserted image and the depth map well, but the color does not apply. Is this related to the role of the AIO Aux Preprocessor?

    • @PixelEasel • 4 months ago +1

      if I understand correctly... that could be the problem. you can use any other preprocessor, but it's important to preprocess

    • @mechanickun • 4 months ago

      @@PixelEasel Thank you for your answer. I think I need to work on creating a more similar environment.

  • @cirociro-wb6bz • 4 months ago

    I am not sure where to download the DepthAnything preprocessor; I just downloaded the model.
    I got the error report below:
    Error occurred when executing AIO_Preprocessor:
    An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.

    • @PixelEasel • 4 months ago

      I think the preprocessor should be downloaded automatically.
      In any case, you can use any other node; it doesn't have to be the AIO one

  • @mechanickun • 4 months ago

    I downloaded the workflow you provided. I found and collected the various necessary parts, but I couldn't find the pytorch_lora_weights_SD.safetensors from the error message. Do you know where I can get this file?

    • @PixelEasel • 4 months ago +1

      huggingface.co/collections/latent-consistency/latent-consistency-models-loras-654cdd24e111e16f0865fba6 here you can find the models for sdxl and sd 1.5

  • @AbbaComunic. • 5 months ago

    Error occurred when executing KSampler:
    list index out of range

    • @PixelEasel • 5 months ago

      Try reloading the workflow; this sounds like a strange message for this workflow

  • @salehkamel160 • 3 months ago

    hi, after installing IP-Adapter I have everything except the Apply IPAdapter node. what should I do? I reinstalled it but nothing happened, and I updated Comfy too

    • @PixelEasel • 3 months ago

      the name has changed, you can use the IPAdapter Advanced node

    • @dario7888 • 3 months ago

      @@PixelEasel in the weight_type of the new IPAdapter version there is no "channel penalty"... which other type would be best?

  • @myrandaslipp3929 • 5 months ago

    🤦 "Promo sm"

  • @kevinwang7340 • 4 months ago

    For some reason my KSampler keeps throwing this error, even with the latest update:
    mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1344, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1314, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
    raise e
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_reference.py", line 47, in refcn_sample
    return orig_comfy_sample(model, *args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    • @PixelEasel • 4 months ago

      try resizing the sketch outside of Comfy, and make sure you're up to date

    • @kevinwang7340 • 4 months ago

      @@PixelEasel do both images need to be the exact same resolution?
      I got this after resizing it down:
      mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)
      File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1344, in sample
      return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1314, in common_ksampler
      samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
      raise e
      File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
      return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_reference.py", line 47, in refcn_sample
      return orig_comfy_sample(model, *args, **kwargs)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 37, in sample
      samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    • @PixelEasel • 4 months ago

      no, the sketch goes through Get Image Size just so the latent is the same size; the reference image for the design can be a different size and proportion

    • @JonBekk • 2 months ago

      how did you fix this error?

    • @kevinwang7340 • 2 months ago

      @@JonBekk make sure to use the right checkpoint. I noticed that was the problem.
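The "mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)" error above is the classic symptom of mixing model families: 2048 is the text-conditioning width of an SDXL prompt, while the 768x320 weight belongs to an SD 1.5 UNet, which is why switching to the matching checkpoint fixed it. A sketch of the dimension check (the widths are the standard published values for each family, stated as an assumption):

```python
# Text-conditioning widths of the common Stable Diffusion families.
TEXT_EMBED_DIM = {"sd15": 768, "sd21": 1024, "sdxl": 2048}

def conditioning_matches(checkpoint_family: str, embed_dim: int) -> bool:
    """The UNet's cross-attention expects the text-embedding width of its
    own family; a mismatch fails inside the KSampler with
    'mat1 and mat2 shapes cannot be multiplied'."""
    return TEXT_EMBED_DIM[checkpoint_family] == embed_dim
```

So an SDXL prompt (77x2048) fed to an SD 1.5 checkpoint, or vice versa, will always fail, regardless of image resolution.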

  • @zhiyuanli9196 • 5 months ago

    Hello, I downloaded the workflow, and I also have a problem when running IPAdapter.
    Error occurred when executing IPAdapterApply:
    Error(s) in loading state_dict for Resampler:
    size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
    File "E:\05-Software\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "E:\05-Software\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "E:\05-Software\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "E:\05-Software\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 769, in apply_ipadapter
    self.ipadapter = IPAdapter(
    ^^^^^^^^^^
    File "E:\05-Software\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 369, in __init__
    self.image_proj_model.load_state_dict(ipadapter_model["image_proj"])
    File "E:\05-Software\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2152, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(

    • @zhiyuanli9196 • 5 months ago

      Have you ever encountered errors like this?

    • @zhiyuanli9196 • 5 months ago

      I fixed it; I think it was because I used the wrong CLIP Vision model

    • @PixelEasel • 5 months ago

      good to know! thx

  • @tvandang3234 • 5 months ago +1

    I get this error when running it. can you assist? Thank you.
    Error occurred when executing IPAdapterApply:
    'NoneType' object has no attribute 'patcher'
    File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 751, in apply_ipadapter
    clip_embed = encode_image_masked(clip_vision, image, clip_vision_mask)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 270, in encode_image_masked
    comfy.model_management.load_model_gpu(clip_vision.patcher)
    ^^^^^^^^^^^^^^^^^^^

    • @PixelEasel • 5 months ago

      if you can use the IPAdapter with a different adapter model, then the problem is with the plus model and you need to install it... if not, try reinstalling the IPAdapter

    • @tvandang3234 • 5 months ago +1

      Thank you very much for your quick reply. When I loaded your workflow, I missed ipadapter plus model and so I got that one downloaded. I don’t have the clip vision model. I believe it has to be the safetensor model that needs to be in the clip vision folder. Do you have the link to that model?

    • @PixelEasel • 5 months ago +1

      here you have all the models you need
      huggingface.co/h94/IP-Adapter/tree/main/models
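For anyone hitting the same missing-model errors as in these threads, the files generally need to land in specific ComfyUI subfolders. A small checker sketch; the folder names are ComfyUI's default layout and the file names are the ones mentioned in this thread, both stated as assumptions for your install:

```python
import os

# Where ComfyUI looks for the files discussed in this thread
# (default folder layout; adjust the root to your own install).
EXPECTED_LOCATIONS = {
    "ip-adapter-plus_sd15.safetensors": os.path.join("models", "ipadapter"),
    "model.safetensors": os.path.join("models", "clip_vision"),  # the CLIP Vision encoder
    "pytorch_lora_weights_SD.safetensors": os.path.join("models", "loras"),
}

def missing_files(comfy_root: str) -> list:
    """List the expected model files not yet present under comfy_root."""
    return [
        name
        for name, folder in EXPECTED_LOCATIONS.items()
        if not os.path.isfile(os.path.join(comfy_root, folder, name))
    ]
```

Running such a check before loading the workflow would catch the "Value not in list" validation failures quoted earlier, which simply mean ComfyUI found an empty folder where it expected a model file.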