Clothes Swapping Made Easy! ComfyUI

  • Published on Dec 18, 2024

Comments • 89

  • @tunghoang4161 8 months ago +6

    Love it, this led me to ComfyUI.

    • @creatorbrew 8 months ago +2

      ComfyUI is fun, great that you’ll get to try it!

  • @alecubudulecu 9 months ago +2

    Loved this. Thank you!

  • @HosseinAhmadi-x3n 10 months ago +1

    It is GOLD!!! Virtual try-on for free

  • @Spinaster 11 months ago +2

    Thank you for sharing the workflow, but the result isn't the same as yours; it changes the jacket into something completely different from the loaded jacket image.
    I followed all your steps and loaded the same models, except for ip-adapter_sd15.safetensors; it seems something is missing in your tutorial.
    Also, I don't understand how the masked latent is interpreted by the IPAdapter... how will it recognize the area to fill?
    The KSampler is 10 times slower than usual, even after all the models are loaded in memory (16GB GTX video card). Should I use an inpainting version of the SDXL model?
    Thanks.

    • @creatorbrew 11 months ago +1

      Hi, do you have a way to share your result/workflow? Yesterday, on a different project, I ran the same workflow on a 2060 and on a 4050 and got different visual results, even though everything was fixed and the same models and images were used.
      Segment Anything can be time-consuming the first time it runs; after that the mask is created, and then the rest of the nodes can be adjusted.
      You can try inpainting it on your image and then bypassing the Segment Anything node.

    • @creatorbrew 11 months ago +1

      The latent acts as the dream: with so much information given about what the KSampler will dream up, the only place it can be creative is the masked-out area. The IPAdapter pushes the interpretation of the new jacket forward into the KSampler, where the latent of the inpainted jacket (via Segment Anything) meets the model to create the effect.

  • @SamhainBaucogna 6 months ago

    Thanks, very useful, it works. One question: wouldn't it be possible to create the mask by hand? Would that be more convenient? Greetings, great tutorial, explained in a simple and effective way.

    • @creatorbrew 6 months ago +1

      Hi, thank you for watching and enjoying. Yes, you can use the image and then wire in the mask as an image to skip Segment Anything. It would simplify things. Segment Anything is better suited for when you want your images to be batched and don’t have the time to mask.

    • @SamhainBaucogna 6 months ago

      @@creatorbrew Thank you for your very kind reply, best regards.

  • @claudioestevez61 9 months ago +1

    Like number 100 👍

  • @davidwang6541 10 months ago +1

    How can I directly use your workflow, or can you provide a customized version if I pay?
    Is it possible to ask a model photo to hold a specified product, like a new-brand thermos cup, trained from a dataset with various model poses? Can that be done with ComfyUI?

    • @creatorbrew 10 months ago +2

      Hi - there are two approaches you can combine with the technique in the video: in-painting from a reference, or Lora training. Or forcing an image via IPAdapter or the diffusion itself. I think Lora training would be the first good step to take. A Lora forces the visual by feeding in a limited number of images to refine a model (SDXL) to give back a result. What's the best way to connect with you to talk further? I want to make sure I'm 100% understanding your need.

    • @davidwang6541 10 months ago +1

      @@creatorbrew I have tried Leonardo AI dataset training for a product, but the result is deformed, and it can't be held by a model. I have no idea how to connect both sides, so I think only ComfyUI looks promising in its accurate control, but I don't understand it at all; it's somewhat complicated for me to tweak, and it takes much time to learn. If you can help, I would like to pay and get the result directly. Sincerely.

    • @creatorbrew 10 months ago +1

      How can I contact you? And have you looked at my Lora training video yet?

  • @ckhmod 2 months ago

    Is there a method to use a brush to paint your own mask?

  • @HosseinAhmadi-x3n 10 months ago +1

    I can't find the CLIP Vision model "SD1.5\model.safetensors".
    Can you share its Hugging Face page, please?

    • @creatorbrew 10 months ago +2

      huggingface.co/h94/IP-Adapter/blob/main/models/image_encoder/model.safetensors
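
      (For reference, a minimal sketch of grabbing that file from the command line; this assumes the standard Hugging Face /resolve/main/ direct-download form of the page above, and that the file is then placed under ComfyUI/models/clip_vision/SD1.5/:)

      wget https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors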

  • @klopp6308 7 months ago

    Hey, thanks for the amazing video! Just a simple question: how do the VAE encoders and decoders work without a VAE? For me, the encoder and the decoder don't work like they did in your video, so the image with the mask superimposed doesn't show up.

    • @BriO853 5 months ago

      I think he's using a model with a baked-in VAE, so it's not loaded by a separate node.

  • @abellos 6 months ago +1

    I can't find the Apply IPAdapter node, and I downloaded the model ip-adapter_sdxl_vit-h.safetensors, but when I start the workflow I get "undefined" in the Load IPAdapter Model node.
    As with every Python script, none of it works on Comfy either.

  • @zaraarmalk1084 4 months ago

    I am unable to find sd1.5/model.safetensors; can you please provide the link where the safetensors file is hosted?

  • @zengze4858 11 months ago

    Hello, uploader. I have two questions for you. First, what format should the original image of the clothes be in? Second, the workflow you shared can only produce preliminary images. These images are very rough. How can I optimize them using ComfyUI?

    • @creatorbrew 11 months ago +1

      Hi - PNGs are good to start with, to avoid JPG compression artifacts (boxes).
      This demo workflow is the simple level, to show how to create tasks and link them together so you can keep on growing.
      For further refining within ComfyUI, the next step would be linking in an upscaler and a detailer node.
      My source image of the person isn't that hi-res; starting with better source material will help.
      The other way to go is to start making a daisy chain of the mask process shown in the video, such as 1) taking the person as the final output and running Segment Anything with the prompt "person" or "character", connecting that to 2) an invert mask to knock out the background, 3) connecting that chain to an inpaint VAE, and 4) dropping in a background (image or via prompt). This will get rid of the stray pixel smudges around the person.
      Then, to focus back on the jacket, copy and paste the jacket workflow for masking and rerun it with the newly cleaned-up-background version of the person. This could help eliminate some of the rough pixels around the wrist.
      What I have been doing is breaking the outfit into pieces and combining those masks together, with a focus on each area. Also, for something quick, I take the general output and do retouching in Photoshop (or photopea.com if you want to use a free image editor): using ComfyUI as a way to get the general position/pose of the person correct with the current outfit, then correcting imperfections on the pixel layer there.
      ComfyUI, Automatic1111, and Photoshop are tools that are part of a workflow. Sometimes it is "quicker" to dip into a few tools to get refined output than to try to use just one tool. In theory, I could keep using masking and inpainting to wire together a big network of nodes that automates the process; I've done this when creating kiosk experiences with a front end for Stable Diffusion. But if it is for personal or production art, then using each tool's strengths to save time is the way to go.

    • @zengze4858 11 months ago

      @@creatorbrew Thank you for your answer. I have learned how to remove the background from a clothing image and make it transparent, so it can be used as the original image for the clothing. Then I tried to get a high-definition optimized image through SD upscaling, but the effect was not ideal. I tried many combinations, and I feel it seems impossible to get the ideal image with only ComfyUI. Do I really need to use several programs together to get the ideal image? Is there any way to get a very good image with only ComfyUI?

  • @valorantacemiyimben 5 months ago

    Hello, I am getting the following error, how can I fix it?
    When loading the graph, the following node types were not found:
    IPAdapterApplyEncoded
    Nodes that have failed to load will show as red on the graph.

    • @Avalon1951 5 months ago

      The IPAdapter Apply node has been replaced with IPAdapter Advanced, but the problem there is the image connection, which I am running into.

  • @sunnytomy 9 months ago

    Hi, I just tested the workflow but didn't get good results compared to yours. I noticed the garment image has a gray background; does that mean we need to remove the background of the clothing image before using the IPAdapter? Thanks for this inspiring workflow.

    • @creatorbrew 9 months ago

      Hi, can you share your result via GitHub or some other place? I'm about to make another tutorial with little tweaks to improve it. But if you have a business reason (other than hobby), reach out to me directly. What I share on YouTube are the "starts" of things, as I perfect the final outcome for clients.

    • @sunnytomy 9 months ago

      @@creatorbrew Thanks. Yes, I would love to share some of the bad results. Are there other ways than GitHub that I can share the output pictures with you? Cheers.

    • @creatorbrew 9 months ago

      @@sunnytomy Start a Reddit thread in the Stable Diffusion subreddit, then drop the link back here.

  • @shashanksrivastava8638 10 months ago +1

    I can't use Segment Anything on my system... please help me out.

    • @creatorbrew 10 months ago +1

      What is the issue you are experiencing? Have you 1) updated your ComfyUI and 2) downloaded the models?

  • @SasukeGER 9 months ago

    The clothes are all blurry for me, or placed in the center and not covering the arms... any tips?

  • @rakztagaming3233 1 month ago

    I've got a very greenmind idea about your workflow🤭🤭🤭,

  • @sirjcbcbsh 4 months ago

    I'm so confused about which models I should choose for the SAM Model Loader and the GroundingDino Model Loader... I just chose them by chance.

    • @sirjcbcbsh 4 months ago

      What GPU are you using? I'm using an RTX 4070 Ti Super, but this still occurred:
      Warning: Ran out of memory when regular VAE encoding, retrying with tiled VAE encoding.
      Warning: Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding.

  • @fintech1378 10 months ago +1

    awesome

  • @jamesyang4026 9 months ago

    Hi! I tried what you did, but the results weren't really good. Is there a way to get in touch so we can discuss, maybe? I appreciate your video!

    • @creatorbrew 9 months ago

      Start a Reddit thread and link to it here, then we can chat.

  • @pedroquintanilla 9 months ago

    I get red IPADAPTER and CLIP VISION nodes and I don't know how to solve this problem. Could you help me with it?

    • @creatorbrew 9 months ago

      RED = those nodes are not installed. See the other response about installing missing nodes.

  • @ogamaniuk 8 months ago

    Is it possible to make the jacket look exactly as it is in the source image?

    • @creatorbrew 8 months ago

      Lora training; it requires multiple images. See "Train ComfyUI Lora On Your Own Art!":
      th-cam.com/video/5PtLQSFrU38/w-d-xo.html (train on the jacket instead of a character as in that video).

  • @ИльяКачеловский-й1д 7 months ago

    Which folder should I paste the downloaded files into?

    • @creatorbrew 7 months ago

      Which files? The image is dragged onto the ComfyUI interface (you probably knew that); the models can be placed in the Stable Diffusion models folders.
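
      (A minimal sketch of where those files commonly live, assuming a default ComfyUI layout; the last two folders assume the segment anything custom node, and older IPAdapter plus versions read from custom_nodes/ComfyUI_IPAdapter_plus/models instead of models/ipadapter:)

      ComfyUI/models/checkpoints/     <- sd_xl_base_1.0.safetensors
      ComfyUI/models/clip_vision/     <- SD1.5/model.safetensors
      ComfyUI/models/ipadapter/       <- ip-adapter_sdxl_vit-h.safetensors
      ComfyUI/models/sams/            <- the SAM checkpoint (sam_vit_h)
      ComfyUI/models/grounding-dino/  <- GroundingDINO_SwinT_OGC config + weights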

  • @DwaynePaisleyMarshall 7 months ago

    I now also get Error occurred when executing VAEEncodeForInpaint:
    expected scalar type Byte but found Float

    • @creatorbrew 7 months ago

      Hi - 1. Download the portable version of ComfyUI, and test it by running the default workflow that loads. 2. Install the Manager (see the sketch below). 3. Update ComfyUI. 4. Download the workflow. 5. Go to the Manager and install the missing nodes. 6. Restart ComfyUI manually (and check to see if there are any node conflicts). 7. Download the models and place them in the models folder. 8. Restart ComfyUI (or click the refresh button) to have those models listed. 9. Start out simple: 1024x1024 images for the person and the clothes.
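
      (For step 2, a minimal sketch of installing the Manager by hand, assuming git is available; it is cloned into the custom_nodes folder:)

      cd ComfyUI/custom_nodes
      git clone https://github.com/ltdrdata/ComfyUI-Manager.git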

    • @DwaynePaisleyMarshall 7 months ago

      @@creatorbrew I'm on a Mac

    • @creatorbrew 7 months ago

      @@DwaynePaisleyMarshall If I remember correctly, that error happens when the wrong model is used (the only other reason would be the wrong node).

  • @SlappyMarsden 10 months ago

    What folder is the file ip-adapter_sdxl_vit-h.safetensors supposed to be in? Everything points to my A1111 install but the Load IPAdapter Model node keeps failing. The file is located in my C:\AI\StableDiffusion\stable-diffusion-webui\extensions\IP-Adapter folder...any ideas? Thanks

    • @creatorbrew 10 months ago

      Within ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/models.

  • @Avalon1951 5 months ago

    I think you need to update the tutorial, because IPAdapter Apply is no longer used; it has been replaced with IPAdapter Advanced, and the settings are different.

    • @creatorbrew 5 months ago

      You can download the original IPAdapter nodes from git. Or, with a little rework, you can rewire the new IPAdapter nodes into the flow (swap them in). How much should I help is always a question; part of learning is working through a minor bump, since three years from now who knows what the exact adapters will be, but the workflow concept will be the same. But I can see that for anyone who wishes a one-click solution, a new tutorial would accommodate them.

  • @ysy69 5 months ago

    Thanks for this. Too bad we cannot replicate the exact jacket, only a close approximation, due to the nature of diffusion models.

    • @creatorbrew 5 months ago +1

      Training a Lora on the jacket would help keep consistency. Or using a photo of the jacket and inpainting to switch people.

    • @ysy69 5 months ago

      @@creatorbrew we have hundreds of items and unfortunately training a Lora on each would not be feasible compared to traditional methods

  • @Adesigner-in9mn 9 months ago

    Hey, I keep getting this error. Any info on a fix?
    Error occurred when executing VAEEncodeForInpaint:
    expected scalar type Byte but found Float
    File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "/workspace/ComfyUI/nodes.py", line 360, in encode
    mask_erosion = torch.clamp(torch.nn.functional.conv2d(mask.round(), kernel_tensor, padding=padding), 0, 1)

    • @creatorbrew 9 months ago

      Hi - have you tried the following: download the new portable version of ComfyUI (that's the self-contained folder with no installs), then install the ComfyUI Manager, update "all" of ComfyUI and the nodes, and then follow the workflow.

  • @chanansiegel834 8 months ago

    I am getting SystemError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)

    • @creatorbrew 8 months ago

      It could be the usual: 1) update ComfyUI, 2) update the nodes, 3) make sure the models are the correct version. Which node does it fail at?

  • @DwaynePaisleyMarshall 7 months ago

    Is anyone else having trouble with Apply IPAdapter? I can't seem to find the node in IPAdapter.

    • @creatorbrew 7 months ago

      Did you download the workflow, go to the Manager, add the missing nodes, and restart Comfy? As long as the provided workflow works, you can reproduce it on your own.

    • @DwaynePaisleyMarshall 7 months ago

      @@creatorbrew Yep, I did all of this; there just isn't an equivalent node for Apply IPAdapter. Unless the IPA has had an update?

    • @ChAzR89 5 months ago

      @@creatorbrew Seems like there is no "Apply IPAdapter" node in IPAdapter plus, which is the package it should come with.

  • @talhadar5038 11 months ago

    I followed your video however I am getting this error:
    Error occurred when executing KSampler:
    Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.
    My models are:
    clip_vision: SD1.5/pytorch_model.bin
    ipadapter: ip-adapter-plus-face_sdxl_vit-h.safetensors
    checkpoint: sd_xl_base_1.0.safetensors
    EDIT:
    It's working now.
    Do you have an older 10XX Nvidia card? If so, the dtype mismatch might need --force-fp16 in the launch options.
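
    (For reference, a minimal sketch of adding that flag on the Windows portable build; the .bat name and launch line assume the standard portable layout:)

    rem in run_nvidia_gpu.bat, append the flag to the launch line:
    .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --force-fp16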

    • @creatorbrew 11 months ago +1

      Hi - if you post your workflow to the same URL I posted mine, I can take a look. A difference I see: I didn't use the plus-face version, I used this model: ip-adapter_sdxl_vit-h.safetensors.
      To troubleshoot: hold down the CONTROL key and drag a selection around the top nodes, then press CONTROL+M (this deactivates those nodes). Click on the Load Checkpoint node and press CONTROL+M to reactivate it. See if the bottom part of the workflow works. If it does, that means it is one of the models specified in the top one (probably ip-adapter_sdxl_vit-h.safetensors, as a guess).

    • @talhadar5038 11 months ago

      @@creatorbrew I mistyped. I actually used ip-adapter_sdxl_vit_h.safetensors.
      Can you tell me which CLIP Vision model you are using? It's definitely a model mismatch, according to my research.

    • @creatorbrew 11 months ago

      It's the SD 1.5 version, model.safetensors; based on my browser search history, I believe I downloaded it from here: huggingface.co/h94/IP-Adapter/blob/main/models/image_encoder/model.safetensors

  • @jasonyu8020 8 months ago

    They are not the same jacket....WHY?

    • @creatorbrew 8 months ago

      Why = AI isn’t Photoshop; it has to dream up what you are asking for. However, with the right prompts and control you can force the randomness into a closer space. The strongest way is to train a Lora on about 10 images, which will be a strong guide to getting a close result. See "Train ComfyUI Lora On Your Own Art!":
      th-cam.com/video/5PtLQSFrU38/w-d-xo.html

  • @djivanoff13 11 months ago

    Please attach the workflow for download so that we don't have to build it manually!

    • @creatorbrew 11 months ago +1

      comfyworkflows.com/workflows/7e9b0e9f-e012-4f2d-ac7b-989bd8589fd1
      This one is simple, and one way to burn a task into our minds is to try it out :)

    • @djivanoff13 11 months ago

      @@creatorbrew Error occurred when executing GroundingDinoSAMSegment (segment anything): How can I fix this?

    • @creatorbrew 11 months ago

      Are you running on CPU or GPU? Which GPU?
      The usual cause is mixing the wrong models. Here's the list of models I'm using...
      Main Model:
      sd_xl_base_1.0.safetensors
      Ipadapter Model
      ip-adapter_sdxl_vit-h.safetensors
      Clipvision
      SD1.5\model.safetensors
      SAM Model Loader (segment anything)
      sam_vit_h (2.56GB)
      Grounding Dino Model loader
      GroundingDINO_SwinT_OGC (694MB)

    • @djivanoff13 11 months ago

      @@creatorbrew RTX 3060 12GB

    • @creatorbrew 11 months ago

      @@djivanoff13 That's enough VRAM, so it's probably a matter of getting the models listed in the description/this reply from Hugging Face.

  • @pedroquintanilla 9 months ago

    Error occurred when executing IPAdapterModelLoader:
    Error while deserializing header: HeaderTooLarge
    File "E:\IA\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "E:\IA\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "E:\IA\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "E:\IA\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 593, in load_ipadapter_model
    model = comfy.utils.load_torch_file(ckpt_path, safe_load=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "E:\IA\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 13, in load_torch_file
    sd = safetensors.torch.load_file(ckpt, device=device.type)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "E:\IA\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\safetensors\torch.py", line 308, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    • @creatorbrew 9 months ago

      1. Do you have ComfyUI Manager installed? If so, click the Manager button; when it is done updating, click Update ALL (check the terminal window). Exit ComfyUI and restart.
      2. If you don't have ComfyUI Manager, download ComfyUI Portable, then install the ComfyUI Manager. Then click the Manager button and install the missing nodes.

  • @yuanzhouli6983 9 months ago

    Hi, can you offer some suggestions on swapping the logo on a T-shirt? It should be easier than swapping clothes. I mean, if a model is standing in side view, can we somehow swap the big logo on the T-shirt that the model wears?

    • @creatorbrew 9 months ago +1

      Does the Segment Anything node pick up the logo, e.g. when typing in the word "logo" or "square" or whatever it looks like? In the end it is inpainting that will do it; this workflow automates the inpainting by creating the mask of the jacket automatically. Do you have a picture of the model/logo anywhere to share?

  • @renwar_G 10 months ago

    you're a G bruv

  • @utkarshaggarwal6057 4 months ago

    doesn't work bro

  • @SayedSaadmanArefin 11 months ago

    I'm getting this error:
    "Error occurred when executing KSampler:
    Query/Key/Value should all have the same dtype
    query.dtype: torch.float16
    key.dtype : torch.float32
    value.dtype: torch.float32"
    please help

    • @creatorbrew 11 months ago

      What is the size of your graphics card's memory? What is the length of your video?
      Check to see if the model names match; there are low and high versions of some models.
      An alternative approach is to launch ComfyUI with the command-line switch that runs everything on the CPU.
      Also, check the image sizes. Start with everything at 512x512, then bump up to 1024x1024 and go up from there.
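
      (A minimal sketch of that CPU launch, assuming a standard install run from the ComfyUI folder; the Windows portable build also ships a run_cpu.bat that does the same:)

      python main.py --cpu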

    • @creatorbrew 11 months ago

      ... A way to troubleshoot: at the start of the video it is two separate workflows. Try each workflow alone, and the one that doesn't work has the wrong model (maybe); then you can wire it like in the video. Select the nodes that you don't want to run and press CONTROL+M on the keyboard to deactivate them, then reactivate them and deactivate the other workflow. Hopefully that will give an answer.
