ComfyUI: Imposing Consistent Light (IC-Light Workflow Tutorial)

  • Published Sep 11, 2024

Comments • 153

  • @controlaltai
    @controlaltai  months ago +13

    Update (Sep 5, 2024): The Preview Bridge node in the latest update has a new option called "Block". Ensure that it is set to "never" and not "if_empty_mask". This allows the Preview Bridge node to pass the image on to the H/L Frequency node as shown in the video and transfer the details. If it is set to "if_empty_mask" you will not get any preview; it will show as a black output. I asked the dev to change the node so that the default behavior is always "never", and he has done so. Update the node again to the latest version.
    Comfy Update (Aug 27, 2024): If you are getting a KSampler error, you need to update ComfyUI to "ComfyUI: 2622[38c22e](2024-08-27)" or higher, along with IC-Light and Layered Diffusion. Everything works as shown in the video. No change in the workflow.
    IC-Light is based on SD 1.5, but all generations are at SDXL resolution, then 4x upscaled. I hope you find the tutorial helpful. Please note: at 5:17 the Layered Diffusion custom node is needed, even though none of its nodes are used; otherwise you will get an error as follows:
    RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 64, 64] to have 4 channels, but got 8 channels instead
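
    For anyone wondering why the error mentions 8 channels: roughly speaking, IC-Light concatenates an extra conditioning latent onto the usual 4-channel SD 1.5 latent, and the Layered Diffusion patch is what lets the UNet's input convolution accept the wider input. A standalone PyTorch sketch of that mismatch (illustrative only, not the actual node code):

    import torch
    import torch.nn as nn

    # Stock SD 1.5 input conv: weight of size [320, 4, 3, 3], i.e. it expects 4 latent channels.
    conv_in = nn.Conv2d(in_channels=4, out_channels=320, kernel_size=3, padding=1)

    noise_latent = torch.randn(2, 4, 64, 64)        # normal SD 1.5 latent batch
    condition_latent = torch.randn(2, 4, 64, 64)    # extra latent concatenated for conditioning
    ic_light_input = torch.cat([noise_latent, condition_latent], dim=1)  # now 8 channels

    try:
        conv_in(ic_light_input)
    except RuntimeError as err:
        # "Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 64, 64]
        #  to have 4 channels, but got 8 channels instead"
        print(err)

    # What the patch effectively provides: an input conv widened to 8 channels.
    patched_conv_in = nn.Conv2d(in_channels=8, out_channels=320, kernel_size=3, padding=1)
    print(patched_conv_in(ic_light_input).shape)  # torch.Size([2, 320, 64, 64])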

    • @williamsaton8812
      @williamsaton8812 months ago

      thx sooooo much

    • @ivanivan9301
      @ivanivan9301 months ago

      Thank you so much, I followed the video for 2 days and finally managed to make it, awesome tutorial! 👍

    • @ismgroov4094
      @ismgroov4094 months ago +1

      sir help me!!!!! i bought your workflow :( ... plz

    • @controlaltai
      @controlaltai  months ago

      Already replied to you.

    • @eme4117
      @eme4117 19 days ago

      @@ismgroov4094 Where did you buy it?

  • @CerebricTech
    @CerebricTech 5 days ago

    It's amazing. Even though this is still rocket science for me, it's the most detailed explanation video for products I've seen so far.
    Thanks.

  • @esuvari
    @esuvari months ago +1

    Oh MY GOD! This is incredible! The first two random images I tried off the top turned out amazing, first try. You're the most underrated SD channel on youtube, thank you for this amazing work. Can't wait to get my hands dirty with this. Wish you the best.

    • @yuvish00
      @yuvish00 months ago

      Hi, did you get the error: Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 8, 104, 152] to have 4 channels, but got 8 channels instead ?

    • @esuvari
      @esuvari months ago

      Nope, it worked on mine

  • @kobe5113
    @kobe5113 months ago +2

    honestly this is too good, thank you so much

    • @kobe5113
      @kobe5113 months ago

      really really well done

  • @GoodArt
    @GoodArt months ago +1

    that was just the coolest video I've ever seen. comfy rules.

  • @rcj1337
    @rcj1337 21 days ago

    Impressive stuff, amazing work!

  • @oohlala5394
    @oohlala5394 23 days ago +2

    Thank you for this tutorial. However, I don't understand why we need to segment the image again at 16:51. We already have the mask and the image with the new composition (product size and placement) as output of the "ImageBlendAdvance V2" node. Why are we repeating the segmentation process? The resulting images and masks of the new segmentation seem to me to be the same as the outputs from "ImageBlendAdvance V2". Sorry to ask about that. I'm a sub, and thoroughly enjoy your tutorials.

    • @controlaltai
      @controlaltai  23 days ago +1

      Hi, there are multiple reasons. First, in this node we blend the object with a grey background; it only gives us the mask, there is no image, and we need the transparent PNG image again. Second, the mask from this node is not that good for some objects, as it fails to mask them properly after resizing. In testing, 1 out of 10 times it caused an issue. Since I had to use the transparent PNG anyway, I thought we should give options for masking and get the mask again.

    • @oohlala5394
      @oohlala5394 22 days ago

      @@controlaltai thanks

  • @jd38
    @jd38 months ago

    Yes, finally! thank you for this tutorial

  • @PrithivThanga
    @PrithivThanga months ago

    Must be a worthy one. will test and post here..

  • @dankazama09
    @dankazama09 10 days ago

    Can we have this kind of workflow with Flux? This video deserves more views. Good work sir/ma'am!

    • @controlaltai
      @controlaltai  10 days ago +1

      Hi, no, unfortunately we can't. The IC-Light model was trained on SD 1.5. It's not supported on anything else but SD 1.5-based or fine-tuned SD 1.5 checkpoints.

  • @jjagdishwar
    @jjagdishwar months ago

    Love this. Thank you so much

  • @dankazama09
    @dankazama09 11 days ago

    Magnific 👌

  • @SteMax-d6z
    @SteMax-d6z 11 days ago

    At the upscale part, with ImpactInt = 2 the product image gets bigger, bigger than the background image. I don't know why. Sir, help.

    • @controlaltai
      @controlaltai  11 days ago

      Are you building the workflow from scratch? Double check the video. The background has to be upscaled.

  • @ronshalev1842
    @ronshalev1842 16 days ago

    Is it possible to control the image background blur result?

    • @controlaltai
      @controlaltai  16 days ago

      Yes, with prompting. In the video tutorial I use "dof", which is depth of field. You can add "clear, sharp" in the positive and "dof" in the negative. Once you have a clear background you can add depth of field using a blur node from the LayerStyle nodes, roughly as in the sketch below.
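
      As an illustration only (a rough PIL sketch of the same idea, not the LayerStyle node itself; file names are placeholders): blur the whole frame, then paste the sharp product back through its mask so only the background picks up the depth-of-field look.

      from PIL import Image, ImageFilter

      composite = Image.open("relit_product_shot.png").convert("RGB")  # relit result
      product_mask = Image.open("product_mask.png").convert("L")       # white = product, black = background (same size as the image)

      blurred = composite.filter(ImageFilter.GaussianBlur(radius=8))   # larger radius = stronger blur
      result = Image.composite(composite, blurred, product_mask)       # keep the product sharp, blur the rest
      result.save("relit_product_shot_dof.png")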

  • @user-ep1fz6oq5x
    @user-ep1fz6oq5x 20 days ago

    May I ask what caused this error?
    Error occurred when executing KSampler:
    Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 128, 128] to have 4 channels, but got 8 channels instead

    • @controlaltai
      @controlaltai  20 days ago

      Make sure you have the Layered Diffusion custom node installed.

    • @user-ep1fz6oq5x
      @user-ep1fz6oq5x 20 days ago

      @@controlaltai It is installed, but when it runs the KSampler this problem occurs, which is very frustrating

    • @controlaltai
      @controlaltai  20 days ago

      @@user-ep1fz6oq5x Make sure you downloaded the correct IC-Light apply models; these are the ldm models, not the standard ones.

    • @controlaltai
      @controlaltai  15 days ago

      Hi, the issue seems to be fixed: "Comfy Update (Aug 27, 2024): If you are getting a KSampler error, you need to update ComfyUI to "ComfyUI: 2622[38c22e](2024-08-27)" or higher, along with IC-Light and Layered Diffusion. Everything works as shown in the video. No change in the workflow."

  • @user-ep1fz6oq5x
    @user-ep1fz6oq5x 20 days ago

    Hello, I cannot find VAE Encode ArgMax in my ComfyUI. Which plugin do I need to download?

    • @controlaltai
      @controlaltai  20 days ago

      Hello, check the video for the custom node requirements. It's part of the main IC-Light custom node, as shown.

    • @user-ep1fz6oq5x
      @user-ep1fz6oq5x 20 days ago +1

      ​@@controlaltai Thank you, I have found a solution. The version I downloaded had an issue, so I couldn't find it

  • @agusdor1044
    @agusdor1044 15 days ago

    hi! I've successfully loaded the workflow in a cloud instance. Everything is up and running, but I'm encountering the same error that others have reported:
    Error occurred when executing KSampler: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 104, 152] to have 4 channels, but got 8 channels instead.
    I'm running the workflow with a 24GB GPU and 64GB RAM.
    I've selected and downloaded the correct ldm version of IC Light.
    All nodes are installed and updated (including LayerDiffusion).
    I've tried all the weight_dtype settings in IC Light, but I keep getting the same error.
    Do you know what might be causing this?

    • @controlaltai
      @controlaltai  15 days ago

      Hi, there is an issue with the latest ComfyUI; it broke the IC-Light node. The developer is working on a fix. You have to use the legacy front end or wait until the developer fixes it.

    • @mohammadjavadnazari7941
      @mohammadjavadnazari7941 15 days ago

      @@controlaltai fixed now!

    • @controlaltai
      @controlaltai  15 days ago +1

      Yeah, checked and updated the pinned comment. For anyone else seeing this: "Comfy Update (Aug 27, 2024): If you are getting a KSampler error, you need to update ComfyUI to "ComfyUI: 2622[38c22e](2024-08-27)" or higher, along with IC-Light and Layered Diffusion. Everything works as shown in the video. No change in the workflow."

    • @agusdor1044
      @agusdor1044 14 days ago

      @@controlaltai ALL WORKING NOW YASSSSSSS!

  • @Lifejoy88
    @Lifejoy88 a day ago

    Hi, where can I download your workflow (json file)?

    • @controlaltai
      @controlaltai  a day ago

      Hi, Workflow is only made available for paid channel members. You don't need to become a paid member. Everything is shown in the video to recreate the workflow from scratch.

  • @LinhLe-ib9gi
    @LinhLe-ib9gi 14 days ago

    I'm getting an error on the Switch node. It says: node 29 says it needs input input0, but there is no input to that node at all. Help me

    • @controlaltai
      @controlaltai  14 days ago

      I cannot tell which node 29 you are referring to. Visually check which node the error is coming from, along with the cmd error. That will help me understand what the issue is.

    • @LinhLe-ib9gi
      @LinhLe-ib9gi 14 days ago

      @@controlaltai The error is on the Switch node (Impact Pack). Error occurred when executing ImpactSwitch:
      Node 5 says it needs input input0, but there is no input to that node at all
      File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 294, in execute
      execution_list.make_input_strong_link(unique_id, i)
      File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\comfy_execution\graph.py", line 94, in make_input_strong_link
      raise NodeInputError(f"Node {to_node_id} says it needs input {to_input}, but there is no input to that node at all")

    • @controlaltai
      @controlaltai  13 days ago

      Can you email me a screenshot of the workflow and zoom in on the node which has the error, I need to look at what is going on. mail @ controlaltai . com (without spaces).

  • @ivanivan9301
    @ivanivan9301 29 days ago

    Hello, thank you for the course. I've built the whole workflow, but I just can't make the product look transparent when placing transparent glass products such as perfume bottles and wine glasses; the background doesn't show through the glass at all. I've watched the tutorial again and again and didn't find where to set the transparency of the product. May I ask where to set it? Looking forward to your answer, thanks!

    • @controlaltai
      @controlaltai  29 days ago

      There is no transparency setting. If you want clear objects with see-through glass, you need to switch from here to Photoshop and do it manually. Hence we put a bokeh background in the prompt.

    • @ivanivan9301
      @ivanivan9301 28 days ago

      @@controlaltai Thanks for the reply, I'll give it a try

  • @SaoirseChen-v8b
    @SaoirseChen-v8b months ago

    Thank you for the incredible workflow. I got an issue when generating with the KSampler before the details and color adjust parts: the KSampler image became totally black at 60%, and the ColorMatch threw an error (stack expects a non-empty TensorList). Do you have any clues?

    • @controlaltai
      @controlaltai  months ago +1

      Hi, not until I see what you have done with the workflow; it's quite complex to identify the issue. Email me and I can have a look and see if I can troubleshoot it. mail @ controlaltai . com (without spaces)

  • @agusdor1044
    @agusdor1044 months ago

    Hi, I'm trying to use this WF but I only have a 6GB GPU. I've tried on various online platforms and even locally with the ComfyCloud node (which allows you to work locally but with a cloud GPU for generations), but I haven't been able to use the WF successfully with any of these alternatives. Could you tell me if you know whether this WF could be used with a service like RunPod or something similar? Ty!!!

    • @controlaltai
      @controlaltai  months ago

      The workflow can be used in the cloud or locally. It does not matter where you run it; 6 GB of VRAM won't do. There is a lot happening here and a lot of models getting loaded. 24 GB is recommended, but you can try 12 GB as a bare minimum. I haven't tested that, as I don't have 12 GB hardware.

  • @FrauPolleHey
    @FrauPolleHey 13 hours ago

    Hi!
    I tried everything to install the Layer Style nodes, without success, can anyone help here please?
    (IMPORT FAILED) ComfyUI Layer Style
    (IMPORT FAILED) ComfyUI-BiRefNet-ZHO
    I tried to install manually and with manager, same :(

    • @controlaltai
      @controlaltai  12 hours ago +1

      @FrauPolleHey I cannot help without looking at the cause of the import failure; I need to look at your system for that. Typically it will tell you which import or dependency install failed. You then have to do it manually. Send me an email with the entire cmd boot-up text after a clean boot. I will try to help via email. mail @ controlaltai . com (without spaces)

    • @FrauPolleHey
      @FrauPolleHey 10 hours ago

      @@controlaltai sent, thank you

  • @SanchezGodsent
    @SanchezGodsent months ago

    I have this problem: Error occurred when executing LayerUtility: HLFrequencyDetailRestore:
    images do not match

    • @controlaltai
      @controlaltai  months ago +1

      Yeah, you get that error if you don't paint a manual mask. It passes on an empty image, which is a mismatch. I mentioned this in the video. So connect your image to the Preview Bridge, get the error, then manually mask, save in the Preview Bridge, and queue the prompt again. It should go through.

  • @design38
    @design38 months ago

    Hi, great tutorial, by the way! I have a slight problem. The resulting image of a black product is different from the original. For example, if the product is black running shoes and the background is green scenery, the result will make the shoes appear green. I also tried a black bag, and it turned white. The details are still there, but this result is after the KSampler. Probably has something to do with the IPAdapter or IC-Light?

    • @controlaltai
      @controlaltai  months ago

      Hi, send me the workflow and the images to mail @ controlaltai . com (without spaces); without looking and testing myself, I can't troubleshoot.

  • @Andrew-hi4lk
    @Andrew-hi4lk months ago

    This is amazing!
    Any ideas about this error?
    Error occurred when executing KSampler:
    Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 8, 104, 152] to have 4 channels, but got 8 channels instead

    • @Andrew-hi4lk
      @Andrew-hi4lk months ago

      Never mind! I see that the custom node ComfyUI-layerdiffuse (layerdiffusion) is required and this resolves the error :)

  • @JD-ls5vt
    @JD-ls5vt 17 days ago

    Which version of the UI was used for this? I keep getting the "but got 8 channels instead" error. Even with the required Layered Diffusion node and the correct "fc-ldm" model, the issue persists. Bypassing the IC-Light Apply node lets the flow complete execution.

    • @controlaltai
      @controlaltai  17 days ago

      The old one; the Comfy update came out on Aug 15, and this was posted on July 25. I will check it in a few hours and get back to you. If it's broken in the latest version, I will update and make a post. Try putting the model in the layered diffusion folder instead of unet and see if that works. The 8-channel error is highly unlikely to be a Comfy update issue. I will recheck though.

    • @JD-ls5vt
      @JD-ls5vt 17 days ago

      @@controlaltai Thanks for the reply. The ldm model only seems to be recognized in the unet or diffusion_models folder. I'm using ComfyUI: 2611[8ae23d](2024-08-23)
      Manager: V2.50.2

    • @controlaltai
      @controlaltai  17 days ago +1

      The error is with the IC-Light node. I will post an updated workflow, as the Impact Pack Switch also malfunctions after the new update. The IC-Light dev is working on a fix: github.com/huchenlei/ComfyUI-IC-Light-Native/issues/44. Will let you know once it's pushed.

    • @agusdor1044
      @agusdor1044 15 days ago

      @@controlaltai im interested in this fix too! tyyy

    • @controlaltai
      @controlaltai  15 days ago

      Hi, the issue has been fixed: "Comfy Update (Aug 27, 2024): If you are getting a KSampler error, you need to update ComfyUI to "ComfyUI: 2622[38c22e](2024-08-27)" or higher, along with IC-Light and Layered Diffusion. Everything works as shown in the video. No change in the workflow."

  • @nasrulacown6066
    @nasrulacown6066 27 days ago

    Hi, I have followed your instructions. It works on the first run, but when I change the background image and the resolution in the SDXLResolution node, I get an "images do not match" error. I don't know what the problem is, but these were the only things that I changed. This is the error message: Error occurred when executing LayerUtility: HLFrequencyDetailRestore:
    images do not match
    File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_LayerStyle\py\hl_frequency_detail_restore.py", line 73, in hl_frequency_detail_restore
    ret_image.paste(background_image, _mask)
    File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 1847, in paste
    self.im.paste(im, box, mask.im)

    • @controlaltai
      @controlaltai  27 days ago +1

      "Images do not match" is because you have not created a mask in the Preview Bridge. Whenever you pass the image to HL Frequency, you need to have it masked. If you added the switch like in the video, change the switch to 2. If using 1, mask manually and then queue the prompt.

    • @nasrulacown6066
      @nasrulacown6066 26 days ago

      @@controlaltai Wow, thanks man, I switched the mask for detail to 2 and it works. I see, so that is the error you've been talking about in the video.

  • @KashifRashid
    @KashifRashid months ago

    I have installed everything but I can't find the Switch (Any) node in my search. What am I doing wrong?

    • @KashifRashid
      @KashifRashid months ago

      OK, figured that one out, lol. Had to update ComfyUI from outside, not the Manager.

  • @smatbootes
    @smatbootes 24 days ago

    Hello all! :) I have an issue when I execute:
    "Error occurred when executing IPAdapterModelLoader:
    invalid IPAdapter model C:\ComfyUI_windows_portable\ComfyUI\models\ipadapter\iclight_sd15_fc_unet_ldm.safetensors
    File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 316, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 191, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 168, in _map_node_over_list
    process_inputs(input_dict, i)
    File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 157, in process_inputs
    results.append(getattr(obj, func)(**inputs))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 657, in load_ipadapter_model
    return (ipadapter_model_loader(ipadapter_file),)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\utils.py", line 147, in ipadapter_model_loader
    raise Exception("invalid IPAdapter model {}".format(file))"
    could someone help me?

    • @controlaltai
      @controlaltai  23 days ago +1

      "invalid IPAdapter model C:\ComfyUI_windows_portable\ComfyUI\models\ipadapter\iclight_sd15_fc_unet_ldm.safetensors" Hi, this is not an IP Adapter model. It's the IC-Light model; you have put it in the ipadapter folder. Recheck the video for where the IC-Light models go.

  • @CharlesPrithviRaj
    @CharlesPrithviRaj months ago

    Couldn't find the BiRefNetUltra node; which custom node is it from?

    • @controlaltai
      @controlaltai  months ago +1

      That's from LayerStyle custom node.

  • @sagarsinghvi2766
    @sagarsinghvi2766 15 days ago

    Can you share the workflow for us to download?

    • @agusdor1044
      @agusdor1044 15 days ago

      You have to be a channel member, or build it yourself by following the video.

  • @KeenHendrikse
    @KeenHendrikse months ago

    Hey, can anyone advise where I can find the ImageBlendAdvance V2 node?

    • @controlaltai
      @controlaltai  months ago +1

      LayerStyle Custom Node. Check video custom node requirements.

  • @DanielPartzsch
    @DanielPartzsch months ago

    Very nice. What exactly is the difference between the old IC-Light models and the ones you've used here? Do they yield better results? Thanks

    • @controlaltai
      @controlaltai  months ago

      Actually this one is older; it came out first, I think. I started with kijai's, all respect to him for his work, but I was not getting the results I wanted. I switched to an entirely different approach, since the way this one works is different, was impressed with the results, and just went on building the workflow from there. I don't have a side-by-side comparison, as the nodes and method applied are both different, so I cannot be sure whether either is better; I never went back to the kijai one and tried to get it working the way I wanted.

  • @SteMax-d6z
    @SteMax-d6z 12 days ago

    Thx so much.
    I updated ComfyUI, but I am still getting a KSampler error: TypeError: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'.
    Sir, help me!!!!!

    • @controlaltai
      @controlaltai  12 days ago

      Not sure what that error is. Check the checkpoint. You are supposed to use an SD 1.5 checkpoint only.

    • @SteMax-d6z
      @SteMax-d6z 12 days ago

      @@controlaltai I bypassed the Load ResAdapter and it works; I don't know why

    • @SteMax-d6z
      @SteMax-d6z 12 days ago

      But if "Load And Apply IC-Light" is not bypassed and I bypass the Load ResAdapter, it doesn't work

    • @SteMax-d6z
      @SteMax-d6z 12 days ago

      @@controlaltai I use the same checkpoint, Juggernaut

    • @controlaltai
      @controlaltai  12 days ago

      Juggernaut has SDXL and SD 1.5 checkpoints; reconfirm you are using the SD 1.5 checkpoint and not SDXL.

  • @ImagindeDash
    @ImagindeDash months ago

    Thank you for the tutorial, but I'm getting this error: Error occurred when executing LayerUtility: ImageBlendAdvance V2:
    'NoneType' object is not iterable

    • @controlaltai
      @controlaltai  months ago

      Not sure what this error is; it could be that some connections are wrong. Ensure the background and layer inputs are correct.

    • @ronshalev1842
      @ronshalev1842 months ago +1

      Hi, did you manage to fix that? I have the same error

    • @controlaltai
      @controlaltai  months ago +1

      You can email me the workflow. I can have a look at it for you. mail @ controlaltai . com (without spaces)

    • @ImagindeDash
      @ImagindeDash months ago +1

      @@ronshalev1842 Hi, I fixed the error by changing the value in the ImpactInt node from 0 to 1.

    • @ronshalev1842
      @ronshalev1842 months ago

      @@ImagindeDash Thank you, that did the trick!

  • @FlowFidelity
    @FlowFidelity months ago

    At 30:00 you mention copying the negative prompt from CivitAI; could you expand on this? Thanks!

    • @controlaltai
      @controlaltai  months ago

      Well, all I did was open sample images from Juggernaut Aftermath and check the negatives used, then copy and paste that. That's what I meant.

    • @FlowFidelity
      @FlowFidelity months ago

      @@controlaltai ooooh that makes sense. Thanks

  • @FlowFidelity
    @FlowFidelity months ago

    Thank you. How does one install the VITMatte detail model? PyMatting is working for me in Ultra, but I seem to be missing VITMatte.

    • @controlaltai
      @controlaltai  months ago

      I have explained it in the video; check from 6:28.

    • @FlowFidelity
      @FlowFidelity months ago

      @@controlaltai Thank you! That's what I was looking for!

    • @FlowFidelity
      @FlowFidelity months ago

      @@controlaltai BTW, are you on LinkedIn? I did a post about this tutorial and would love to tag you. Thanks again for the great tutorial!

    • @controlaltai
      @controlaltai  months ago

      Hi, no, I went off LinkedIn years back. That's fine, feel free to share.

    • @FlowFidelity
      @FlowFidelity months ago

      @@controlaltai not gonna lie I originally skipped that section, and really wish I had not. Going back through it now :)

  • @bizonlarheryerde
    @bizonlarheryerde months ago

    Is your channel’s membership option turned on? I can’t see it anywhere.

    • @controlaltai
      @controlaltai  months ago +1

      Yes, here is the link:
      th-cam.com/channels/gDNws07qS4twPydBatuugw.htmljoin

  • @ismgroov4094
    @ismgroov4094 months ago

    I have an error, sir: "Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 8, 104, 152] to have 4 channels, but got 8 channels instead"

    • @controlaltai
      @controlaltai  months ago

      Is the Layered Diffusion custom node installed?

    • @ismgroov4094
      @ismgroov4094 months ago

      @@controlaltai I did, sir

    • @ismgroov4094
      @ismgroov4094 months ago

      @@controlaltai There is something wrong with the "IC Light Apply" node.. plz help me.

    • @controlaltai
      @controlaltai  months ago +1

      Choose the fc model and not fbc. Download the correct models from the link; these are the ldm versions of the models and not what's given on kijai's GitHub.

    • @ismgroov4094
      @ismgroov4094 months ago

      @@controlaltai Thx sir, I solved it! ❤️🙏🏻🥹

  • @josephmorgans6812
    @josephmorgans6812 months ago

    Great work, thank you!
    Is it possible to edit/change the background & product (STRING) prompts?

    • @controlaltai
      @controlaltai  months ago

      Yeah, you can use custom conditioning. A switch is given in the workflow. Copy and paste from the Ollama generation to the custom text condition, then set the switch to 2.

    • @josephmorgans6812
      @josephmorgans6812 months ago

      @@controlaltai Thank you for your quick reply. Sadly it doesn't seem to work for me; the final image doesn't change.

    • @controlaltai
      @controlaltai  months ago

      Send me your current workflow with the prompt, the reference background, and the product image to mail @ controlaltai . com (without spaces). The workflow is complicated; obviously something was missed. I will have a look and get back to you via email.

  • @packshotstudio2118
    @packshotstudio2118 11 days ago

    How do you paste with connections?

    • @controlaltai
      @controlaltai  11 days ago +1

      Ctrl + Shift + V

    • @packshotstudio2118
      @packshotstudio2118 9 days ago

      @@controlaltai Hey, I started supporting your channel and downloaded the workflow, but at the end (Image Comparer) it's not generating an image; I'm getting a black screen. Also, I have two red boxes on the IPAdapter and Load CLIP Vision nodes. Do you know why this might be happening?

    • @controlaltai
      @controlaltai  8 days ago

      Hi, thank you. It's probably the wrong IP Adapter selected. Send me screenshots of the following via email: the checkpoint group, the IC-Light group, and the IP Adapter group, along with a cmd screenshot of the error when the box is red. I need to see what is happening to troubleshoot it. mail @ controlaltai . com (without spaces).

    • @packshotstudio2118
      @packshotstudio2118 8 days ago

      @@controlaltai Thank you, I sent the message - thank you for your help

    • @controlaltai
      @controlaltai  8 days ago

      One more thing: the Preview Bridge node was updated. Ensure that the "Block" option in it is set to "never".

  • @ameerziadi4253
    @ameerziadi4253 months ago

    Can you share the workflow?

    • @controlaltai
      @controlaltai  months ago +1

      Ready made json files are for channel paid members only. You can just build the workflow following the tutorial. Nothing is hidden.

    • @SanchezGodsent
      @SanchezGodsent months ago

      @@controlaltai Where is this private channel?

    • @controlaltai
      @controlaltai  months ago

      YouTube Join Membership

  • @FlowFidelity
    @FlowFidelity months ago

    Well, this is where I stop tonight: "Error occurred when executing UNETLoader:
    ERROR: Could not detect model type of: C:\ComfyUI_windows_portable\ComfyUI\models\unet\IC-Light\iclight_sd15_fc.safetensors". Got to retrace the steps again, I guess.

    • @controlaltai
      @controlaltai  months ago

      Okay, so you have downloaded the wrong models. Check the models in the requirements or check the description. You have to download the layered diffusion (ldm) version of the model. Here is the link:
      huggingface.co/huchenlei/IC-Light-ldm/tree/main
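
      If you prefer to script the download, something along these lines should work (a sketch using huggingface_hub; the filename is the fc ldm variant mentioned elsewhere in this thread, so check the repo listing for the exact file you need, and adjust the target path to your install):

      from huggingface_hub import hf_hub_download

      hf_hub_download(
          repo_id="huchenlei/IC-Light-ldm",
          filename="iclight_sd15_fc_unet_ldm.safetensors",  # fc variant referenced in this thread
          local_dir="ComfyUI/models/unet",                  # unet (or diffusion_models) folder of your install
      )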

    • @FlowFidelity
      @FlowFidelity months ago

      @@controlaltai ahhh, I was thinking that could be it. Thank you so much for your patience. Now I can sleep :)

  • @blackbear8398
    @blackbear8398 23 days ago

    Hi, is there a way I can make the background less cartoonish? I have already tried many checkpoints, but they give the same result. How do I make a realistic background? I already use a realistic image for the background image, though.

    • @controlaltai
      @controlaltai  23 days ago +1

      Hi, you can see the background images in the video; they are not cartoonish. So it's the prompting or the checkpoint. I cannot tell unless I look at the workflow.

    • @blackbear8398
      @blackbear8398 23 days ago

      @@controlaltai After some experimenting, I added this to the prompt: {describe the image in extreme detail Include "atmosphere, mood & tone and lighting". Write the description as if you are a product photographer. include the word "hyper realistic" and "shot on dslr" and "shot using 12mm lens" and "aperture f 1.2" and "lifelike texture" and "macro shot" and "faded color grading" and "slow shutter" and "long exposure" in the description} and it worked. Thanks, bro, for this awesome workflow.

    • @controlaltai
      @controlaltai  23 days ago +1

      Great 👍 We need a better LLM vision model. Llama 3.1 is far better, but there's no vision version atm. Your prompt instruction is very interesting; I will try it out, thanks.