I don't see a link to the workflow?
Hi, I changed the diffusion model to Flux Dev and I get the following error: mat1 and mat2 shapes cannot be multiplied (1x1280 and 768x3072). Any thoughts on this?
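For context, this error means two tensors are being multiplied whose inner dimensions don't match; it often shows up when a component meant for a different model family (768- or 1280-dimensional conditioning, as in SD/SDXL) ends up feeding a Flux layer (3072-wide). A minimal sketch of the failing operation, assuming PyTorch:

import torch

# The error reproduces whenever the inner dimensions differ:
# (1 x 1280) @ (768 x 3072) fails because 1280 != 768.
a = torch.randn(1, 1280)    # e.g. a pooled conditioning vector from the wrong encoder
b = torch.randn(768, 3072)  # e.g. a projection weight expecting a 768-dim input
try:
    a @ b
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (1x1280 and 768x3072)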
The workflow is behind the paywall.
But it works, which is more than I can say for a lot of workflows.
I will buy the workflow only if you can generate PBR normal maps
Is depth ControlNet possible for img2img?
I will provide a reference image and specify the position that ControlNet should adopt for another photo. The output should modify the reference image accordingly.
This is what IPAdapter is used for.
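To the depth question above: ControlNet can indeed be combined with img2img, where the reference image is the img2img source and a depth map extracted from the other photo drives the pose. A minimal sketch, assuming the diffusers library and an SD 1.5 depth ControlNet for brevity (the Flux version works on the same idea inside ComfyUI); the file names are placeholders:

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

# Depth ControlNet plus an img2img pipeline: the reference image is reworked
# while the depth map constrains the pose/composition.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # any SD 1.5 base checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

reference = load_image("reference.png")    # the image to modify
depth_map = load_image("pose_depth.png")   # depth map from the other photo

result = pipe(
    prompt="the subject in the new pose",
    image=reference,          # img2img source
    control_image=depth_map,  # depth conditioning
    strength=0.7,             # how far the output may drift from the reference
    num_inference_steps=30,
).images[0]
result.save("output.png")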
Hi there, great workflow, thank you. What is the "no negative prompt" node and how do you find it? I can't find it anywhere. Many thanks.
Hi, did you find a solution? Thank you in advance.
Hey, please help: I'm using fp16 but I still get low-quality images, sometimes deformed.
How can you use a normal "Apply ControlNet" node? I can't; it doesn't work with Flux.
Yes it does, just make sure to update ComfyUI.
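If updating through the Manager isn't an option, pulling the repository and reinstalling its requirements amounts to the same thing. A minimal sketch (the install path is a placeholder):

import subprocess

COMFYUI_DIR = "/path/to/ComfyUI"  # placeholder: your local ComfyUI checkout

# Pull the latest ComfyUI code, then refresh its Python dependencies.
subprocess.run(["git", "-C", COMFYUI_DIR, "pull"], check=True)
subprocess.run(["python", "-m", "pip", "install", "-r",
                f"{COMFYUI_DIR}/requirements.txt"], check=True)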
Can I use ComfyUI on a Mac?
Yes, but it's limited
@@Aiconomist I am currently researching this issue, have already read a lot of information on the Internet, and will have to do so again. If you can suggest something for Mac, I would appreciate it. I'm interested in a specific workflow: I have a photo of clothes, I generate a person with the necessary parameters and background using AI, and I need to dress that person in the clothes from the photo so that they match the original photo as closely as possible.
What if I want to use the ControlNet to build a photo of a man, but using a woman's reference photo? Would that work, or is it better to wait for a working OpenPose?
How do you access the workflow for free?
Workflows on the website are available for download to members only.
Nice to have a reliable ControlNet working in Flux. I noticed you're using "ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors" instead of clip_l in the CLIPLoader. Do you think it helps? Nice workflow, thanks.
Thanks! I discussed this newly improved text encoder in my video, "Unlock FLUX Full Potential on Any Graphics Card." Check it out!
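For anyone curious what that swap amounts to: the file is a drop-in replacement for the standard clip_l text encoder, so in ComfyUI it's just a matter of placing it in models/clip and selecting it in the loader node. A rough sketch of the equivalent outside ComfyUI, assuming the diffusers/transformers libraries and that the improved encoder is available as a Hugging Face-format model directory (the path is a placeholder):

import torch
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import FluxPipeline

# Load the alternative CLIP-L text encoder (placeholder path) and hand it to
# the Flux pipeline in place of the stock clip_l component.
text_encoder = CLIPTextModel.from_pretrained("path/to/improved-vit-l-14")
tokenizer = CLIPTokenizer.from_pretrained("path/to/improved-vit-l-14")

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    text_encoder=text_encoder,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
)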
@@Aiconomist You've got a typo on your website. It says "5 USD / Mounth". Do I get a free membership for spotting it?
How do I use my own image and add a different pose or face? I can't do it.
The same face? Not possible with Flux yet.
We already had ControlNet Union.
Thank you very much for sharing, nice video!
Edit: You mention the workflow is provided on your website for free. I promise you it is not. Or, you have hidden it really well.
You're right, the workflow was initially offered for free as part of a special promotion, but that offer has since ended. Please note that membership on the site helps support the channel and allows me to share content here on TH-cam. Thank you for your support! :)
Has anyone bought the $42 course? I want to know if it's really worth buying and if everything is taught from the basics to advanced.
It's worth it; he explains everything step by step, starting from installation. For me, I think it's worth the $42.
@@cgpixol Ok. Thank you! 🙏
please help
Error occurred when executing ControlNetLoader:
MMDiT.__init__() got an unexpected keyword argument 'image_model'
File "/workspace/ComfyUI/execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/workspace/ComfyUI/execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/workspace/ComfyUI/execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "/workspace/ComfyUI/execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "/workspace/ComfyUI/nodes.py", line 758, in load_controlnet
controlnet = comfy.controlnet.load_controlnet(controlnet_path)
File "/workspace/ComfyUI/comfy/controlnet.py", line 507, in load_controlnet
return load_controlnet_mmdit(controlnet_data)
File "/workspace/ComfyUI/comfy/controlnet.py", line 413, in load_controlnet_mmdit
control_model = comfy.cldm.mmdit.ControlNet(num_blocks=num_blocks, operations=operations, device=load_device, dtype=unet_dtype, **model_config.unet_config)
File "/workspace/ComfyUI/comfy/cldm/mmdit.py", line 14, in __init__
super().__init__(dtype=dtype, device=device, operations=operations, final_layer=False, num_blocks=num_blocks, **kwargs)
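For what it's worth, an "unexpected keyword argument" TypeError at this point usually means the ControlNet file's detected config contains a key (here 'image_model') that the installed ComfyUI's MMDiT ControlNet class doesn't know about yet, i.e. the ControlNet is newer than the ComfyUI code loading it, so updating ComfyUI is the usual fix. A minimal illustration of the mechanism (not ComfyUI code; the names are made up):

class MMDiTControlNetStub:
    def __init__(self, num_blocks):           # no 'image_model' parameter here
        self.num_blocks = num_blocks

# A config produced for a newer loader carries an extra key the old class rejects.
config = {"num_blocks": 19, "image_model": "flux"}  # hypothetical config keys
try:
    MMDiTControlNetStub(**config)
except TypeError as e:
    print(e)  # __init__() got an unexpected keyword argument 'image_model'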