This is everything! Thanks for showing a cloud-based alternative, I have a very weak GPU
Thank you, glad you liked it 🧡
This channel is pure gold. Thank you for sharing so much value!
Great piece of content Ömer. Thanks and keep it up!
Thanks a lot for your support 🧡
Thank you for your informative video on using ControlNet with ComfyUI; it was very helpful!
I have a question: is it possible to manually edit the previewed lineart generated by ControlNet? For example, can we export the generated lineart, modify it in graphic editing software, and then re-import it into ComfyUI for further use?
Could you provide some guidance on this?
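For anyone wondering about the round trip asked about above: the lineart preview is just a grayscale image, so it can be saved, edited externally, and loaded back through a Load Image node. A minimal sketch with Pillow (the filenames and the synthetic stand-in image are hypothetical; a real file would come from a Save Image node):

```python
from PIL import Image

# Stand-in for a lineart preview exported from ComfyUI's Save Image node
# (a real file would be opened with Image.open("lineart_preview.png")).
lineart = Image.new("L", (64, 64), 200)   # faint gray background
lineart.paste(40, (16, 16, 48, 48))       # darker stroke region

# Example touch-up: threshold to pure black/white so the ControlNet
# input stays crisp after manual edits in an external editor.
cleaned = lineart.point(lambda v: 255 if v > 128 else 0)
cleaned.save("lineart_edited.png")  # re-import via a Load Image node
```

The same idea applies to edits made in Photoshop or GIMP: as long as the result is saved as an image, ComfyUI can pick it up again.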
Keep learning and sharing your knowledge! Thank you for that!
One question: is it possible to use mask passes/channels to control certain parts of the image, for example window frames, background, trees, etc., if we have all of that in the input image? Does that go through ControlNet?
✌✌✌
Thanks a lot, I appreciate the support
Yes, of course, everything is possible in ComfyUI. We can mask over the areas we want to improve and regenerate them with ControlNet. I actually upscale with this technique; it works great, especially on people's faces.
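The masking idea in the reply above can be sketched in code: given an object-ID pass from the 3D render (each object class painted a flat color), a per-object mask can be extracted and fed into an inpaint/ControlNet pass. The colors and filenames here are hypothetical:

```python
from PIL import Image

# Hypothetical ID pass where each object class has a flat color
# (e.g. trees painted pure green in the 3D render's mask pass).
id_pass = Image.new("RGB", (64, 64), (255, 0, 0))  # background: red
id_pass.paste((0, 255, 0), (16, 16, 48, 48))       # trees: green

TREE_COLOR = (0, 255, 0)

# Build a black/white mask selecting only the tree pixels; the white
# area is what an inpaint/ControlNet pass would be allowed to regenerate.
mask = Image.new("L", id_pass.size, 0)
mask.putdata([255 if px == TREE_COLOR else 0 for px in id_pass.getdata()])
mask.save("tree_mask.png")  # load as a mask in ComfyUI
```

Repeating this per color gives one mask per object class (window frames, background, trees, ...), which can then drive targeted regeneration.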
Hi, thanks for the amazing tutorial! Can you help me? I don't understand how to install your workflow in ComfyUI. Which file are you dropping into ComfyUI? Thanks!
Big props on your video! Where did you find the rvxl3 checkpoint? I can't find it on Civitai or Hugging Face :(
Nice 👌🏻
Is there any AI render engine that can render a 3D scene exactly, with details, to replace common render engines like V-Ray or Corona?
You're a genius, thank you so much, friend :)
🧡🧡
Thanks for the wonderful content. I'm having trouble launching the notebook on RunPod; there seem to be some issues between LastBen's notebook and RunPod, so it isn't launching smoothly. Do you have a workaround using a different cloud service, or any recommendations?
Hi Omer, this is awesome; thank you so much! I'm having an issue with Turbo, where I receive this error message while processing the KSampler: "Error occurred when executing KSampler: mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)". I'm trying to execute your workflow without altering it. What might I be doing wrong? Thank you!!
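A note on the error above: "mat1 and mat2 shapes cannot be multiplied" is the standard matrix-multiplication shape rule, and these particular numbers usually point to mixed model families. A 2048-wide conditioning tensor is characteristic of SDXL's concatenated text encoders, while a 768x320 projection is the width an SD1.5-family UNet expects, so a likely cause is pairing components from different families (e.g. an SD1.5 ControlNet or checkpoint with SDXL Turbo conditioning). A minimal illustration of the shape rule:

```python
def can_matmul(a_shape, b_shape):
    """A @ B is defined only when A's column count equals B's row count."""
    return a_shape[1] == b_shape[0]

# The shapes from the error message: SDXL-style conditioning
# (2 chunks of 77 tokens x 2048 dims) hitting an SD1.5-sized projection.
sdxl_cond = (154, 2048)
sd15_proj = (768, 320)

print(can_matmul(sdxl_cond, sd15_proj))   # False: 2048 != 768
print(can_matmul((154, 768), sd15_proj))  # True: SD1.5 conditioning fits
```

Checking that the checkpoint, ControlNet, and any LoRAs all belong to the same base model family is usually the fix for this class of error.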
super powerful!!
I agree, this workflow is handy and efficient
Is it possible to use this without RunPod? With a mid-range GPU, how long does a render take? Thanks.
I have a GeForce RTX 3060 GPU with 6GB of memory, and it took 20 to 30 seconds to get a render. It's not really efficient to wait that long for a real-time render, which is why I chose to rent a GPU on RunPod.
Question: where did you get the workflow template? Is it inside the first link? Can anyone help?
This is also unclear to me... I drag an image into the workflow, but nothing happens.
I found out how this works: you have to use the image that is on his Discord, because it has the node structure information embedded in it.
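For context on the comment above: ComfyUI embeds the workflow graph as JSON in the text metadata of the PNGs it saves, which is why dragging one of its output images onto the canvas restores the whole node setup, while an ordinary screenshot does nothing. A sketch that simulates such a file and checks for the embedded chunk (the filename and dummy graph are hypothetical):

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Simulate a ComfyUI output: the node graph is stored as JSON in a
# PNG text chunk named "workflow".
meta = PngInfo()
meta.add_text("workflow", json.dumps({"nodes": [], "links": []}))
Image.new("RGB", (8, 8)).save("comfy_output.png", pnginfo=meta)

# Check whether an image carries an embedded workflow before dragging
# it into ComfyUI; plain images (screenshots, re-saved copies) won't.
info = Image.open("comfy_output.png").info
print("workflow" in info)
```

This also explains why images that have been re-exported or compressed by chat apps sometimes stop working: the metadata chunk gets stripped.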
Hello! Let me ask you a question: I create real-time renders with AI (synced to a 3ds Max model), and the generated pictures have a lot of noise! The quality is not high; why is that? Thanks!
Amazing!
Hey Omer, I'm trying to download the resources/files you mentioned in the Jupyter installation, but I can't find a link to them. The link on the page you referred to in the description here is not clickable / doesn't open anything. Am I missing something?
NVM, found it.
Super!👌✋💪👏👏👏👍👍👍
Thanks a lot 🧡
Where is the DepthAnything custom node?