Quick tip if you are unhappy with being limited to choosing colors by their names, as in the workflow where the gradient is represented by a vector ("iclight_example_fc_controlled_gradient_01.png"):
By default, the "Create Gradients from colors" node only lets you enter color names, like "orange".
If you convert "start color" and "end color" to input widgets, you can choose any colors you want.
In my case, I used a node called "ColorPicker" as the input, which comes with the LayerStyle package. Now I can choose any color by clicking and moving sliders, or by entering values, just as you would in Photoshop or Krita.
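To see why a name-only widget is limiting: each color name is just a fixed RGB triple, so the node can only reach a small, predefined set of colors, while a picker exposes all 16.7 million. A minimal sketch of that mapping (the few names and values below are standard CSS/X11 colors; the helper function is my own, not part of any ComfyUI node):

```python
# A handful of CSS/X11 color names and their RGB values - the same
# numbers a ColorPicker-style node lets you set directly with sliders.
NAMED_COLORS = {
    "orange": (255, 165, 0),
    "teal":   (0, 128, 128),
    "tomato": (255, 99, 71),
}

def name_to_hex(name):
    """Convert a known color name to the hex string a picker would show."""
    r, g, b = NAMED_COLORS[name]
    return "#{:02X}{:02X}{:02X}".format(r, g, b)
```

So "orange" is only ever `#FFA500`; with the widget converted to an input, any hex value in between becomes reachable.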
This is amazing! I have been looking for a completely free alternative to Beeble's SwitchLight Studio, and have ComfyUI already installed as part of the Krita AI plugin. Can't wait to try this workflow. Thank you so much.
Thank you!
Could you make a guide on how to create consistent characters from an input image with ComfyUI? You just explain things really well 💪🏻
It's like having your subject as an exceptionally well-textured model in 3D software like Blender, but with the benefit that you can create a new model in a few minutes or even seconds. I haven't watched to the end, so excuse me if it's covered, but does it work on other kinds of photos?
Yes, but it does have limitations, mostly depending on which model/checkpoint you are using and what dataset it was trained on.
@Geekatplay ok, thank you
thank you for this video, perfect timing for me, I just wanted to give this a try :)
I am using the portable version of ComfyUI. When I drag and drop one of Kijai's workflow PNGs into ComfyUI, it says "Unable to find workflow in ... (name of png)" - do you happen to know what could be happening here, and how I could solve it?
The same happens when I try to open the workflow PNGs in my Pinokio version of ComfyUI.
You probably did not save the PNG correctly. Be sure to click on the example folder, then click on the image, then click the download button at the top right. If you just right-click on the image and save it, it won't have the embedded workflow info. I will try to create JSON files; they may be easier to use and save.
@Geekatplay you are completely right, actually displaying them and then saving solved the problem - thank you so much for your answer!
So in the example where you used the light of the cyberpunk image to light your portrait, does it treat it A) as if you were facing the lights in the background image?
Or B) as if you have your back to the background?
(The final generation shows you with your back to the lights in the image, so B) would make more sense. At the same time, actually seeing the different light sources reflected on your face would be more interesting, and in a sense more logical, because why would the lights of the background show on your face when you are facing away from them?)
From my experiments, the Photon model does work well with environmental lighting. However, I notice that if there is a bright light directly behind the subject, it creates some lighting up front that matches that light's position. Without detailed documentation it is hard to tell whether it takes the environment image and places it as if it were in front of the subject, with the background being just a low-detail render of the environment, or whether it tries to wrap the lighting around by duplicating it. Needs more testing and digging in.