Without using ControlNet, is there any way to specify the text in the prompt and have it appear in the image? Is that possible?
I like it, thank you for sharing. One question: does the image with the text that you drag into the ControlNet part need to have a specific resolution? What resolution were the text images you used? Thanks in advance.
Check the Pixel Perfect box; it will read the input image resolution automatically.
I am learning a lot from you. Thank you very much!
So nice of you
A huge thanks for a topic like this. I'm wondering how to put exact text on a given image without changing the rest of it. In the img2img tab, which I use most often, the results are very unpredictable, and the text I used as a mask in ControlNet is often impossible to read, even with different preprocessor and model settings. And is it necessary to duplicate the mask in Inpaint? So many questions, so few answers. :)
Sorry, but what is "ControlNet"?
It is a set of model extensions for Stable Diffusion. I have a video coming out with more information about ControlNet.
You can do this just as easily, or even better, in Photoshop or GIMP.
Of course the preprocessing affects the generated text and the image as a whole. Canny is the name of an edge detection algorithm. If you pass a text image without specifying any detection algorithm, no mask is generated and passed to Stable Diffusion to tell it where the actual text is.
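For illustration, here is a minimal sketch of what the canny preprocessor does before the map is handed to ControlNet, assuming Python with opencv-python installed (the file names and threshold values here are only examples, not what the extension uses internally):

import cv2

# Load the text image as grayscale; Canny works on single-channel input
image = cv2.imread("text_image.png", cv2.IMREAD_GRAYSCALE)
# Detect edges; the two thresholds control how sensitive edge detection is
edges = cv2.Canny(image, threshold1=100, threshold2=200)
# This edge map is the conditioning image ControlNet actually sees
cv2.imwrite("canny_map.png", edges)

The white-on-black edge map is what tells Stable Diffusion where the letter shapes are; without a preprocessor (or a model that accepts raw input), that information never reaches the model.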
It also depends on which model you are using; some do not require any preprocessor.
👋