Bravo! Thanks Vladimir
*Thank you for your support!*
This is the best tutorial I've seen on how to use it. Really great.
Holy crap this is great. I'm 6 days down the rabbit hole of A1111/Stable Diff and I can't get enough. I've been looking for this exact video! Thank you!
The "Preview annotator result" button doesn't show. Any tip to make this option appear? (ControlNet 1.1.02)
Thanks for mentioning this. Same issue for me, ControlNet v1.1.112.
I have the same problem, did you solve it?
looks like it's set up differently now, you have to check the box that says allow preview and then click run preprocessor (the little explosion icon next to the preprocessor field)
It was changed; now it's a small icon to the right of the dropdown box. It looks like a spark.
This is an amazing workflow Vladimir, great job! So many people fighting to get exactly this for so long. Again, great job!
thank you
THANK YOU!!, That face trick has been something I've been trying out for months, now I can better make portraits!!
Great to hear!
From an art point of view/perspective, Vlad is the best A.I. mentor on TH-cam, by far.
Thank you for your support!
He is to A.I. art what Da Vinci was to his age. Imagine if Vlad lived in Da Vinci's time. 🤔
@@Geekatplay I just now looked up a TH-cam video about stable diffusion, and it brought me back here, brother. The algorithm knows where to take me for education. It's so good.
This could save the trouble of training models for different faces. Very helpful! Thanks.
Absolutely!
Should it work with every checkpoint?
No, checkpoints need to match the other components in how they were trained.
@@Geekatplay Got it; I tried it with what I already had and it wasn't working. Thanks.
Hello, great video. However, I'm not sure how you got the ControlNet section there and the models? Can you add an explanation for that? There are many results when searching for it and in the link you provided and there is no explanation about that. Thank you.
09:10 Would it have been possible to use openpose_full instead of inpaint, since it also captures the face?
I followed it entirely, I am getting my face pasted on the generation, ( I just want it to keep the structure of myface) it's not blending the face with image, how to do that, which settings to adjust, please help
Hello, I've installed ControlNet, but I can't see the "Preview annotator result" buttons. Should I install another extension, or what?
In the newer version it looks like a spark icon, next to the preprocessor dropdown selector.
@@Geekatplay Ok, I got it, thanks!
Thanks for the tutorial. I can't find control_sd15_canny. Where can I download it? Thanks.
Thanks for this Tutorial. But I can't find the Model under the Preprocessor. I think i ticked all the right stuff in ControlNet and restarted the UI. Any suggestions?
You need to be sure the models are located in the correct folder. I will make a video about it.
@@Geekatplay I don't have this model either. Can you please post a link for it, and write where to put the model, in what directory/folder?
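Following up on the folder question above: here is a small sketch that lists model files in the two folders where the A1111 web UI conventionally picks up ControlNet models. The relative paths are assumptions based on a default stable-diffusion-webui install, so adjust them if yours differs:

```python
from pathlib import Path

# Folders where the A1111 web UI conventionally picks up ControlNet models,
# relative to the stable-diffusion-webui root (assumed default layout).
CANDIDATE_DIRS = [
    Path("models/ControlNet"),
    Path("extensions/sd-webui-controlnet/models"),
]

def find_controlnet_models(root: Path) -> list[Path]:
    """Return every .pth / .safetensors file found in the candidate folders."""
    found: list[Path] = []
    for rel in CANDIDATE_DIRS:
        folder = root / rel
        if folder.is_dir():
            for pattern in ("*.pth", "*.safetensors"):
                found.extend(sorted(folder.glob(pattern)))
    return found

if __name__ == "__main__":
    models = find_controlnet_models(Path("."))
    if not models:
        print("No ControlNet models found; check the folder paths above.")
    for path in models:
        print(path)
```

Run it from the web UI's root folder; if it prints nothing, the downloaded models are probably not where the extension expects them.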
How do you find out what size the model was trained on, to get the best results? I'm finding that adjusting the size proportions of the canvas really drastically affects my image output.
It's in the model description if you're downloading from Hugging Face or Civitai.
@@Geekatplay Thanks for the reply. Found out all the specific sizes for the model I was using; turns out I was using an outdated version of SDXL.
What video card do you have running to be able to get results that fast with all these controlnets and script running?
Awesome!! You're amazing. I spent ages trying to figure this out, and after watching your video I finally learned it. Thank you. 👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻
thank you!
I do not have ControlNet in my settings?
You need to install it as an extension first.
Can this method be used for architectural rendering?
Yes, if you're using a ControlNet model with architectural preprocessing. I can't recall which one off the top of my head, but I will check and post.
@@Geekatplay If you make a post about architecture, that would be great
Thank you for the suggestion, I will.
Love the video, thanks, but when I use inpaint to paint the face and click generate with the same settings, it just puts the face in a random place on the image and does not replace the face.
Be sure you set the masking correctly; it may be inverted.
Did you follow the prior steps to match the pose first?
How do you get Composable LoRA at the bottom?
How did you get those prompts? Is there any tool or site for good prompts?
Yes, I will release a video soon about creating prompts (prompt generators).
How are you using stable diffusion like that?
It is the Automatic1111 installation (UI) plus the ControlNet extension.
Well, I installed everything as well as I could, but after inputting a ControlNet image I can't see "Preview annotator result". There's just nothing there.
Same issue; I do not have the preview result button.
@@lost-frequency Have you found a solution?
@PaonSol and @Lost Frequency Band
Check the "Allow preview" checkbox, then a little boom button will appear next to your choice of preprocessor.
The preview button is not visible now. When you try the tool, you'll see an icon that looks like this: 💥. That is the preview button; just enable preview and tap this icon, and your preview will be there.
Aren't those controlnet modules unsafe due to pickle imports being detected?
They use some calls that could be misused, which is why I usually check the Python code itself if the code isn't covered by the safeguard settings.
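Since the reply above mentions checking the code yourself: here is a minimal sketch of how one might list the imports a pickle stream would trigger, without loading it. The scanning heuristic and the allow-list of "safe" modules are my own illustration; dedicated tools such as picklescan do this far more thoroughly, and .safetensors files avoid pickle entirely:

```python
import pickletools

# Modules that ordinary model checkpoints normally import from
# (an illustrative allow-list, not an official one).
SAFE_MODULES = {"collections", "torch", "torch._utils", "numpy", "numpy.core.multiarray"}

def scan_pickle_globals(data: bytes) -> set[tuple[str, str]]:
    """Heuristically list the (module, name) pairs a pickle stream would import.

    GLOBAL (protocols <= 3) carries "module name" as its argument;
    STACK_GLOBAL (protocol 4+) uses the two strings pushed just before it.
    pickletools.genops only disassembles, so nothing is executed.
    """
    found = set()
    strings = []  # string arguments seen so far, in order
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            found.add((module, name))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            found.add((strings[-2], strings[-1]))
        if isinstance(arg, str):
            strings.append(arg)
    return found

def looks_suspicious(data: bytes) -> bool:
    """True if the pickle references modules outside the allow-list."""
    return any(mod not in SAFE_MODULES for mod, _ in scan_pickle_globals(data))
```

A real .ckpt is a zip archive, so you'd extract its data.pkl with zipfile before scanning; a reference to something like os.system inside is a strong red flag.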
Thank you my friend
Hello, I'm missing "Preview annotator result" (and "Create blank canvas" and "Hide annotator result" too) in ControlNet. Is there something I can do?
Click on the small icon next to the preprocessor selection.
I don't know why, but when I use inpaint it completely ignores the previous controls for the pose and just pastes the face onto a completely different image at random.
Be sure to check which inpainting area you want to use: it should be either "Inpaint masked" or "Inpaint not masked".
@@Geekatplay thank you!!!
Are you using the Automatic1111 GUI? Yours looks very similar to mine, but I don't have ControlNet.
You need to install it in the Extensions tab.
Hi, it looks like the iMac version is different from the Windows one! How do I install it on Windows?
Hello Vladimir, beautiful tutorial; only I don't have the "Preview annotator result" button in the ControlNet section. Do you know how I can get it?
In the new version it's an icon that looks like a spark, next to the preprocessor dropdown.
Please upload the same tutorial with the new version; a lot is different and it's confusing me, since the preview option isn't showing.
Why don't I have an image upload in ControlNet under img2img?
Genius. The video and workflow technique are very much appreciated!
Glad it was helpful!
Where can I find the ControlNet extension???
It's in the "Extensions" tab.
@@Geekatplay OK, thank you, I'll try. But if the style of the model is unreal, like anime or something else, how do I change the style of the face?
I'm lost for words. Subscribed. This is too accurate and detailed to be free
thank you
I just can't get anywhere. I have image A, and when I generate something in image-to-image I get, for example, a cow!
How do I install Composable LoRA?
I need to make a video about it.
@@Geekatplay I think I just found it. But thanks; if you want to make an explanation about it, go ahead, please.
I noticed something about your Stable Diffusion setup when you were using the ControlNet features.
There was a LoRA feature just above the ControlNet menu.
How do I get that in my SD?
Thx so much! That's a super nice tutorial.
You're welcome!
Vladimir, thank you very much. Excellent tutorial. I will try something similar.
Thank you!
Hey, thanks for the tutorial, it helped a lot! But I have a quick question: how do I make the face and the rest of the body have matching colors and tones? Which settings do I need to change? Thanks!
Yeah, I was thinking the same.
He didn't mention it in the video, but there is another ControlNet model simply called "color" (search for t2iadapter color) that makes a mosaic-grid-like sampling of your source image's colors and applies them to the generated image.
Use a full-body shot in the imported image.
@@Geekatplay Could you please explain in detail? How is this done?
How do I configure the RPG4 model?
The link to the manual is in the description; they have recommended settings in there.
Very cool, and helpful! Have you figured out a way to make the in-painted face match the style of the rest of the picture?
Yes...using Affinity Photo you can do just that!
You could do another img2img pass at low denoising with the ControlNet.
@@cryptojedii Would you mind linking a tutorial? Thanks for the recommendation of Affinity Photo; never heard of it.
@@tstone9151 I use the whole Affinity suite for a bunch of stuff. It's not really AI driven, just a photoshop/lightroom alternative (in the case of photo)
I have all the same settings as you, but when I'm in "inpaint" it just generates the face and doesn't keep the body or background? Why is this?
Emm, by the way, my ControlNet does not show "Preview Annotator Result" and "Hide Annotator Result". Can someone help me?
Now it's a small icon by the preprocessor selector.
@@Geekatplay Oh! Thank you, I didn't notice there was a small icon.
Hello, I probably have a weaker version, because I don't have the ControlNet option. It shows me the Deliberate.v2 version. Where can I get a better version? Thank you.
Same; not seeing the ControlNet option.
Be sure you don't have too many extensions installed, or additional tabs will be hidden in the overflow; also clear your browser cache. It could be many things creating this problem; it is hard to tell without seeing it.
@@Geekatplay Thank you for your reply. I searched everywhere in the settings but didn't find it anywhere. I think it is a weaker version. Where can I get a version like yours?
Amazing, thanks, you explained it very well.
Thank you, sir, for sharing your knowledge with the world! I fully watch all the ads for you 😂😅
Such a great trick.❤ Watching these vids makes me realize that I'm still a noob when it comes to SD. 😉
But where is the information in Russian, Vladimir? At least subtitles...
I have launched a Russian-language channel: th-cam.com/channels/zUdmVSghI1WXwOKD-vRRaQ.html
Composable LoRA: I have a completely different menu. Where can I download your version?
Search GitHub for Automatic1111.
@@Geekatplay I installed it, but I have a different menu than you have in the video
@@Geekatplay Composable Lora
▼
Enabled
Use Lora in uc text model encoder
Use Lora in uc diffusion model
I am following along on a Mac; I have v1-5. Looking in settings, I have everything except ControlNet. Any ideas?
How is your computer so fast? Great stuff, thanks for sharing!
WOW! You have me very excited. I need to see where to get started with this. Looks exactly like what I want to start doing! Liked and subscribed!
You can do it!
Can this be achieved using Leonardo or Midjourney?
not yet
Exactly what I was looking for. Thank you.
thank you
Even though the video is a step-by-step guide to portraits, along the way you managed to explain how many of the parameters work. Thanks for the video.
❤❤❤ great
thank you
How to install this software?
check this video: th-cam.com/video/oO3zIfH4LRE/w-d-xo.html
Wow! This is amazing! But how do I get this crazy tool? This isn't Leonardo or the Stable Diffusion website?
This is a local installation of Stable Diffusion; check my channel for the videos on how to install it.
Thank you very much for the tutorial. I went to find those models you used.
thank you
The video on generating portraits looks awesome. May I know what program this is?
Stable Diffusion, local installation. th-cam.com/video/oTrmgXuc3e8/w-d-xo.html
Name of the toolkit, please?
Thank you 😮 master, you're the GOAT ❤
Hey, where can I download the "img2img alternative test" script, please?
The script was part of the installation; you can also check GitHub.
What is the website he is using to run Stable Diffusion?
It is a local installation: th-cam.com/video/oTrmgXuc3e8/w-d-xo.html
How was this software set up? What's the install process?
It is Stable Diffusion, the Automatic1111 installation.
Where do you get the checkpoint, and how do I install it, please?
You can copy the checkpoint into the models folder.
Quality walkthrough. Can you explain in more detail what the LoRA configuration means and what it is doing? Thanks in advance.
thank you!
I watched your video on the same topic in Russian, but decided to watch the English one as well. The technique isn't very successful; it suffers from the same drawbacks as face-swappers like ReActor: the face is transferred with its own style, which may not match the style of the image it is being inserted into. For example, if the base image is a watercolor-style drawing and the face is transferred from an ordinary photo, it will still look like a photo on the watercolor drawing, i.e. it won't match the style. Do you know how to solve this problem in SD?
Amazing! I think the inpaint will solve my lipstick issues for singing videos! And I could learn more about the ControlNets! Thanks a lot.
It's amazing. Man, do you think it's possible to apply this technique to food photography or products?
I will try that.
brilliant video !! thanks
thank you for your support!
Hi, how can I get these prompts?
I am starting short videos with examples and prompts in the description; check them out.
How can we batch inpaint, for the purpose of processing PNG sequences?
You can create multiple masks and load them as a batch.
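For batch inpainting a PNG sequence, one hedged approach is to drive the web UI's API instead of the browser: start the UI with --api and post each frame/mask pair to /sdapi/v1/img2img. The folder layout (frames/, masks/) and the parameter values here are illustrative, not from the video:

```python
import base64
from pathlib import Path

def encode_image(path: Path) -> str:
    """Base64-encode an image file for the JSON payload."""
    return base64.b64encode(path.read_bytes()).decode("ascii")

def build_img2img_payload(image_b64: str, mask_b64: str, prompt: str) -> dict:
    """Payload for the web UI's /sdapi/v1/img2img endpoint (UI started with --api)."""
    return {
        "init_images": [image_b64],   # base image for this frame
        "mask": mask_b64,             # white = area to repaint
        "prompt": prompt,
        "denoising_strength": 0.4,    # illustrative value; tune to taste
        "inpainting_mask_invert": 0,  # 0 = inpaint the masked area
    }

# Hypothetical layout: frames/0001.png pairs with masks/0001.png, and so on.
# import requests
# for frame in sorted(Path("frames").glob("*.png")):
#     mask = Path("masks") / frame.name
#     payload = build_img2img_payload(encode_image(frame), encode_image(mask), "portrait photo")
#     requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
```

The same fields accept any per-frame prompt, so the face structure stays pinned by the mask while the prompt varies.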
How do I install this program? Sorry, I'm new 😂
th-cam.com/video/PqCIUniQ_U8/w-d-xo.html
Congratulations on the job! Can I use this technique to create pets?
Thank you. I will make a video specifically about pets, and yes, it does work. I have made a lot of photos/videos with my Border Collie.
🎉🎉🎉
Smoooth 👍
Thanks 💯
Hey man, great video, but I just can't manage to get full-body persons. It's always cropped to the head or upper body. Any ideas what I can do?
This might help: I changed the first prompt to "full body pose" and it gives a near-full body.
It was originally a 2/3 photo. For full body there are tricks: add (hat), (shoes), (floor), (sky), etc., something above the subject and something below.
@@Geekatplay Ah nice, that sounds smart. Thanks!
Is the GUI available in Google Colab?
yes
I'm sorry, but I can't find any Colab link with the GUI you're using. Could you please provide it in the comments? Thank you!
I think that is A1111, correct? Thanks!
What GUI is he running ?
Automatic1111
For some reason, after installing and launching ControlNet, choosing an image, and setting all the values, the three lower tabs (Create blank canvas, Preview annotator result, and Hide annotator result) do not appear for me. Could you tell me what the reason might be?
You need to go to Settings and, in the ControlNet tab, set how many ControlNet instances you want to see.
Genius. The video and workflow technique are very much appreciated! thank you
thank you for your support!
Thank you, a very useful lesson. However, I ran into a problem: the preview window is not active. It exists, but it doesn't show anything. Do I need to install something?
In some versions (the latest), the preview is located beside the preprocessor selection; it's a small icon.
@@Geekatplay Thanks!
Great video. Is it possible to replicate the same face from the input?
Yes, absolutely
@Geekatplay I have been struggling with it for several weeks now. Do we need to mask and generate again for the face and features?
You have the option to invert the mask for inpainting. You can send me an email with the problem; I need more info on what you are trying to do.
I tried this on my phone and it worked, but I can't find the option to use my own picture. Where is it?
Wow, this is amazing! I updated the Civitai page to announce that I started training RPG V5.0. I will ship that version with a set of Control Net image to help people have more control on the model.
thank you
Top!
thank you!
Amazing, amazing content, thank you.
Glad you enjoy it!
Sir, the website name??????? Link, please, fast.
My client is waiting.
A link for what?
Great work! Thanks so much, very comprehensive!
excellent, really nice
Thank you! Cheers!
Where can I get this script?
prompt?
Img2img alternative test
Go to the Extensions tab and click to load the available extensions; it will be in the list.
@@Geekatplay thanks
I really enjoyed this video. All of your videos are great. Thanks.
thank you for your support!
Could you do this with a photo of a building or house, keeping an accurate representation of the subject and placing it in a different environment?
Does all of this still fit within 16GB VRAM or do I need more? Thanks nice video.
great video really helped me understand how to keep the face structure, is it possible to do inpaint batch in order to create videos, that retain the face structure? working on your other video on created flicker free video and wanted to use this feature to keep the face structure consistent with my models face.
Thank you. It is possible, but you will need to load masks for inpainting.
Very interesting
thank you
Kindly help.
Hello sir,
any guide or tip on how to completely delete Stable Diffusion?
I need to reinstall it for a fresh default download.