wizzitygen
Joined Mar 18, 2023
people's stories • human edit • machine's words • human edit • machine's sounds • human edit • machine visuals • human edit
Train a Stable Diffusion Model Based on Your Own Art Style
In this video, you will learn how to use Dreambooth to train a Stable Diffusion model based on your own art style. Artists, get inspired by your own art style and create stylized reference material for your next creation.
Here are some links connected to this video.
Link to Google Colab Notebook - 1-Click Execution of Dreambooth Stable Diffusion: colab.research.google.com/github/sagiodev/stablediffusion_webui/blob/master/DreamBooth_Stable_Diffusion_SDA.ipynb#scrollTo=K6xoHWSsbcS3
How to set up Automatic 1111 and Stable Diffusion using a Google Colab Notebook, by The Ming Effect: th-cam.com/video/BgcLD3CiDpY/w-d-xo.html
How to install Automatic 1111 on your local Windows computer. User interface explained by Sebastian Kamph: th-cam.com/video/DHaL56P6f5M/w-d-xo.html
How to install Automatic 1111 on your local Mac M1 Chip computer, Blog Article by Stable-Diffusion-Art.com: stable-diffusion-art.com/install-mac/
How to install Automatic 1111 Stable Diffusion on your Mac Computer, Video Tutorial by Star Morph AI: th-cam.com/video/DUqsYm_rYcA/w-d-xo.html
Views: 31,820
Videos
20 FOR 20
230 views · 1 year ago
This idea was presented to me by my partner, Corinne. We collaborated with one another. She wrote and directed it and did all the prompting and button pushing. The entire process took about 22 hours of work, between working in Stable Diffusion / Deforum, editing in Premiere, upscaling in Topaz Labs, and sound editing. This was done with a combination of continuous prompts and single prompts for...
I tried to upload 30 images, but after some processing it shows: "Upload widget is only available when the cell has been executed in the current browser session. Please rerun this cell to enable." Please help me 🥺😭
This is a great tutorial. As I am new to this, I was wondering if trained model could also be used to transfer style. So, for instance, can I train the model in my own art style, then take a picture (e.g. a photo of a face) and prompt the model to create a new image of the face in my art style?
May today's weakness be tomorrow's strength, my boi 🙏🏼
Please drink water.
I'm getting the error: MessageError: Error: credential propagation was unsuccessful
The clearest demo I've ever seen.
best video ever! Easy to follow
Have you tried training with several styles? I've been trying to train with one style, upload the model to Hugging Face, train this model again, and so on. The problem is that by the 5th or 6th training run, the first style starts to fail.
Does this model work for image-to-image? I hope to get your reply, or a link to a working example if possible.
This was a great video, and the Colab workflow was very easy to follow. I wanted to create a model based on my own art style. This notebook worked perfectly up to the very end: I was able to generate the sample images and also generate images using my own prompt, but after checking my Drive folder, the checkpoint was never saved to the AI_PICS/models folder. All permissions were given to Colab regarding access to my Drive account. So I used my other Google account, which has 14 GB of available space, and built another model, and again the safetensors file did not appear in the Drive folder. Has anyone else experienced this problem?
Thank you for the video. I have a question: I cannot find the file "model.ckpt" in my Google Drive. I've checked my entire Google Drive several times. Where can it be?
I keep getting an error that says: "ModuleNotFoundError Traceback (most recent call last) <ipython-input-1-0000cefd55dc> in <cell line: 13>() ModuleNotFoundError: No module named 'diffusers'". Can you help me, or can someone explain what this means?
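That error usually just means the package was never installed in the current Colab session, for example because the setup cell was skipped or the runtime was restarted. A minimal sketch of a check-and-install helper, assuming a standard Colab Python environment (the function name is illustrative, not part of the notebook):

```python
# Check whether a package is importable; if not, install it with pip.
# Useful when a Colab runtime restart wiped previously installed packages.
import importlib.util
import subprocess
import sys

def ensure_package(name: str) -> bool:
    """Return True if `name` is importable, installing it first if needed."""
    if importlib.util.find_spec(name) is None:
        subprocess.check_call([sys.executable, "-m", "pip", "install", name])
    return importlib.util.find_spec(name) is not None
```

In a notebook, calling `ensure_package("diffusers")` before the training cell (or simply rerunning the notebook's own dependency cell) should clear the ModuleNotFoundError.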
I don't see a file called model.ckpt, and I did everything correctly!
I have the same issue, @euyoss did you find a solution? @wizzitygen
Nope ;C @@themaayte
Hi there, not sure what the problem might be. Have you searched the entire drive? Search (.ckpt).
@@wizzitygen Hi, so I've found the solution: the code doesn't give you a .ckpt file, it gives you a .safetensors file, which is the same thing.
thanks !! @@themaayte
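As the thread above notes, the saved model may be a .safetensors file rather than model.ckpt. Once Drive is mounted in Colab, a quick way to check is to search recursively for both extensions; a minimal sketch (the function name is illustrative):

```python
# Search a folder tree (e.g. a mounted Google Drive) for model checkpoint
# files, covering both the .ckpt and .safetensors extensions.
from pathlib import Path

def find_models(root: str) -> list:
    """Return all checkpoint-like files under `root`, sorted by path."""
    return sorted(
        p for p in Path(root).rglob("*")
        if p.suffix in {".ckpt", ".safetensors"}
    )
```

In Colab this might be called as `find_models("/content/drive/MyDrive")` after `drive.mount("/content/drive")`.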
Damn, I followed all the steps and the model seemed to work correctly in Dreambooth but not in Automatic1111: the *.ckpt shows in the dropdown menu, but I can't select it for some reason. Safetensors files work, though. What did I do wrong?
Hi there, I'm not sure why that would be. Sometimes the latest commit can be buggy, but I am by no means an expert in these matters. I'm sorry I can't be of more help.
I'm curious about the img2img functionality. Rather than typing a prompt of a cat, I'd love to see if it could translate an image of a specific cat into vrcty_02.
Hi there. Not sure I understand what you mean. Like a Siamese cat or the like?
Super!
Glad you found it helpful.
I want to make sure this doesn't sample other artists. I'm fine with it using pictures of objects for reference but is it 100% only sampling my artwork for style?
It samples your art for style, but if you refer to other things in your prompts (e.g. tree, house, etc.) it uses the larger base model to generate those concepts, in the style you trained it on.
Hey, the faces and details are not really good in my model. Is it possible to train it further to improve the details, or should I maybe use better prompts?
My models are "cartoonish drawings" but realistic; the faces still look bad even after adding negative prompts and using a realistic LoRA. Do you know how to fix this?
This is a very straightforward and easy tutorial. Thanks!
How did you download Stable Diffusion WebUI on your Mac?
Where do I download Stable Diffusion on my Mac?
Try googling "Stable Diffusion WebUI for Mac, GitHub". If you have an M1 chip, add that to your search.
I know you need to export at 512 x 512, but does it need to be a PNG?
Do I need to know code? I don't know anything about code, but I do want to use my own artistic style.
Curious: must you use a specific square size for the images? Most of my pictures are different sizes and mostly rectangular.
512x512 pixels is the best size to use as Stable Diffusion itself is trained on that size.
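For rectangular source images like those mentioned above, a common approach is to center-crop to a square and then resize to 512x512. A minimal sketch using Pillow (assumed installed; the function name is illustrative):

```python
# Center-crop an image to a square, resize to 512x512, and save as PNG,
# matching the size Stable Diffusion 1.x was trained on.
from PIL import Image

def prepare_image(src: str, dst: str, size: int = 512) -> None:
    """Center-crop `src` to a square, resize to size x size, save as PNG."""
    img = Image.open(src).convert("RGB")
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((size, size), Image.LANCZOS)
    img.save(dst, format="PNG")
```

Note that a center crop can cut off important parts of a composition, so it may be worth cropping some images by hand instead.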
Can I save the file in .safetensors extension and not .ckpt?
In this instance it saves as a .ckpt.
Making your own Lora really looks like the way to go for targeted results. Thank you for this.
Please upload a tutorial for training a LoRA as well.
omg it worked aaaaaaaaaaaaaaaaaaaaaaaaaaaa. I've been stuck on this issue for months; I'm a noob with this, so little issues would last for weeks. Thanks so, so much. Can't believe I did this on my own lol. Do you have a Discord or community?
Hi there, I'm so happy it worked for you. I do have a Discord Channel #wizzitygen but I am not very active on it. I made this video a while back to give artists a leg up on working with their own images and haven't posted many other videos since. My business takes up much of my time as well as the poetry I write. Thank you for your comment, it is nice to know this video is helping people.
Currently in the process of training. I was actually looking for the styles panel that's on the right in your locally installed SD. How do you make a style like that?
Why can't I find model.ckpt on my Drive?
Not sure. Have you tried searching the entire drive?
Thanks for the tutorial! I wonder how many images to use for the best result; is more always better? I first used about 30 images, and later 200 images, but the latter didn't give a better outcome.
My understanding is 20-30 images. More than that, and you risk overtraining the model, getting no benefit, or perhaps seeing poorer outcomes.
Hello friend, looks nice. Did you do it on a Mac?
Hi there. Yes, it was done on a Mac with Stable Diffusion using the Automatic 1111 interface.
I do mostly abstract illustration. I'm wondering how a model could be trained on my art style if objects aren't recognizable.
I'm not sure exactly, but I believe it will recognize patterns, shapes, color, etc. I'm unsure how you would prompt it, though. I would suggest trying it and experimenting; that would be the only way to know.
Waste of time; it stops working the next day.
Hmm. That is unusual. The model I created in this video is still working fine. One thing to check is to make sure you have the correct model loaded when generating the image, i.e. select the right .ckpt (Stable Diffusion checkpoint) file.
You didn't have to annotate each image?
How can I train one model multiple times? For example: I trained a model to recognize a new art style, and now I want the same model to be able to draw a specific bunny plush in this new art style. In the Colab it says to change model_name to a new path, but a path from where? Google Drive or the Colab folder?
I believe you would have to add your model to Hugging Face and link to it.
@@wizzitygen Looks like you're right. Thanks for the consult.
Hi, thank you very much. I've used this before, and it is for sure the easiest and best video out there. Today, though, I got the following error; can you help? "ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU". Not sure what to do. Much appreciated, thanks.
Hi there, thanks for the kind words. Your best bet with errors is to paste the error in Google. It is usually a bit of a hunt but the solutions are usually out there if you Google the error. Sorry I can’t be of more help.
Would increasing the number of input images improve the accuracy with which the model is able to replicate the art style? Or will you achieve the same results as long as you input roughly 20-30, as mentioned? Great tutorial, thank you for your time.
My understanding is that 20-30 is the sweet spot. Training with more could result in overtraining and could affect your results. But I would experiment and see what works best for you; if you have more, give it a try and compare.
Absolutely not. Fewer is better, make sure to choose the best.
is having names of images with same prefixes a mandatory thing?
Yes, you will get better and more consistent results.
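Renaming a folder of training images to a shared prefix can be scripted. A minimal sketch (the function name is illustrative; "vrcty" is the example token mentioned elsewhere in these comments, so substitute your own):

```python
# Rename every PNG in a folder to a common prefix with a two-digit index,
# e.g. vrcty_01.png, vrcty_02.png, ... for consistent Dreambooth inputs.
from pathlib import Path

def rename_with_prefix(folder: str, prefix: str) -> list:
    """Rename all PNGs under `folder`; return the new file names in order."""
    new_names = []
    for i, p in enumerate(sorted(Path(folder).glob("*.png")), start=1):
        target = p.with_name(f"{prefix}_{i:02d}{p.suffix}")
        p.rename(target)
        new_names.append(target.name)
    return new_names
```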
When I make a model, does it become private? No one can use it aside from me, right?
Hi there, to be completely honest, I am not 100% sure. The model is forked from Hugging Face, so I am not sure whether your trained model goes elsewhere. The person who created the Colab Notebook can be found here: github.com/ShivamShrirao It would be best to ask him.
Hi there, I decided to write Shivam Shrirao and ask him your question. This was his response: Shivam Shrirao Sun, Jul 2, 11:17 PM (10 hours ago) "It's only available to you."
Thank you! This is well done and helpful.
Glad it was helpful!
Can someone show me the process of uploading the .ckpt to Hugging Face and using the model online? Please... anyone?
This might help you. huggingface.co/docs/hub/models-uploading
Thanks so much! I made it, I really made it thanks to you. Trained a model with 39 images, took more than an hour but the results are amazing. I'm so happy!
Congrats! So happy you found the video useful.
18min to go
Hope it turned out to your liking!
Thank you so much for this video, will give it a try 🙏
Thanks for your message! Best of luck!
For the first time with youtube tutorials, I understood everything, thank you.
Glad you found the video helpful.
Thank you, good video. As for your art; I really like that guy/person with the phones showing him/her/it. Really good.
Thanks for the kind words!
Actually, it is better to train a model with the largest image size your GPU can handle. Of course, if you start from the base model, which was trained purely on 512x images, you will have issues at first, so take a custom model that has been trained with larger pictures. The obvious advantage is the level of detail: a certain art style may not require much detail, but some do, and 512x images simply can't carry much detail, and being limited to 512x output is another constraint. I was actually surprised myself, not long ago, by how well a 512x model can handle larger sizes; it basically rendered all my prepared sets useless, because there is such a big difference in quality even just going up to 768x.
I deliberately don't use a specific aspect ratio, because I figured doing so would limit the model to that ratio. Instead I use the number of pixels my GPU can handle and try to input as much variety as possible, and I have seen the duplication issues decrease since doing so; it's basically not happening anymore. What I mean by "number of pixels" is this: I figured my GPU can handle around 800,000 pixels very well. That could be, for example, an 800x1000 picture or a 2000x400 one. The format doesn't really matter; the maximum is the total number of pixels. A model just needs to learn a few examples of different formats for its subjects so it won't start duplicating things across the image grid. I am not certain, however, how large the dataset must be to expand a model's capability in that regard, since I start my own models from merges of other custom models in an attempt to get the best base for my own ideas.
The base model is actually not trained very well: if you see some of the dataset and how the images were described, it is no wonder that things are so often deformed, because there was simply no real focus on proper image descriptions. And that is no surprise; if you have worked on this, you know how much effort descriptions take, even for smaller datasets.
I haven't experimented with resolutions other than 512 yet; perhaps once there are more models that do so, I will give it a try.
@@wizzitygen There are many models; I would actually assume most of the high-quality models on Civitai were trained with larger sizes. Many of them are also based on merges; I would rather say it is hard nowadays to find a model that was not, at some point, trained with larger sizes.
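The "pixel budget" idea described in the comment above can be sketched as a small helper: keep each image's aspect ratio, cap the total pixel count, and round each side down to a multiple of 64 (a common constraint for Stable Diffusion dimensions). The 800,000-pixel budget is just the figure the commenter uses; adjust it for your own GPU:

```python
# Scale (w, h) down so w*h stays under a pixel budget, preserving aspect
# ratio and snapping each side down to a multiple of 64.
import math

def fit_to_budget(w: int, h: int, budget: int = 800_000,
                  multiple: int = 64) -> tuple:
    """Return (new_w, new_h) with new_w*new_h <= budget (never upscales)."""
    scale = min(1.0, math.sqrt(budget / (w * h)))
    new_w = max(multiple, int(w * scale) // multiple * multiple)
    new_h = max(multiple, int(h * scale) // multiple * multiple)
    return new_w, new_h
```

For example, a 2000x400 image already fits the default budget and only gets snapped to multiples of 64, while a large square image is scaled down until its area fits.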
These are great. May I know how I can change the base model that I want to merge with the new one?
You can find different base models on Hugging Face.
Do you not need to add text to the training images to let SD know what they depict?
Hi there. That is what the "Class Prompt" is useful for. It helps classify the image.