ComfyUI workflow management Tips | Buses , Routes and Efficiency Nodes
This video includes tips for organizing ComfyUI workflows, as they can become very messy.
ComfyUI is probably the most powerful tool for Stable Diffusion workflows, with infinite possibilities.
The video includes notes about:
00:00:23 ComfyUI workspace manager to organize workflow files
00:02:26 Combining nodes & groups
00:03:15 Efficiency Nodes
00:05:57 Multi-ControlNet stack
00:07:10 Templates
00:08:40 How to display node name origins
00:09:27 Organizing workflows using routes
00:12:32 Organization using buses and pipes
Check the ComfyUI beginners guide at
th-cam.com/video/p_lKJSUNV_0/w-d-xo.html
Efficiency Nodes:
github.com/LucianoCirino/efficiency-nodes-comfyui/tree/v2.0
Workspace manager:
github.com/11cafe/comfyui-workspace-manager
WAS Node Suite:
github.com/WASasquatch/was-node-suite-comfyui
Views: 2,482

Videos

ComfyUI Beginners guide installation and basic usage for stable diffusion
1.6K views • 5 months ago
Beginners guide for ComfyUI installation and basic usage. In this video, we will cover: installation steps, a basic usage guide, where and how to download workflows, and how to build a basic workflow from scratch. The question is: why use ComfyUI? Because it offers better automation for commercial purposes, and better performance. The downside, however, is node conflicts, so trying to use online wor...
Guide to Change Image Style and Clothes using IP Adapter in A1111
27K views • 7 months ago
#a1111 #stablediffusion #fashion #ipadapter #clothing #controlnet #afterdetailer #aiimagegeneration #tutorial #guide The video talks mainly about uses of IP Adapter in Automatic1111 for applying a style or a dress based on a reference image. 00:00:00 Introduction and sample results: see examples of what we will learn and generate in this video, such as changing image style based on a reference image,...
Faster Video Generation in A1111 using LCM LoRA | IP Adapter or Tile + Temporal Net Controlnet
29K views • 7 months ago
Computer specs: RTX 3070 8GB Laptop GPU, 16GB RAM, nothing else matters. Contents include: sample results, generation using LCM LoRA, Tile / Temporal / Soft edge ControlNet, generation without ControlNet using LCM LoRA, generation with LCM LoRA, IP Adapter and other ControlNets, and notes about DaVinci Resolve. In this video, we will be using LCM LoRA in Automatic1111 in order to generate videos 3 to ...
Super Fast Image Generation in stable diffusion using LCM LoRA
20K views • 7 months ago
#lcm #stablediffusion #aiimages #a1111 #aiupdates #LoRA #art Update: the LCM sampler is available in A1111 now. 00:00:00 Introduction and sample results: LCM (Latent Consistency Models). This new LoRA model requires only 4 to 8 sampling steps, instead of the usual 25 to 50 steps, to produce complete, detailed images. 00:00:43 Downloading LCM LoRAs: where to download them from 00:01:31 Using LCM in A1111 0...
NVIDIA Update Solves CUDA error (but very slow) -Train Dreambooth, SDXL LoRA with Low VRAM
5K views • 8 months ago
#stablediffusion #a1111 #nvidia #update #cuda #cudaerror #lowvram #kohyass #LoRA #dreambooth #tensorRT (Update: while the update is able to solve CUDA memory errors, I have seen it to be very slow with SDXL... it is not very practical to use with low VRAM... it works, but slowly; hopefully in the next update we get better performance than the current one, as this is a new feature and very likely t...
ReActor Face Swapping of images and Videos in A1111
33K views • 8 months ago
#stablediffusion #reactor #faceswap #faceswapping #a1111 #aivideo #aiimages 00:00:00 Introducing ReActor in A1111 and face-swapping showcases: showcases of video and image face swapping 00:00:18 Why ReActor instead of Roop? 00:01:16 Installation of ReActor 00:02:18 Using ReActor in A1111: settings and example usage 00:04:51 Face swapping in batch images for videos: how to generate frames ...
Dalle3 is Free! showcase and tips to use with SDXL
1.6K views • 9 months ago
#dalle #dalle3 #aiimagegeneration #aiimages #stablediffusion #a1111 00:00:00 Sample DALL-E 3 images, showing prompts used and example images 00:00:39 How to use DALL-E 3 from Bing Chat 00:01:31 Improving DALL-E output using Stable Diffusion: improve faces and realism of the DALL-E 3 images using Stable Diffusion in A1111. This short video will display sample images generated using DALL-E 3's free Bing ...
DALL E 3 initial impression, Cons vs Pros
1.5K views • 9 months ago
#dalle3 #stablediffusion #aiimages #a1111 #aiart #dalle #chatgpt #midjourney Update: DALL-E 3 is currently free to use as part of Bing Chat's features. 00:00:00 Introduction: will DALL-E 3 replace Stable Diffusion? The short answer is no; each has its own use cases 00:00:43 DALL-E 3 image generation features 00:05:24 Comparison with Stable Diffusion images using the same prompts to generate sample pictures...
AI Video 2 Video Animation Beginners Guide , in Stable Diffusion and A1111
33K views • 9 months ago
#aivideo #aianimation #stablediffusion #a1111 #controlnet #automatic1111 #video2video #img2img This video is for beginners who are interested in seeing how AI videos can be generated based on a reference video in Stable Diffusion and Automatic1111. 00:00:00 Introduction and showcases: see how reference videos are used to generate new videos using AI in Stable Diffusion and A1111 00:01:14 Vide...
Consistent Characters in Stable diffusion Same Face and Clothes Techniques and tips
55K views • 9 months ago
Downgrading A1111 to older versions to solve some extension problems
5K views • 10 months ago
Amazing updates for Hires fix and SDXL faster refiner pipeline in A1111 v 1.6 for stable diffusion
8K views • 10 months ago
Complete Controlnet Guide for Stable diffusion and batch image generation in A1111
8K views • 10 months ago
Style LoRA Training guide for Stable diffusion 1.5 and SDXL Concepts Results and Conclusion
11K views • 10 months ago
How much money will a Youtube channel with 1000 or 2000 subscribers make with educational content ?
947 views • 10 months ago
Kohya ss installation on Runpod for LoRA SDXL or SD training for stable diffusion
4.3K views • 10 months ago
SDXL 1.0 vs SD 1.5 Character Training using LoRA Kohya ss for Stable diffusion Comparison and guide
25K views • 10 months ago
SDXL 1.0 Tips in A1111 Low VRAM and other Errors and Refiner use cases for Stable Diffusion XL
15K views • 11 months ago
LoRA Clothes and multiple subjects training for Stable diffusion in Kohya ss | Fashion clothes
40K views • 11 months ago
After Detailer for automatic face fix in LoRA and Stable diffusion in Automatic1111
23K views • 11 months ago
LoRA vs LyCORIS and Regularization comparisons in Kohya ss | stable diffusion person training Part 2
15K views • 11 months ago
stable diffusion simplified for non technical people | How stable diffusion and training works
3.1K views • 11 months ago
Beginners Guide for Stable diffusion and Automatic1111
8K views • 11 months ago
Training a LoRA Model of a Character | LoRA training guide | stable diffusion Kohya ss A1111
135K views • 11 months ago
Faceswap using Roop in A1111 and stable diffusion | face swapping | deepfake
23K views • 1 year ago
Outpaint using Controlnet in stable diffusion and A1111 | Method (3)
6K views • 1 year ago
simple method for outpaint using stable diffusion A1111 | Method (2)
12K views • 1 year ago
Outpaint in stable diffusion and Automatic1111 simply using Img2Img | Method 1 , A1111
15K views • 1 year ago
Disable Caps Lock and Num Lock Notification message on Lenovo Legion 5 Pro laptops
59K views • 2 years ago

Comments

  • @4848kitty • 14 hours ago

    Hello! This was an extremely useful video! I'm currently a graduate student in ML and I'm working on training a model for style then using that in img2img. Do you think it's possible to use LoRA with img2img?

    • @AI-HowTo • 6 hours ago

      Great to know... yes, of course; especially for video generation, the use of a LoRA can increase the consistency of the generated video, and depending on the denoising level, the output becomes more or less affected by the LoRA.

  • @pouyab9952 • 4 days ago

    OMG THANKS BRO

  • @matanlevi4457 • 6 days ago

    Thank you!

  • @gustavosuarez7945 • 8 days ago

    Great tutorial! Can this be achieved with ComfyUI?

    • @AI-HowTo • 8 days ago

      Yes, absolutely, as both use the same ControlNet models and the same principles; only the application methodology changes... what matters is to understand what these models do, then discover how they can be used in the target tool, such as A1111/ComfyUI.

  • @slookify • 13 days ago

    It's always the same scene

  • @ElDespertar • 15 days ago

    Thank you so much for this super useful tutorial!

  • @AgileIntentions • 18 days ago

    Hello. May I ask about your hardware? I have a 4070 Ti and... my training speed is around 6-8 seconds per iteration. I see your speed is around 3 iterations per second! Very interesting and curious.

    • @AI-HowTo • 18 days ago

      A Ti is more powerful than mine (laptop 3070), so you should get better speed than me. If you are training on the same image sizes as me and getting this speed, the xformers option might not be turned on, or possibly the drivers require an update.

  • @KiritoxNemesis • 19 days ago

    thanks a lot

  • @mengwang-io7fw • 22 days ago

    Paid sponsorship & business consulting; may I get your e-mail?

    • @AI-HowTo • 22 days ago

      Sorry, not doing that at the time being; thanks for offering though.

  • @banninghamma • 22 days ago

    I like the mention of "real person" when "Olivia Casta" is not a real person (it is a face-swapped character of a real, older model) LOL!

    • @AI-HowTo • 22 days ago

      :) Yeah, I got that note from so many people; to be honest, at that time I didn't know she was an artificial character herself, I just wanted a sample data set with many images that look real. Anyway, everything still applies.

  • @___x__x_r___xa__x_____f______ • 1 month ago

    Hi, the settings seem to have changed for AdamW8bit at 0.0001; the model seems to overfit. Have you noticed a change?

    • @AI-HowTo • 1 month ago

      I have not done any recent training, and usually the algorithm is fixed, so learning rate changes are unlikely to be used differently; not sure. Anyway, if you see things overfit quickly, then using a smaller number of steps could be better. Besides, a learning rate smaller than 0.0001 doesn't make much sense, I think, so we usually consider increasing it, not decreasing it, to learn faster for instance... not sure if any recent changes in Kohya have made things different.

  • @divye.ruhela • 1 month ago

    Subbed! Very good tutorial! I know this is an old video, but I had a few queries. Is it harder to create/train a 'realistic character LoRA' if the original dataset contains AI-generated images created on realistic checkpoints, instead of a real person's photos? I guess what I mean to ask is: can a LoRA created using AI-generated datasets achieve such realism? PS. Also, what would be the best checkpoint to create such an AI-generated dataset? TIA!

    • @AI-HowTo • 1 month ago

      Yes, as long as the training data set is of good quality and does not contain deformations in the eyes or fingers; even the slightest deformations could be amplified after training if they are repeated. As for the best checkpoint, I'm not sure at the time being, unfortunately; previously for 1.5 I got the best results with majicmix v4, even for western characters despite the checkpoint being Asian, and for SDXL, Juggernaut XL. Not sure now... I think in general the principles of training do not change over time, so the video is still good to rely on for training.

  • @abdulrehmanrehan6734 • 1 month ago

    How do I make it appear on a Windows PC?

    • @AI-HowTo • 1 month ago

      If you disabled it and want to enable it again, follow the same steps, change the startup type to (Automatic), then click Start from Services.

  • @vascocerqueira • 1 month ago

    How do you do this on a Mac? .bat is for Windows, correct?

    • @AI-HowTo • 1 month ago

      Not sure, but I think it is the same file with a .sh extension, and git reset is a command that is independent of the operating system.

  • @RatScalp • 1 month ago

    THANK YOU

  • @shitokenjpn • 1 month ago

    A little confusing here. You have skipped the image selection part; as per your video, one image from the first frame of the dance video was used in img2img. How did the generated image produce all the exact same images as the frames of the video in the animation?

    • @AI-HowTo • 1 month ago

      I used one image to test the output only. Once satisfied with the results, we go to the Batch tab and just fill in the input directory (with source images) and the output directory (for the output), and batch will generate all the images based on the prompt/details from my test on the single image.

  • @Sithma • 1 month ago

    I tried this but it didn't solve the problem I'm having; since I made a clean installation, I can't use LoRAs anymore. I have a long list of errors for each LoRA in my folder

    • @AI-HowTo • 1 month ago

      Sometimes deleting the venv folder inside the A1111 installation can help solve lots of errors; some A1111 versions are buggier than others.

  • @dreamzdziner8484 • 1 month ago

    How could I miss this gem of a video for so long. Thank you so much for this mate💛🤝😍

    • @AI-HowTo • 1 month ago

      Glad you found it useful; you are welcome.

  • @apnavu007 • 1 month ago

    I'm thinking of buying a laptop with 8GB VRAM. Will I be able to run a Stable Diffusion XL model?

    • @AI-HowTo • 1 month ago

      Yes, it is possible, but it will be slower than you hope for; it can take 20 seconds or more for a 1024x1024 image using Forge UI or ComfyUI... Currently, with where AI stuff is heading, if you plan to buy something you'd better save and buy 24GB VRAM; it is very expensive, but it is the only option that allows you to run everything, such as AnimateDiff models, without suffocating on memory or suffering slow generation.

    • @apnavu007 • 1 month ago

      @AI-HowTo Then I'll just have to buy a PC, good choice.

    • @AI-HowTo • 1 month ago

      Yes, a PC is a lot more practical, cheaper, and more powerful; avoid laptops unless you really need to move around often. Even the lower-tier RTX 3060 for PC is a lot more powerful than its laptop counterpart and has more VRAM.

  • @nicolaseraso162 • 1 month ago

    Hey bro, do you know how to install insightface in Automatic1111 (I use Paperspace) in order to use the Face ID option in IP Adapter?

    • @AI-HowTo • 1 month ago

      Not sure; for me it worked without any problems. I just downloaded the IP Adapter Face ID models into the ControlNet models folder and the Face ID LoRAs into the LoRA folder, and made sure ControlNet was up to date; it then automatically downloaded the necessary extra models related to insightface, such as buffalo_l. Not sure why some have trouble with this while others don't.

  • @sahilchowdhari5298 • 1 month ago

    My webui keeps showing a timer on the checkpoint and all LoRAs/embeddings/etc., and tries to load them all again on every start. Is there a fix for that?

    • @AI-HowTo • 1 month ago

      I have not seen this issue before, but it could be due to low memory in your PC, which forces A1111 to load/unload models in multiple stages; not sure. I suggest trying github.com/lllyasviel/stable-diffusion-webui-forge, which has the same UI as A1111 but has better memory management, might run faster, and automatically detects the best memory settings that can run on your PC.

    • @sahilchowdhari5298 • 1 month ago

      @AI-HowTo Thanks for the reply; it was working fine for months and broke out of nowhere. I will wait a week before fresh installing.

    • @AI-HowTo • 1 month ago

      I see; you can try adding git pull (in webui-user.bat, on the line after set COMMANDLINE_ARGS=) as well; maybe that helps bring in any updates and fixes the problem, if auto update is not enabled in your installation.
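For reference, the git pull edit described above, sketched against the stock webui-user.bat layout (the --xformers argument is only an example; keep whatever arguments you already use):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers

rem Pull the latest A1111 updates on every launch
git pull

call webui.bat
```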

  • @solutionxpress2378 • 1 month ago

    I love you; thousands of tutorials in Spanish and this was the one that helped me

    • @AI-HowTo • 1 month ago

      great to know, thanks

  • @gboediman • 1 month ago

    Thanks SO MUCH - you saved my time!!

  • @Hshjshshjsj72727 • 1 month ago

    For better results we should use what? Splinter and what else did you say?

    • @AI-HowTo • 1 month ago

      I think, for perfect consistency, Blender (3D software) is the right tool; Stable Diffusion is not the one for perfect consistency of objects/faces/clothes, though one can still achieve good results with it.

    • @Hshjshshjsj72727 • 1 month ago

      Thank you. Do you have a video on that you can link yet? Also, would it allow me to create photorealistic portraits, like for social media? That is my goal.

    • @AI-HowTo • 1 month ago

      Sorry, Blender is a free 3D software tool; it is for games and realistic 3D work and is not AI-based... For social media uses, such as creating realistic pictures, Stable Diffusion is really good at creating photorealistic images; you can check the other videos in this channel, hopefully you find something useful. The only drawback of Stable Diffusion, I think, is the computing power (a good graphics card) it needs, which makes it difficult for most people to really dive into it and its capabilities.

    • @AI-HowTo • 1 month ago

      th-cam.com/video/vA2v2IugK6w/w-d-xo.html this video shows how one can create a LoRA model that can be used, for instance, for creating anything related to a specific photorealistic character... Two LoRAs could also be combined to create a new character with features from both, using a specific ratio of each... or one can use the same LoRA with 0.7 weight to get a different variation of the same character... there are no limits on what one can do... If you are new to Stable Diffusion, then th-cam.com/video/RtjDswbSEEY/w-d-xo.html could be a good starting point... and you may want to check other creators' content, such as www.youtube.com/@sebastiankamph (Sebastian has lots of content about creating things) or www.youtube.com/@OlivioSarikas (Olivio's content is also fun to watch)... My content contains just a limited number of videos, mostly about techniques such as LoRA creation and training, or provides tools and techniques for creating certain things that other creators may not have covered properly or in depth.

    • @Hshjshshjsj72727 • 1 month ago

      @AI-HowTo Oh excellent, thank you very much 😊

  • @monkeysit7826 • 1 month ago

    I have questions about photo selection. For example, I want to create a real character with all different face angles, including face close-ups and face with upper body. It seems that if the training images include too many face close-ups and only a few face-with-upper-body shots, the images generated afterwards with upper body will have failed faces, whereas close-up generation will be fine. So is the ratio, or proportion, of different kinds of images important to prevent overfitting of one type and increase photo diversity? In general, how many photos per type and how many training steps per type of image would give good flexibility as well as good quality? To make it more understandable: let's say I just need a good close-up 45-degree face and a 90-degree side face for my whole project. How many photos and training steps should I use in general?

    • @AI-HowTo • 1 month ago

      There is no rule... everything is purely experimental; even the creators of Stable Diffusion do not know. I think that close-up and upper-body shots should have the same number, to have a balance in training; we would usually have a few full-body shots, such as 10%, and most shots are portraits/close-up/upper body. You might be interested in watching this newer video, th-cam.com/video/vA2v2IugK6w/w-d-xo.html, which uses a smaller number of images... As far as I have seen, full-body shots are difficult to reproduce with high quality, which is why we often use After Detailer to repaint the face (with a prompt that has the face with the LoRA inside). Some people train only close-ups in one LoRA and upper body in another LoRA to get better results. As for full-body shots, since they will never work as perfectly as one wants, they are best kept at a small ratio, such as less than 10% (for realistic models in which details matter). Experimenting is the key eventually; some models may work from the first experiment, others might take tens until you get something good; even using different regularization images can affect the output greatly.

    • @monkeysit7826 • 1 month ago

      @AI-HowTo Thank you. It's helpful.

  • @dragongaiden1992 • 1 month ago

    Friend, can you do it with XL? It is very difficult to follow along if you use SD 1.5; basically everything is done differently from your video, and I get many errors and deformed images

    • @AI-HowTo • 1 month ago

      True, XL is certainly better, but I still don't use it, unfortunately, on my 8GB video card.

  • @53021417 • 1 month ago

    I get this problem where ReActor will skip any image/frame of a video that doesn't have a face in it, so when I put it back into a video (output), the audio will be out of sync and the video will be all choppy. Is there any way to solve this problem?

    • @AI-HowTo • 1 month ago

      Even if the image doesn't have a face, ReActor will output the image (based on my batch image tests); not sure why this is not working out for you. You should do some tests with 5 images, for instance, to see where the problem happens.

  • @Lell19862010 • 1 month ago

    Is there any possibility to use batch with openpose, making each image with a different seed?

  • @chiptaylor1124 • 2 months ago

    I really appreciate you for making this video. It solved my issue. Thank you so much.

    • @AI-HowTo • 1 month ago

      Not sure, but I think it's the same file on Mac but named webui-user.sh; you edit it in the same way and put the command there. If you go through the .sh file, it might give you some guidance.

  • @rbdesignguy • 2 months ago

    Why not just crop in Photoshop and save yourself a step?

    • @AI-HowTo • 2 months ago

      I think I did that at some point

  • @damned7583 • 2 months ago

    Where do I download the ip_adapter_clip_sd15 preprocessor?

    • @AI-HowTo • 2 months ago

      I think it is (ip-adapter_sd15.bin)... all 1.5 models are in huggingface.co/h94/IP-Adapter/tree/main/models

    • @damned7583 • 2 months ago

      @AI-HowTo I work with Google Colab; could you tell me which folder to place this file in?

    • @AI-HowTo • 2 months ago

      I think it should be the same as the local installation folder, which is the ControlNet models folder; on my local installation that is stable-diffusion-webui\extensions\sd-webui-controlnet\models... but I think A1111 also looks inside the stable-diffusion-webui\models\ControlNet folder as well.

  • @Damage23 • 2 months ago

    IT DIDN'T WORK

  • @beanbean9926 • 2 months ago

    THANK YOU SO MUCH I'M AT MY KNEES KISSIN UR FEET FRL

  • @mothishraj4463 • 2 months ago

    Hey, I have two questions: 1) How did you get the image output for each epoch? I'm getting only the tensor data. 2) Can I train a color and a pattern (leopard-pattern fabric) and use it on any garment (by eliminating anything related to leopard or animal patterns)?

    • @AI-HowTo • 2 months ago

      1) From the sample images config section, as in th-cam.com/video/wJX4bBtDr9Y/w-d-xo.html: we choose, for instance, 1 for epochs to generate 1 image each epoch, and we write in the sample prompts the prompt that we want to display; it must be written as shown in the sample text, complete, including the image size to display. 2) Yes, as in the th-cam.com/video/RT2jj-5t8x8/w-d-xo.html training guide, which is a style; this helps train styles/patterns rather than objects. And yes, we eliminate anything related to the pattern (leopard or animal pattern) from the image descriptions of the training images and keep everything else in the description.
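For reference, a line in Kohya ss's sample-prompts file looks roughly like this (a sketch; "ohwx woman" is a hypothetical trigger word, and per the sd-scripts sample-prompt format, --w/--h set the image size, --s the steps, --l the CFG scale, and --d the seed):

```text
a photo of ohwx woman, upper body, looking at viewer --w 512 --h 768 --s 25 --l 7 --d 42
```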

  • @moulichand9852 • 2 months ago

    Is there any script available without using the web UI?

    • @AI-HowTo • 2 months ago

      The web UI is built on top of Python scripts, so everything in Stable Diffusion image generation or training is based on scripts, and they can be automated; but I have not used that, unfortunately, so I don't have enough expertise to guide you on it.
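As an illustration of that point (not something shown in the video): A1111 launched with the --api flag exposes an HTTP endpoint, /sdapi/v1/txt2img, that a plain Python script can call without touching the UI; the prompt and parameter values below are placeholders.

```python
import json
import urllib.request

def build_txt2img_payload(prompt: str, steps: int = 25,
                          width: int = 512, height: int = 512) -> dict:
    """Minimal request body for A1111's /sdapi/v1/txt2img endpoint."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}

def generate(prompt: str, base_url: str = "http://127.0.0.1:7860") -> list:
    """POST a txt2img request to a locally running A1111 instance
    (started with --api) and return the base64-encoded images."""
    body = json.dumps(build_txt2img_payload(prompt)).encode()
    req = urllib.request.Request(f"{base_url}/sdapi/v1/txt2img", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["images"]

# Example (requires a running A1111 with --api):
# images = generate("a castle on a hill, detailed, photorealistic")
```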

  • @ReinhardMozes • 2 months ago

    Since when does DreamBooth appear there???? I can't understand this :(((

    • @AI-HowTo • 2 months ago

      This video is from a few months ago; back then, DreamBooth training was easier on A1111 and I think it appeared in the GUI by default... if that is what you are asking about.

  • @Eustas5 • 2 months ago

    thank you so much bro

    • @AI-HowTo • 2 months ago

      You're welcome!

  • @--9199 • 2 months ago

    THANK YOU!!

    • @AI-HowTo • 2 months ago

      You're welcome!

  • @ricardoc9436 • 2 months ago

    Sorry, but I don't follow you. I think it is a good video, but I don't understand you very well…

  • @rustyMetal99 • 2 months ago

    I got an RTX 3060 with 12GB VRAM but my ReActor face swapping takes too much time, at least 1 hour for around 300 frames. Please HELP!

    • @AI-HowTo • 2 months ago

      Not sure; this could indicate a problem with the onnxruntime installation, possibly, or that you are using frames with large dimensions, which results in longer processing times. Check the frame dimensions first. I have not faced this issue before, which is why I cannot advise you further on the matter; I didn't even have any trouble with the onnxruntime installation before... but when it is slow, there is an issue.

    • @rustyMetal99 • 2 months ago

      @AI-HowTo I really don't know and can't find a solution, because I've tried multiple command args but none of them reduced my generation time; imagine 9 seconds took 25 minutes (220 frames). My specs are: RTX 3060 12GB / i7 3rd gen / 24GB RAM / normal 1TB HDD. My command args: --xformers --no-half-vae

    • @AI-HowTo • 2 months ago

      I think this is a good speed for applying ReActor through A1111: 25 minutes for 220 frames... sometimes the size of the image used in ReActor may also affect the output... but at 25 minutes it means it is using the GPU, and I don't think it gets faster than this using A1111... Anything related to AI or face swapping can take a while to produce a decent result, and creating a long video could take hours or days depending on the quality and length of the clip.

    • @AI-HowTo • 2 months ago

      My video card is an RTX 3070 8GB Laptop; it can swap and generate using ReActor at a speed of 15 images per minute. The source video input image sizes are 720x1280, and I used a 512x512 face input image in this test.

    • @rustyMetal99 • 2 months ago

      @AI-HowTo Thank you for your response and assistance; I think it's okay to be patient to get a 2-minute video in around 3 or 4 hours. Better than nothing. I was just asking myself how other famous apps give me results within a few minutes even if the video is long; maybe they have multiple computers, split the input across each computer, and re-link the parts later as one video. MAYBE

  • @HPCAT88 • 2 months ago

    thanks. now let's scam some simps on OF

  • @Ziko675 • 2 months ago

    I am confused about one thing. During captioning, do we need to caption all the keywords that we want to train into the model, or do we have to remove those tags if we want them trained into the model? Which one is correct?

    • @AI-HowTo • 2 months ago

      Yes, anything that we want to be part of our LoRA must not be captioned... for example, if our subject always has green eyes, then we'd better not include that in the captions... if she has blonde hair, we'd better not include that in the captions either... we caption only the things that change, for example her clothes, the background, etc.... It is a bit confusing... but this is the best way to caption things and improve accuracy for the LoRA model.

    • @Ziko675 • 2 months ago

      @AI-HowTo Hmm, that's interesting. So you're saying that anything which is consistent among the images should not be captioned/tagged. For example, if I am training some background images as a style whose theme is neon-light cyberpunk, I should not caption cyberpunk or neon lights, as they will be consistent among the images, but I could caption a group of people, for example, or a tall building, because they will not always be there

    • @AI-HowTo
      @AI-HowTo 2 months ago

      Exactly. We can also add a trigger word that refers to your style (it absorbs the common features shared by all the images), which may also help bring that style out when prompting.

    • @Ziko675
      @Ziko675 2 months ago

      @@AI-HowTo I think I understand some of it even if it sounds somewhat confusing. Thanks mate😀

    • @AI-HowTo
      @AI-HowTo 2 months ago

      you are welcome.
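
The captioning rule discussed in this thread (drop tags for features that are constant across the whole dataset, keep tags for things that vary, optionally adding a trigger word that absorbs the constant traits) can be sketched as a small script. This is an illustrative, hypothetical helper, not part of any tool mentioned above; it assumes one comma-separated tag list per image, as kohya-style trainers commonly use.

```python
# Hypothetical helper: strip tags that appear in EVERY caption,
# since constant features should be absorbed by the LoRA (or its
# trigger word), and keep only the tags that vary between images.

def clean_captions(captions, trigger_word=None):
    """captions: {filename: [tag, ...]} -> cleaned copy."""
    tag_sets = [set(tags) for tags in captions.values()]
    # tags shared by every image in the dataset
    common = set.intersection(*tag_sets) if tag_sets else set()
    cleaned = {}
    for name, tags in captions.items():
        kept = [t for t in tags if t not in common]
        # optionally prepend a trigger word that "absorbs" the common traits
        if trigger_word:
            kept.insert(0, trigger_word)
        cleaned[name] = kept
    return cleaned

captions = {
    "img1.txt": ["green eyes", "blonde hair", "red dress", "outdoors"],
    "img2.txt": ["green eyes", "blonde hair", "blue shirt", "indoors"],
}
print(clean_captions(captions, trigger_word="mystyle"))
# "green eyes" and "blonde hair" are constant across the dataset,
# so they are removed; the clothing and background tags remain.
```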

  • @ManDogAndCows
    @ManDogAndCows 2 months ago

    I want to run this off a server I have; it has a GT 1030 with only 2GB. Will it work? It also has 64GB of RAM and 2x 10-core CPUs. Render time is no issue for me since the server works while I do something else; I just want to utilise my server for something other than storage. Also, a Quadro P2000 fits in my server; I'm thinking about upgrading, as it has 5GB.

    • @AI-HowTo
      @AI-HowTo 2 months ago

      it will be impractical to run on 2GB; with A1111 this might not work properly, but Forge has better automatic memory management: github.com/lllyasviel/stable-diffusion-webui-forge provides the same features as A1111 with the same UI, has better memory management, and can run SD 1.5 on 2GB. It might run SDXL too, but it will then fall back to the CPU, which will be slow.

    • @ManDogAndCows
      @ManDogAndCows 2 months ago

      @@AI-HowTo yes, slow these days is unusable. The GT 1030 was the dumbest purchase I have ever made; 2GB, and I don't know if it is the drivers, but I can't get it to render or transcode anything. I found a Quadro P2000 for cheap, so I will run with that. Thank you for the fast response.

    • @AI-HowTo
      @AI-HowTo 2 months ago

      You are welcome. These days, RTX graphics cards are game changers; they are the way to go for AI/gaming/3D. They are expensive, but they seem like the only option to save time and stay up to date with the technology. Best of luck.

  • @huaseynzade9954
    @huaseynzade9954 3 months ago

    thanks goat

  • @masterzed1
    @masterzed1 3 months ago

    you edited too much......

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 3 months ago

    That is so useful to know. I wonder if there is a way to have a chat with you over on Discord or something?

    • @AI-HowTo
      @AI-HowTo 3 months ago

      sorry, not at this time.

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 3 months ago

    at 20:52 you say a larger rank will cause the model to overfit faster, so only use a high rank like 128 with larger datasets of hundreds of images; otherwise stay on 32:4. I would love a source for this assertion. I've been looking for reliable information regarding dimension, and this sounds really interesting, but I would love the source, or at least vaguely where to find it. Thank you!

    • @AI-HowTo
      @AI-HowTo 3 months ago

      What I said was mostly based on trial-and-error tests across tens or hundreds of LoRAs: a higher rank allows absorbing more data, but I found it to overfit faster. The Kohya wiki has some useful info that you can read: github.com/bmaltais/kohya_ss/wiki/LoRA-training-parameters. The wiki is based on hoshikat.hatenablog.com/entry/2023/05/26/223229, which is written in Japanese, but you can use automatic page translation. The Japanese reference was the best I saw online; it contains many extra links on training and stable diffusion topics that I found better explained than any English counterpart.
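
The "32:4" mentioned in this thread refers to the LoRA network dimension (rank) and alpha in the kohya-ss training scripts, which map to the `--network_dim` and `--network_alpha` flags. As a rough, illustrative sketch (the model, dataset, and output paths here are placeholders, and most other training flags are omitted):

```shell
# Illustrative sketch only: paths are placeholders, not real files.
# Rank 32 / alpha 4 as discussed; raise --network_dim (e.g. to 128)
# only for large datasets, since higher rank tends to overfit faster.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="model.safetensors" \
  --train_data_dir="dataset" \
  --output_dir="output" \
  --network_module=networks.lora \
  --network_dim=32 \
  --network_alpha=4
```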

  • @FabLeKebab
    @FabLeKebab 3 months ago

    If this is bugging you, use version 1.7, it will definitely work better!

  • @K-A_Z_A-K_S_URALA
    @K-A_Z_A-K_S_URALA 3 months ago

    Hello friend, is there going to be anything new in LoRA training software for people? I haven't heard anything in a long time; maybe there are some new software tricks?

    • @AI-HowTo
      @AI-HowTo 3 months ago

      I have not been keeping up lately with what's happening in this area, but I think that until we get a new training method (one that is faster, more accurate, more resource-efficient, and better than LoRA and Dreambooth), there won't be anything new for a while.

  • @DarkInspire16
    @DarkInspire16 3 months ago

    my ReActor is running, but there is no enable box. Why?

    • @AI-HowTo
      @AI-HowTo 3 months ago

      there could be a problem with the installation; I have not seen this before. Reinstallation may help, not sure.

    • @DarkInspire16
      @DarkInspire16 3 months ago

      @@AI-HowTo reinstallation of ReActor?

    • @AI-HowTo
      @AI-HowTo 3 months ago

      yes. Reinstallation of extensions is not straightforward in A1111; usually we would delete the related folder in the extensions folder, restart, and install it again.

    • @FabLeKebab
      @FabLeKebab 3 months ago

      I don't have the checkbox anymore either, but I think that's normal since it has been moved: it is now next to "ReActor" itself, and checking that expands it. No mistake.

    • @DarkInspire16
      @DarkInspire16 3 months ago

      @@AI-HowTo I will try