Krita AI Trick for Removing Objects With the AI Diffusion Plugin

  • Published 11 Dec 2023
  • Download the static flood fill and place it in:
    C:\Users\YOURUSERNAME\AppData\Roaming\krita\patterns
    (see the sketch after this list for a scripted way to install a pattern)
    Static fill - download link:
    www.mediafire.com/file/cntbdn...
    Watch this video on the one-click selection plugin by the same creator, Acly:
    • Krita Free Quick Snap ...
    Watch my overview video for installation; the comments there might help if you're having problems:
    • KRITA AI Diffusion FRE...
    I believe this is the plugin creator, show them love ❤
    / auspicious_firefly
    Get Krita free here:
    krita.org/en/
    or
    OLD 5.2.1 from my cloud (if you are running the AI plugin on 5.2.2, let me know whether you hit any issues, for my curiosity):
    www.mediafire.com/file/yyl64o...
    Get the AI add-on free here:
    github.com/Acly/krita-ai-diff...
  • Howto & Style
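Not part of the original description: below is a minimal sketch of the pattern install step mentioned above, assuming Windows with Python available and that the static PNG is already downloaded. The file name static_fill.png and the Downloads location are examples only; adjust them to wherever you saved the image.

```python
# Sketch: copy a downloaded static-fill PNG into Krita's patterns folder on
# Windows so it shows up as a fill pattern inside Krita. Paths are examples.
import os
import shutil

downloaded = os.path.expanduser(r"~\Downloads\static_fill.png")  # the PNG you saved
patterns_dir = os.path.expandvars(r"%APPDATA%\krita\patterns")   # ...\AppData\Roaming\krita\patterns

os.makedirs(patterns_dir, exist_ok=True)   # create the folder if it does not exist yet
shutil.copy(downloaded, patterns_dir)      # Krita picks the pattern up on next start
print("Installed pattern to", patterns_dir)
```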

Comments • 49

  • @streamtabulous
    @streamtabulous  7 months ago

    Reasons I love this plugin and Stable Diffusion:
    I'll cover a few things.
    First, Adobe trained their model on their stock library, which contains copyrighted images, so if you want something you can control this is better: you can train and use your own model, at any resolution you like. I trained mine at 1500, and that was only due to hardware limitations.
    The misconception I keep hearing is that Stable Diffusion has to use copyrighted images; you can train and use your own model, or other people's models, to avoid art stealing if you feel that way.
    Though I feel the real issue is a fear of being made redundant, and I don't believe that will ever be the case, for many, many reasons.
    You are also not limited to one model, so you can download models from CivitAI that don't endorse copyrighted images in training.
    Do note that watermarks appear on some royalty-free images; even Adobe has that issue. On CivitAI there are models trained at 4K, so if you have a big system with an RTX 4090 you can use such a model and resolutions up to 8K.
    The plugin allows you to link to and use RENTED hardware, so even on a low-end system you can pay for that option and have far greater flexibility than Adobe with their single model, plus models trained at higher resolutions.
    You can make and use your own models, which also means you can be unique in what you make. I love that above all, as I trained a model on my own art.
    MASSIVE NOTE: despite training on my art, the renders are completely different; the core AI seems to be trained not to copy and to be unique in what it makes. So while the model is fantastic, it is very different from my digital paintings, which are more basic.
    Adobe benefits:
    Easy to use with fewer issues; you don't have to be computer-savvy, though it is less powerful. It has Adobe support.
    Adobe, in my opinion, is trained on and uses Stability AI for the AI, just like Stable Diffusion, which is made by Stability AI. This means you can expect Adobe to go to 4K and 8K soon enough. Adobe currently has a better language engine attached to their model, so responses are more accurate, though Adobe runs an AI that checks prompts, which affects what you can do and causes confusion, e.g. "fix skin" gets flagged as something lewd, at least when I was using it to fix photos around November 2023.
    Adobe's AI remover IS FANTASTIC, like amazing.
    Adobe's edge blending is better, though visible; the Krita AI plugin sometimes has ghosting at the blend and shadow artifacts.
    Both have their pros and cons. For me this is better because I control it and it has way more features.
    But note again, with high resolutions you do need the hardware or a rented GPU in the cloud; as I mentioned, the cost of an Adobe subscription can very easily get you that.
    Krita plugin big win:
    FREE.
    If, like me, you are poor, free is very appreciated.
    I know this is long-winded, but it covers a few things that I feel are important to understand.
    NOTE:
    On my hardware I don't mind sacrificing resolution by using 512x512 models. The reason is that I bought Topaz Gigapixel AI upscaler back before being broke, so once I have done restorations etc. I run them through the AI upscaler to 4K, 8K and so on; I had to do this with Adobe as well.
    There are free AI upscalers available and I recommend looking at them and trying a few.
    I had to lower the resolution in this video because I'm only using a GTX 1070 and, of course, I'm encoding through the GPU at the same time I'm using it for this program, which is a lot for the poor old system. Otherwise I can do 1024x1024 images and SDXL models fine, just with longer waits.

  • @rebelgonebad
    @rebelgonebad 6 months ago +1

    Great job, very helpful

  • @SumNumber
    @SumNumber 2 months ago +1

    I bet if you use different noise patterns/types you get different results, or even an image that is not considered a pattern. Cool stuff. :O)

    • @streamtabulous
      @streamtabulous  2 months ago

      I found that with just a grey static I still get the same result; only the colour seems better on a more vibrant image. The grey worked just as well on the 4x4 in the bush.
      I think as long as there is no reference image the AI works better; if there is a reference, the AI tries to use that image as a reference.
      I have not tested noise patterns with large blocks or worm static; you bring up an interesting idea I'll have to play with.

  • @DrRoncin
    @DrRoncin 6 months ago +1

    Can you show how to set up a local ComfyUI installation and connect it to Krita? When Krita crashes, python is still running and prevents the server from being relaunched. (Killing python in Task Manager fixes the problem.)

    • @streamtabulous
      @streamtabulous  4 months ago

      I have tried; I can sort of get it working, but many features are missing. If I manage to work it out I'll do a video, but it's definitely complicated. For the leftover python process, see the sketch below.
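Not from the thread above, just a hedged sketch of the stuck-python workaround mentioned in the question: it finds leftover python processes whose command line mentions the ComfyUI server and terminates them, the same effect as ending them in Task Manager. It assumes the third-party psutil package (pip install psutil), and the "comfy" match string is a guess; adjust it to whatever the server's command line actually contains.

```python
# Sketch: terminate a leftover ComfyUI server process after a Krita crash,
# so the plugin can relaunch the server. Requires psutil (pip install psutil).
import os
import psutil

for proc in psutil.process_iter(["pid", "name", "cmdline"]):
    try:
        name = (proc.info["name"] or "").lower()
        cmdline = " ".join(proc.info["cmdline"] or []).lower()
        # Skip this script itself; only touch python processes whose command line mentions comfy.
        if proc.pid != os.getpid() and "python" in name and "comfy" in cmdline:
            print("Terminating", proc.pid, cmdline[:80])
            proc.terminate()  # same effect as ending the process in Task Manager
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass  # process vanished or is protected; ignore it
```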

  • @bigglyguy8429
    @bigglyguy8429 2 months ago

    You explain this stuff slowly, clearly and in detail, but I have the attention span of a squirrel and feel totally lost...

  • @RaynMao
    @RaynMao a month ago

    Not sure if this feature was added later, but you can simply select a region and generate, with the same result, without using a noise map.

    • @streamtabulous
      @streamtabulous  a month ago

      That's the removal feature; it was brought in later, yes.
      But this still works when the AI won't adhere to what you want and just makes other cars etc. Overall, with the newer version, this is needed less. I had to use it today with the new version on a stubborn item that would not vanish until I did this.

  • @PrometheusPhamarus
    @PrometheusPhamarus 3 months ago

    Why does the link to the plugin lead to just a PNG image, and also give a virus warning from Avast??

    • @streamtabulous
      @streamtabulous  3 months ago

      It's standard for most virus scanners to warn about anything from a cloud host. It is a PNG image; right-click and Save Image As, though there should also be a download link for the image. Otherwise any colour static will do the job: search Google Images for colour static, save it, crop it to a square and place it in the directory mentioned. See the sketch below for a way to generate your own.
      Any mess of colour seems to work; static seems best.
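A hedged sketch of the "any colour static will do" suggestion above: generate a square of random RGB noise yourself and drop it straight into the Krita patterns folder. It assumes Windows plus the third-party numpy and Pillow packages; the size and file name are illustrative.

```python
# Sketch: make your own colour-static pattern instead of downloading one.
# Requires numpy and Pillow (pip install numpy pillow); paths/sizes are examples.
import os
import numpy as np
from PIL import Image

size = 512                                                          # square, as suggested above
noise = np.random.randint(0, 256, (size, size, 3), dtype=np.uint8)  # random RGB "static"
img = Image.fromarray(noise, "RGB")

patterns_dir = os.path.expandvars(r"%APPDATA%\krita\patterns")
os.makedirs(patterns_dir, exist_ok=True)
img.save(os.path.join(patterns_dir, "colour_static.png"))
print("Saved colour static to", patterns_dir)
```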

  • @MotivationToStudy972
    @MotivationToStudy972 6 months ago

    I've installed Krita, but my GPU lacks sufficient power to run it. Is there a way to leverage Google Colab to utilize the Krita AI plugin? 🤔

    • @streamtabulous
      @streamtabulous  4 months ago

      There was, I think on the 1.10 version, an option for a rental GPU, but I don't see the option any more; there might have been issues. What GPU do you have?

  • @Amitkrdas17
    @Amitkrdas17 7 months ago +1

    Can you make a video which shows how to download and deploy different models...

    • @streamtabulous
      @streamtabulous  7 months ago +2

      Like this one, on how to put them into Krita AI Diffusion?
      Or on how to download and find models on civitai.com?
      I've been asked to do a video on how to use civitai.com, so I will be doing that very soon.
      th-cam.com/video/xh-4YUr1F4U/w-d-xo.htmlsi=auHfuYqUapyyVU8X

  • @zraieee
    @zraieee 2 months ago +1

    Well done. Please, how do I get so many styles? I have just four of them.

    • @streamtabulous
      @streamtabulous  2 months ago

      Models give styles and unique looks.
      th-cam.com/video/xh-4YUr1F4U/w-d-xo.htmlsi=6cM-039A8NZCaYt7

  • @pogiman
    @pogiman 7 months ago +1

    Does using an inpainting model work better?

    • @streamtabulous
      @streamtabulous  7 months ago +1

      It works the same as SD in Comfy, just a different UI, so I'd say yes. In the 10.1 plugin update you can use LoRA codes as well, and weighting has been fixed, so you can now prompt the same as you would in Comfy.
      I'm downloading some inpaint models to test on the Australian bush to see if they give better results. Will give feedback. They are still trained, normally for certain tasks, i.e. smiles, glasses, eyes, clothing, so it will be an interesting test.

    • @streamtabulous
      @streamtabulous  7 months ago

      Tested with various inpainting models. The inpainting checkpoint models are not detected as checkpoints but are detected in the LoRA folder, so I used the best model and linked the inpaint models with it.
      The results were the same: it needed static to correctly remove the vehicle and get the most accurate look for the whole image.
      There seems to be no difference in the look of the render with or without an inpaint model.

  • @abridnalavely3980
    @abridnalavely3980 7 months ago +1

    How is it better than generative fill when it comes to resolution?

    • @streamtabulous
      @streamtabulous  7 months ago +1

      A few reasons; I'll cover a little more than what you are asking, for others. First, Adobe trained their model on their stock library, which contains copyrighted images, so if you want something you can control this is better: you can train and use your own model, at any resolution you like. I trained mine at 1500, and that was only due to hardware limitations.
      You are not limited to one model, so you can download models from CivitAI that don't endorse copyrighted images in training. Do note that watermarks appear on some royalty-free images; even Adobe has that issue. On CivitAI there are models trained at 4K, so if you have a big system with an RTX 4090 you can use such a model and resolutions up to 8K.
      Side note: the plugin allows you to link to and use rented hardware, so even on a low-end system you can pay for that option and have far greater flexibility than Adobe with their model and its trained sizes.
      Also, because you can make your own models and use them, you can be unique in what's made. I love that above all, as I trained on my art.
      Adobe has benefits too: it is easy to use with fewer issues when anything goes wrong, though it is less powerful, and it has Adobe support. It is also trained on and uses Stability AI for the AI, just like Stable Diffusion, which is made by Stability AI, so you can expect Adobe to go to 4K and 8K soon enough. Adobe currently has a better language engine attached to their model, so responses are more accurate, though Adobe runs an AI that checks prompts, which affects what you can do and causes confusion, e.g. "fix skin" is seen as something lewd, at least when I was using it to fix photos in November.
      Both have their pros and cons; for me this is better because I control it.
      But note again, with high resolutions you do need the hardware or a rented GPU in the cloud; as I mentioned, the cost of an Adobe subscription can very easily get you that.
      I know this is long-winded, but it covers a few things that I feel are important to understand.
      PS: on my hardware I don't mind sacrificing resolution down to 512x512 models. The reason is that I have an old purchase of Topaz Gigapixel AI upscaler, so once I have done restorations etc. I run them through the AI upscaler to 4K, 8K and so on; I had to do this with Adobe too.
      There are free AI upscalers available and I recommend looking at them.
      PS:
      I have to lower the resolution because I'm only using a GTX 1070 and, of course, I'm encoding through the GPU at the same time I'm using it for this program, which is a lot for the poor old system.

    • @streamtabulous
      @streamtabulous  7 months ago +1

      Short answer: if you don't have the hardware for the larger resolutions, save your money and use a free AI image upscaler. Try it and you'll be amazed.

  • @ZiggyDaMoe
    @ZiggyDaMoe 7 months ago +3

    Krita = crayon in Swedish. :) "Kreeta" is closer to the correct pronunciation than "Kritta".

    • @streamtabulous
      @streamtabulous  7 months ago

      I'm working on it. The neurological dyslexia keeps throwing me off, especially as Mum is Rita, so my brain does that with the K... I do a video and then realise my brain has made the error.

    • @ZiggyDaMoe
      @ZiggyDaMoe 7 months ago +1

      Things perpetuate, like the Jif/Gif confusion. I know the ways people pronounce things are different all over the world. I feel like if my name was "Kreeta" but unfortunately I spelled it "Krita", I would want you to call me "Kreeta". At first I didn't know it was Swedish, and without knowing, I thought it was "Krita". Oh well. I have really enjoyed all the in-depth information you have provided in your videos. I have gotten so much farther than I would have on my own. You showed me things that are fantastic.
      Thank you. @@streamtabulous

    • @streamtabulous
      @streamtabulous  7 months ago

      @@ZiggyDaMoe Hopefully I will get another little video up tomorrow.
      I'm enjoying doing the series on it. Be sure to check out the new video on the snap selection tool. So, so handy.

  • @LordOfThunderUK
    @LordOfThunderUK 7 months ago

    Wow, so Filmora like!!!!

    • @streamtabulous
      @streamtabulous  7 months ago

      My editing skills suck. I use Movavi for video editing, cheap for my poor budget and basic to use, not Filmora. But it's very basic, and having dyslexia affects me with complicated drop-down menus, sub-menus and complicated UIs, so I need something simplistic. I'm definitely no pro 😢😂

  • @streamtabulous
    @streamtabulous  7 months ago +2

    An RTX 3060 8GB generates as fast as the Adobe cloud, faster in some cases. My mate's RTX 3060 12GB is even faster than that, and an RTX 4080 takes 2 seconds, which is much, much faster than Adobe. AND again, for what you pay per year you could easily buy one of these cards and have more control than what they offer, and again NO limitations on generations.
    Plus play high-end games and more. It's a no-brainer to put your money into your own system rather than give it to Adobe for a limited service.
    Their secret sauce is a better auto selection in the background, which results in better blending, and an auto static in the background so the AI doesn't concentrate on the information in the selection. Plus their model is very large, over 400TB of training images. BUT their model is still extremely limited, and the ability to use your own models etc. is far, far better.

  • @BRUTALKING
    @BRUTALKING 7 months ago

    Can you show how to install it on macOS?

    • @streamtabulous
      @streamtabulous  7 months ago

      I don't own a Mac to do that, I'm afraid, but on the Reddit there is a person that has done it; they linked it to ComfyUI with these:
      github.com/Acly/krita-ai-diffusion/blob/main/doc/comfy-requirements.md

    • @mhavock
      @mhavock 7 months ago +1

      LOL how will you render on a mac without a video card?

    • @streamtabulous
      @streamtabulous  7 months ago

      @@mhavock If you go to the plugin's GitHub you will see Mac support has been added. There is something called MLC, machine learning code, that interfaces differently, so it doesn't need CUDA. I wonder when there will be a Tensor-core option for Nvidia RTX users, as those cards are made for AI use.

    • @BRUTALKING
      @BRUTALKING 7 months ago

      @@mhavock Then why does Krita have a macOS version shown on their website?

    • @mhavock
      @mhavock 7 months ago +1

      @@BRUTALKING Krita is a painting app; the plugin he is showing is from separate developers. The plugin may run on a Mac, but without hardware acceleration (like a video card) it will be A LOT slower.

  • @UltimatePerfection
    @UltimatePerfection 21 days ago

    It's not "Krayta", it's "Kreeta"

    • @streamtabulous
      @streamtabulous  20 days ago

      Neurological dyslexia affects my speech, grammar and spelling. It's not deliberate; I know it triggers people, and I'm sorry for that. It's why one second I pronounce a word differently to how I said it seconds before: frontal-temporal lobe damage, likely from a car accident when I was 5 years old, when my seatbelt ripped and I went through the car window. It's also why part of my face and mouth doesn't move fully.
      I often apologise for this issue throughout many videos.

  • @welbot
    @welbot 7 months ago

    The "secret sauce" of Adobe's speed, is simply that the processing of it is done in the cloud, which is why you need to pay credits for it.

    • @streamtabulous
      @streamtabulous  7 months ago +1

      No it's not... An RTX 3060 8GB generates as fast as their cloud. My mate's RTX 3060 12GB is even faster than that, and an RTX 4080 takes 2 seconds, which is much, much faster than Adobe. AND again, for what you pay per year you could easily buy one of these cards and have more control than what they offer, and again NO limitations on generations. Plus play high-end games and more. It's a no-brainer to put your money into your own system rather than give it to them for a limited service.
      Their secret sauce is a better auto selection in the background, which results in better blending, and an auto static in the background so the AI doesn't concentrate on the information in the selection. Plus their model is very large, over 400TB of training images. BUT their model is still extremely limited, and the ability to use your own models etc. is far, far better.

    • @streamtabulous
      @streamtabulous  7 months ago

      PS: read the pinned post for more.

    • @welbot
      @welbot 7 months ago +1

      @@streamtabulous Well yeah... I didn't mean to single out speed per se as the main point, rather the overall fact that it's done in the cloud using Firefly and a huge training set, and from what I've seen it tends to produce better results with less effort in many situations, as I think they use a fairly large number of steps for the generations (which, for most average users, does make it quicker in the long run).
      The speed locally can vary a lot depending on what models you're using, though. I have tested some on my 3080, and it can take up to 2 mins to generate; some will do it in seconds. I just found out about SDXL Turbo today though, and it can generate images locally in about 200ms! 😂
      With regard to the selection stuff, I haven't tried it myself yet, but I've watched friends do it, and they were working with much cleaner images, so selection issues weren't really apparent.

    • @welbot
      @welbot 7 months ago +1

      @@streamtabulous Totally agree that having to pay per gen is shit. Especially given that, to even use it, you have to pay them a monthly sub as well!

    • @streamtabulous
      @streamtabulous  7 months ago +1

      @@welbot 2 min, ouch. Are they high-resolution images? The GTX 1070 8GB did not take much longer than the cuts in the video; as soon as I stop recording it flies much faster, it's just OBS encoding through the GPU while I'm using it. Make sure you have Microsoft Visual Studio and the CUDA toolkit installed. I mentioned those in the video on speeding up A1111 for Nvidia users, but it works across the board.
      Yeah, there are definitely faster models; some are slow. The one I trained in this video is massively slow because I used an old AI training set, but I have newer SDXL ones that are much faster, and if you set LCM (LMC... dyslexia) they work faster, though used on, say, a model with built-in LCM it will be a mess.
      I'll have to do the video on models, as they're so complicated.
      My favourite models are:
      Absolute Reality
      Epic Realism
      Robot Chillout
      RPG
      Uber Realistic (that's a not-safe-for-work one, but the training set spans multiple styles)
      OpenDalle (yes, that's the Dall-E-style model; you need to get it from Hugging Face, it's a very fast SDXL)
      Playground AI (again Hugging Face, the core model of the online AI Playground)
      Colorful
      Reality Check
      SDXXL
      Which sampler and settings you use plays a massive part in speed. Again, I'll have to do a video on that.
      Adobe is handy, especially if the feature is on iOS. But overall I like these tools more, and putting the money into my own system. That's my goal for the end of next year: upgrade the GPU to an RTX 3060. I have one in an HTPC for gaming on the TV, and I have used and tested all these programs on it; they're so much faster it's insane.
      Again, CUDA and VS installed.