ComfyUI Tutorial - Live Painting Module - Photoshop+ComfyUI
- Published 22 Jul 2024
- Heya, this tutorial is all about how to create a live painting module using Zfkun's screen share node, which lets you use a screen capture as a live image source. This allows you to integrate ComfyUI with Photoshop, Flash, Illustrator, or any other program, including games and movies, in near real time using LCM or other SD models and sampler setups.
Workflow: drive.google.com/file/d/1D2oG...
Discord: Join the community, friendly people, advice and even 1 on 1 tutoring is available.
00:00 - Intro
01:00 - Workflow Walkthrough
10:54 - Live Painting Module Building
18:52 - Using the Live Painting Module
That was really next level. Impressive!
I've got no clue how you learned it so well, but your skills are unmatched
experimentation and reading obscure reddit posts
I enjoyed watching your amazing tutorial, from which I learned a lot. I really appreciate the effort you put into these tutorials.
Hey, this is my first comment, so I'm happy I'm commenting here. I will be watching all of your videos, thank you so much for doing what you are doing! I will find this immensely helpful.
Thank you for the tutorial
Need to try. Thanks!
that was awesome, thanks brother
Super, was just looking for something like this. Thank you so much🙏
thanks a lot Ferniclestix..Love your lessons...!!!! please continue 😀
Thanks for sharing your workflow, great tutorial, very useful and well explained
Thank you. I really like your videos.
Great idea. Thanks.
So amazing!
Great. I am going to try to connect a digital camera pointed at a paper sketch.
sounds awesome :D
Thanks for your enlightening tutorials,
I think the image input on the screen share node is for an image-to-image influence on what we paint, hence the weight it can have as an influence. Just speculating, though it really looks like it can be used that way. Thanks again for this, have a great 2024! :)
Yeah, need to look into this node some more when I have some time :D
@@ferniclestix I put a second KSampler in the line after the first one with the low denoise. I piped the latent from the KSampler with the LCM sampler into the second one with a higher denoise, using a different sampler (any will do) at a denoise of 0.77, and I'm getting nice-ish pictures from my garbage drawings now. Thank you so much for this process, really simple and powerful! :)
I use an up/downscaler now: go up to 4x with a model and back down to 1.5x on the second half of this workflow :P Also plugged IPAdapter into it, might be time to do a tutorial on that :P
@@ferniclestix Sounds like a great idea :D Have a good weekend!
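The two-pass chain described in the comments above can be sketched in ComfyUI's API-format workflow JSON. This is an illustrative assumption, not the commenter's exact graph: node IDs, step counts, and CFG values are made up, and nodes "0", "10", "11", and "20" stand in for the model, prompts, and latent source.

```python
# Sketch of the two-KSampler refinement chain: a fast LCM pass for live
# painting, then an ordinary sampler at denoise 0.77 fed the LCM latent.
import json

def two_pass_chain(lcm_denoise=0.4, refine_denoise=0.77, seed=1):
    """Build an API-format graph where sampler "2" refines sampler "1"."""
    graph = {
        "1": {"class_type": "KSampler",
              "inputs": {"seed": seed, "steps": 6, "cfg": 1.5,
                         "sampler_name": "lcm", "scheduler": "sgm_uniform",
                         "denoise": lcm_denoise,
                         "model": ["0", 0], "positive": ["10", 0],
                         "negative": ["11", 0], "latent_image": ["20", 0]}},
        "2": {"class_type": "KSampler",
              "inputs": {"seed": seed, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": refine_denoise,
                         "model": ["0", 0], "positive": ["10", 0],
                         "negative": ["11", 0],
                         # the latent comes from the LCM pass, node "1"
                         "latent_image": ["1", 0]}},
    }
    return graph

chain = two_pass_chain()
print(json.dumps(chain["2"]["inputs"]["latent_image"]))  # ["1", 0]
```

The key wiring is the second sampler's `latent_image` pointing at the first sampler's output, so the refiner denoises the rough LCM result instead of starting from the raw drawing.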
Amazing... Thanks
You are a Genius
Creds should go to Zfkun imo, that node is awesome :D
I have a bunch of RAM, so I like to create a RAM drive to store all the output images from these things, especially if it's for animation, which creates hundreds of temporary images. That way my actual hard drive doesn't get worn. Just make sure to move what you want to keep before you reboot.
Yeah, this stuff does wear an HDD a bit. The trick is to have a really big HDD lol.
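The "move what you want to keep before you reboot" step from the RAM-drive tip above could look something like this. The function name and glob patterns are illustrative; the RAM-drive path would be wherever you mounted it (e.g. a tmpfs on Linux) and pointed ComfyUI's output directory at.

```python
# Copy keeper images out of a volatile RAM-drive output folder before a
# reboot wipes it; temporary animation frames only ever touch RAM.
import shutil
from pathlib import Path

def rescue_keepers(ram_dir, keep_dir, patterns=("*.png", "*.webp")):
    """Copy matching images from ram_dir into keep_dir; return their names."""
    keep = Path(keep_dir)
    keep.mkdir(parents=True, exist_ok=True)
    saved = []
    for pattern in patterns:
        for f in Path(ram_dir).glob(pattern):
            shutil.copy2(f, keep / f.name)  # copy2 keeps timestamps
            saved.append(f.name)
    return sorted(saved)
```

Run it once at the end of a session; everything not copied disappears with the RAM drive on reboot, which is the whole point.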
Great tutorial series!
A lot of good and advanced information.
Is it possible you could make a few tutorials we can follow from scratch?
From nothing to a final image, doesn't have to be too complex!
My earlier tutorials are like this. The reason I don't really do it fully now is I've built that starter beginner workflow like 900,000 times by now lol, I kinda end up repeating myself. But, eh, yeah, maybe, I'll see.
Very cool!
Fascinating. I wonder if Adobe is bright enough to build this into Photoshop... judging by Firefly, probably not. Not sure it will work on my Mac... the OS will probably have a fit!
Should work on a Mac I think, as its features are mostly web based, but it's been years since I messed with Macs.
Thanks for the cool video. How can I make generation happen only after the picture changes? I turn on auto queue and the change flag, and generation runs constantly, but I only need it after the picture changes.
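One hypothetical way to get the behavior asked about above, sketched outside ComfyUI: instead of relying on auto queue, a small driver script hashes each captured frame and only submits a new generation when the hash differs from the previous one. The class and its use here are illustrative; the actual submission would be whatever queues your workflow (e.g. a POST to ComfyUI's `/prompt` endpoint).

```python
# Gate generation on actual image changes by hashing frame bytes.
import hashlib

class ChangeGate:
    """Remembers the last frame's hash; fires only when it changes."""

    def __init__(self):
        self._last = None

    def should_queue(self, frame_bytes):
        digest = hashlib.sha256(frame_bytes).hexdigest()
        changed = digest != self._last
        self._last = digest
        return changed
```

An identical frame captured twice in a row produces the same digest, so `should_queue` returns False and nothing is queued until the picture really changes.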
Great vid! I have a question: is there any way to create stuff over a background and render it to a transparent layer, to composite later in another application? I need this mainly because I'll later need to remove and re-add the object (in the same place). Thanks again for your fantastic vids!
There is no easy way to render to a transparent layer. The best you can do is use something like Segment Anything, CLIPSeg, or rembg to cut things out of another image.
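Whichever segmenter produces the mask (Segment Anything, CLIPSeg, rembg, ...), the final cutout step the reply describes is the same: keep the subject's pixels and make everything outside the mask transparent. Here it is in miniature, with pixels as plain (R, G, B) tuples and a 0/1 mask purely for illustration; a real pipeline would do this on image arrays.

```python
# Combine an RGB image with a binary mask into an RGBA cutout:
# alpha 255 inside the mask, alpha 0 (fully transparent) outside it.
def apply_mask(rgb_rows, mask_rows):
    rgba = []
    for rgb_row, mask_row in zip(rgb_rows, mask_rows):
        rgba.append([
            (r, g, b, 255 if m else 0)
            for (r, g, b), m in zip(rgb_row, mask_row)
        ])
    return rgba
```

The resulting RGBA layer can then be saved as a PNG and composited (and later removed and re-added in place) in any editing application.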
Hi friend. I really like your advanced lessons. I've been using Automatic1111 for about a year and now I've decided to switch to ComfyUI.
I am interested in such a question:
How to automate individual parameters or even entire nodes depending on the queue (batch count).
Let's say I want to make multiple XYPlots (checkpoints and samplers) on different seeds. Or say I want to generate 10 batches with one set of nodes, and then 10 more, connecting several more nodes or changing some values in the existing ones.
I only know that there are several options for changing, for example, the seed (increase, decrease, random). But maybe you can set other algorithms through some simple mathematical formulas. It seems there are nodes like counters, but I haven't figured out exactly how to use them for these tasks. Thank you in advance🙏
I am not great with math.
That said, I think WAS Suite and some of the others have all kinds of math nodes. Using these you can break values out of your various samplers and use things like WAS Suite's counter node (it adds 1 each time the workflow gets queued).
Batch repeat nodes can be used to set a batch value midway through your workflow.
I do some of this kind of stuff in my SDXL tutorials, where I switch values from ints to floats and so on, which is basically what you want to be doing.
You can get schedulers from animation node packs; these let you basically decide when in the workflow to input certain information. It's really a matter of duct-taping things together to get it to work.
Anyway, my SDXL tutorials do some of this kind of thing (changing resolution with some math), but that's about the extent of what I do, really.
@@ferniclestix Thank you for such a detailed answer. I haven't watched all your lessons yet, and apparently I haven't reached the right examples. Yes, maybe the animation packs have what I need, but I haven't installed them yet, because I haven't studied all the main nodes well enough. But thanks again!
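The counter idea in the answer above (a node that adds 1 each time the workflow is queued) can be sketched in plain Python: derive the seed and any other parameter from the run counter, so the first 10 runs use one configuration and the next 10 another. The function, base seed, and denoise values are illustrative assumptions, not any node pack's actual behavior.

```python
# Map a queue counter to per-run parameters: seed increments every run,
# and the denoise switches after run 10, mimicking "10 batches with one
# setup, then 10 more with changed values".
def params_for_run(run_index, base_seed=1000):
    seed = base_seed + run_index
    denoise = 0.5 if run_index < 10 else 0.8
    return seed, denoise
```

In ComfyUI terms, the counter node's output would feed math nodes computing exactly this kind of mapping, with the results wired into the sampler's seed and denoise inputs.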
Hello, I am new to ComfyUI. Do you have a module that generates quality images like Midjourney? I tried ComfyUI but I can't get results without deformed hands or faces. BTW, I like watching your tutorials; I'm already subscribed.
Fixing deformed hands and faces is a bit of a drawn-out process in ComfyUI. You generally have to build complex workflows that include a refinement step, like Impact Pack detailers or various ControlNets coupled with hand and face detectors, to get those done.
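Not the Impact Pack's actual code, but the geometry at the heart of the detailer step mentioned above is simple: take the detector's bounding box around a face or hand, pad it outward for context, and clamp it to the image, so that region can be re-sampled at higher resolution and pasted back. A minimal sketch, with the padding factor as an assumption:

```python
# Expand a detection bbox by a fraction of its size, clamped to the image,
# producing the crop region a detailer would re-render and paste back.
def padded_crop_box(bbox, image_size, pad=0.25):
    """bbox and result are (x0, y0, x1, y1); image_size is (width, height)."""
    x0, y0, x1, y1 = bbox
    w_img, h_img = image_size
    dx = int((x1 - x0) * pad)
    dy = int((y1 - y0) * pad)
    return (max(0, x0 - dx), max(0, y0 - dy),
            min(w_img, x1 + dx), min(h_img, y1 + dy))
```

The padding matters because re-rendering the face with a little surrounding context blends far better than sampling the tight detection box alone.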
Hey, I am using ComfyUI in my browser with Google Cloud Compute and I get a permission error when trying to share my screen. Is there any way to fix this?
I got some errors about enabling CUDA. When googling, there was advice to install Fooocus. I'm a noob, so... is this possible? Thanks.
Unfortunately it's really tough for me to debug ComfyUI installations, as there are so many ways they can break, from failed installs to users having weird graphics cards or node combinations.
My advice is to ask over on the ComfyUI reddit if you need assistance, making sure you include screenshots of any errors and a detailed explanation of what's wrong.
use controlnet...
Heh, yes, you can plug ControlNet into this workflow pretty easily too. Takes like 20 seconds to set up.