Professor Lich
Australia
Joined 19 Oct 2019
ComfyUI Crash Course 2024 (Part 3 of 3)
💡 Video covers:
- Traditional Upscaling Algorithms
- AI Upscaling
- Latent Upscaling
- Input Nodes
Links to resources:
- Civitai (workflow link): civitai.com/articles/9102
- CyberRealistic: civitai.com/models/15003
👍 If you found this helpful, please Like & Subscribe. Any donations are warmly accepted below:
- www.patreon.com/professorlich/
- ko-fi.com/professorlich
Views: 234
Videos
ComfyUI Crash Course 2024 (Part 2 of 3)
Views: 759 • 19 hours ago
💡 Video covers:
- Traffic Cones
- SEGS Education
- Workflow Execution
- Noise Modes differences
- Weight Normalization differences
Links to resources:
- Civitai Link: civitai.com/articles/8893
- CyberRealistic: civitai.com/models/15003
👍 If you found this helpful, please Like & Subscribe. Any donations are warmly accepted below:
- www.patreon.com/professorlich/
- ko-fi.com/professorlich
ComfyUI Crash Course 2024 (Part 1 of 3)
Views: 1.6K • 14 days ago
Welcome to ComfyUI Crash Course! You may be familiar with Automatic1111, but are you ready for a deep dive into ComfyUI?
💡 Covered in video:
- text2image
- outpainting
- inpainting
- detailing
🎬 If you wish to replicate my steps, be sure to download the resources below:
- Masks: civitai.com/articles/8730/comfyui-crash-course-2024
- CyberRealistic v5: civitai.com/models/15003?modelVersionId=537505
- C...
Getting Started with IP Adapter (2024): A1111 and ComfyUI
Views: 719 • 14 days ago
Welcome to the Computer Lab component of Topic 3, where we get started using IP Adapter with 'stable-diffusion-webui' (aka 'Automatic1111') and 'ComfyUI'.
- Accessing IP Adapter via the ControlNet extension (Automatic1111) and the IP Adapter Plus nodes (ComfyUI)
- An easy way to get the necessary models, LoRAs and vision transformers using a downloadable bundle
- Using IP Adapter in Automatic1111
- Comf...
What is IP Adapter? (Autumn, 2024)
Views: 1.1K • 2 months ago
Welcome to the Topic 3 lecture of our series on Stable Diffusion and Artificial Intelligence! In this video, we'll explore IP Adapter, an innovative technique for using image prompts to generate consistent, high-quality visuals in AI art. This short video covers:
🔹 What is IP Adapter
🔹 The Decoupled Cross-Attention mechanism
🔹 Differences from classic 'image-to-image'
🔹 Matching IP Adapter with Visu...
Stable Diffusion 101 - Topic 2: Models and Platforms (Computer Lab)
Views: 213 • 2 months ago
Website: 🔹 lichacademy.org
GitHub: 🔹 github.com/LichAcademy/Lich-Courses
Support me: 🔹 www.patreon.com/professorlich/ 🔹 ko-fi.com/professorlich
Key Highlights:
🔹 AI tips
🔹 Essential WebUI settings
🔹 Browser Differences
🔹 Resolution Numbers for Convenience
🔹 Folder Structures
If you find this video helpful, don't forget to like, comment, and subscribe. Share your suggestions for future topics in the comm...
Stable Diffusion 101 - Topic 2: Models and Platforms (Lecture)
Views: 112 • 3 months ago
Alternative title: "Fantastic Models and Where to Find Them"
- A tiny glimpse into the inner workings of Stable Diffusion: not too much to scare anyone away, but enough to introduce Variational Autoencoders (VAE) and CLIP.
- A quick recap of foundation models, from 2022 until today.
github.com/LichAcademy/Lich-Courses
www.patreon.com/professorlich/
ko-fi.com/professorlich
Stable Diffusion 101 - Topic 1: Fundamentals (Computer Lab)
Views: 175 • 4 months ago
Introduction to the course, and overview of some basic principles. github.com/LichAcademy/Lich-Courses www.patreon.com/professorlich/ ko-fi.com/professorlich
Stable Diffusion 101 - Topic 1: Fundamentals (Computer Lab) (in Japanese)
Views: 27 • 4 months ago
Introduction to the course, and overview of some basic principles. github.com/LichAcademy/Lich-Courses www.patreon.com/professorlich/ ko-fi.com/professorlich
Stable Diffusion 101 - Topic 1: Fundamentals (Computer Lab) (in Chinese)
Views: 23 • 4 months ago
Introduction to the course, and overview of some basic principles. github.com/LichAcademy/ComfyUI-Lich-Pack www.patreon.com/professorlich/ ko-fi.com/professorlich
Stable Diffusion 101 - Topic 1: Fundamentals (Lecture)
Views: 317 • 4 months ago
Introduction to the course, and overview of some basic principles. github.com/LichAcademy/Lich-Courses www.patreon.com/professorlich/ ko-fi.com/professorlich
Stable Diffusion 101 - Topic 1: Fundamentals (Lecture) (in Japanese)
Views: 71 • 4 months ago
Introduction to the course, and overview of some basic principles. github.com/LichAcademy/Lich-Courses www.patreon.com/professorlich/ ko-fi.com/professorlich
AI Summer School Lecture 3 (Lecture 3 in Chinese Mandarin)
Views: 88 • 4 months ago
Summer School curriculum on all things AI, coding and Machine Learning. Like, subscribe, and comment. :) github.com/LichAcademy/ComfyUI-Lich-Pack www.patreon.com/professorlich/ ko-fi.com/professorlich
ComfyUI Lecture 3 - Custom Nodes Part III
Views: 319 • 5 months ago
Summer School curriculum on all things AI, coding and Machine Learning. Like, subscribe, and comment. :) github.com/LichAcademy/Lich-Courses/ www.patreon.com/professorlich/ ko-fi.com/professorlich/
ComfyUI: Making Your Own Custom Nodes
Views: 1.5K • 5 months ago
PF2e Remastered Lecture 05: Immunity, Weakness and Resistance
Views: 66 • 10 months ago
PF2e Remastered Lecture 01: Basic Mechanics
Views: 105 • 10 months ago
PF2e Remastered Lecture 03: Defending
Views: 21 • 10 months ago
PF2e Remastered Lecture 02: Attacking
Views: 48 • 10 months ago
You’re a phenomenal teacher thank you so much. I would really like to see content that covers video and animation.
Thank you for bringing the theory. I've been dying for content of this nature. Amazing work.
For users of ComfyUI: Your bundle of models and LoRAs strikes me as the perfect complement to YanWenKun's version of ComfyUI Portable with preinstalled nodes. The 3-part zip file on YanWenKun's GitHub gives me portable Comfy with insightface etc. installed and dependencies resolved - without the models. Your IP Adapter bundle supplies all the relevant and up-to-date IP Adapter models, which are a pain to sort out.
"(Part 3 of 3)" ---> Sniff. I have really enjoyed your videos so far - you sure there is no way we can bribe you to keep on going? Though I'm sure you have your reasons. Making these videos is very time consuming, and there has to be some pay-off on the horizon. Then again - if ComfUI should pull it off to become the Blender of AI image and video generation, investing into the tedious and painfully SLOW process of increasing follower numbers now might pay off in the future, "the first ones become the big ones". Easy for me to say, since I don't have to put in the work :)
"I have deliberately left putting output nodes into the workflow, because you can use it to control how far the workflow gets executed." (paraphrasing) --> So obvious now that I heard you spell it out - and yet, I hadn't thought about that before
What is the minimum GPU VRAM required to run ComfyUI, bro?
A 4-6 GB Nvidia card, though it depends on what you're actually going to do.
Since there is so much to learn, I feel like I am kind of swimming, in so many respects. I kind of improvise my way through. I had noticed that Comfy and Forge produce different images, but I had no idea why. I have no idea where Comfy actually gets its images from when using the standard "Load Image" node. (With "Load Image From Path", it is clear.) For example, when I use images as reference, like when using ControlNet - ideally, I would want to be able to reproduce the exact same output I got, just by throwing in the generated image. Somehow, at the moment, it seems to have no trouble finding those images. If you asked me now how it does this, and what I would need to do to make sure it will still be able to do this a year from now, I would have to say "I have no idea". The list goes on and on. I can use Comfy, and I absolutely love using it, but there are so many fundamental questions I don't know the answer to.
Your videos are so refreshingly... different. Introductory AND deep, at the same time. Both videos that I have watched so far gave me answers to a number of questions that had been lingering in the back of my mind for quite some time. Great stuff, I'm hooked!
Segs education had me dying LOL.
Ha. I found it. In the sped-up tidying-up part, you connected the inpainting pipe to the KSampler for the inpainting. I had connected the first pipe further... This is what I wrote first: (It is funny how I get a different image than you after inpainting into the outpainted image. My new part of the kitchen looks very different from yours. Strange.) A very good tutorial.
I am glad it worked out. I realise now that I have sped up the video too fast at times. In fact, it was your comment that gave me the idea of inventing these 'Ikea-style workflows' (see Part 3). Hopefully, this will allow me to deliver content with speed, without people getting lost. Thank you for your kind words. Let me know how you find the new approach. 👍
I didn't expect that song at the end. Nice!
Your videos are amazing Prof. I just subscribed. Thanks for this.
This looks like a very down-to-earth educational channel with a great approach to teaching. Subscribed! :)
I am sub #300! I learned a lot - I've been trying to get good with ComfyUI for a few weeks. Thank you.
Out of the 40+ (often very good) ComfyUI videos I've watched, this is by far the best introduction, combining an excellent overview with varied practical examples and useful tips and tricks - all in a brief and well-structured format. Excellent work. Looking forward to more from you.
Completely agree. The number of shitty videos made by AI talking about making AI images is too damn high. This is an excellent video.
I absolutely loved watching this, so much valuable information in one swift "painless" go :) Now I know how to do outpainting and inpainting! I know about the ToBasicPipe node, and I know how to increase the number of suggestions when dropping a cable into the empty canvas. No time wasted, super useful stuff, presented in a super clear, easy to understand way. Thanks!
Thank you for your kind words. ❤️ Leaving a comment boosts the visibility of the video, and my own morale to keep making more. Part II of the Crash Course is coming up. A 2-hour recording has been reduced to 15 minutes; I am nearly done editing it. 😅 It will be up within the next 24 hours. 👍
Fantastic, thank you!
One thing to keep in mind: even if you think someone else explains a topic better than you, just be aware that, to the watching individual, you might make more sense. Happens to me all the time. Love your take on topics. Thanks for posting!
Nice. Thank you.
You flip the slides too fast.
Thx for the summary.
As a self-proclaimed pun aficionado: all good choices. Why choose, when you have the undead academic angle cornered?
THANK YOU - straightforward explanations; a lighthearted and informative style; a logical progression of concepts; and clear, no-nonsense examples that don't insult my intelligence or flash by in a cloud of jargon or tangents. It's unfortunate it has taken me this long to find anything remotely resembling a non-biased, non-clickbait, non-"hey look at me and the cool dumb thing I did" take on all this, because I think it is valuable information that artists and non-artists alike just want to explore and learn, like with anything else. TYTYTYTYTY... Please keep posting content!
The hideous, obviously AI-generated thumbnail...bleh.
AI-generated? It's far worse than that, I'm afraid. It's Adobe Photoshop. ❤️
@professorlich I have been using Photoshop for 32 years. I'm also a machine learning developer. You may have run the model output through Photoshop, but it didn't successfully cover up where you got it.
@KAZVorpal Shame using Photoshop for 32 years didn't teach you any class... bleh
Indeed, I did exactly as you said. There is no 'cover up'. In my earlier message, I was just trying to be funny (and failed miserably). 😅 Thumbnail is new, so thank you for the feedback. I will try and do better next time. 👍
@Tyrell I only used Photoshop for about 1 year (or so). Regardless, you are probably right regarding class, I will try to do better next time, perhaps less swearing and more professional. 👍 (Edit. Sorry, Tyrell, just realised you were not addressing me.)
1:54 The problem is that human language implies humans' bodily experience. An LLM does not and cannot have such experience; all it has is a mishmash of dictionary indexes on top of a picture library. Therefore, all efforts to explain to the machine the difference between these two pictures are futile.
I have followed everything and triple-checked, but when I try to open SD through the shortcut I get: "No Python at '"C:\Users\...\AppData\Local\Programs\Python\Python310\python.exe'". I installed Anaconda and SD on different drives - could that be causing the issue?
Possibly. If your Anaconda is on a different drive, try this:
1. Click on the Start Menu (Windows Key) and start typing 'Anaconda Prompt'.
2. Instead of clicking on it, right-click on it, and click 'Open File Location' (the option under 'Run as Administrator' on Windows 11).
3. You will see a folder with several shortcuts: Anaconda Prompt, Anaconda PowerShell, Anaconda Navigator, etc. Right-click on Anaconda Prompt and select Properties.
4. In 'Target', you should see something like this:
%windir%\System32\cmd.exe "/K" C:\Users\professorlich\anaconda3\Scripts\activate.bat C:\Users\professorlich\anaconda3
My trick with shortcuts is essentially a modification of the above. %windir%\System32\cmd.exe is the path to the regular Command Prompt executable (%windir% is a shortcut for C:\Windows). Pay attention to the second path: C:\Users\professorlich\anaconda3\Scripts\activate.bat. If you have installed Anaconda on drive D or E, this will be different.
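For example - purely hypothetically, assuming Anaconda was installed to D:\anaconda3 - the Target line would become (adjust both paths to your actual install location):
    %windir%\System32\cmd.exe "/K" D:\anaconda3\Scripts\activate.bat D:\anaconda3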
By the way, this is one of my first YouTube guides. Shortly after making it, I realised I needed to explain better what is going on, rather than getting people to blindly follow the recipe. To address this, I have made an update to this guide. See below (skip ahead): th-cam.com/video/RDuIeuOIB7s/w-d-xo.html The updated video has better explanations and visual aids, so that people can develop a sense of what happens when they execute these commands. 👍
@professorlich Thanks for trying to help; I just ended up placing everything on the same drive and it was fine. Will be checking the updated guide though.
Ohh here it is, time to grab a tea
Still waiting for new videos, my friend. Flux is amazing - I would love to understand how it works and what you can do with it. ComfyUI is getting amazing updates too.
Thank you, Kobe. The next video will discuss the architecture of SD1.5. In truth, pulling apart the architecture and grappling with the basics is the reason it's taking so long. I am, after all, breaking into an academic field that is entirely new to me. Still, you would be surprised how much stuff is in there that we don't talk about. The next video will lay an important foundation for us to build upon. In short: expect something by the end of this week at the latest. 👍
@professorlich Looking forward to it :)
Excellent! :) Being able to write even simple nodes for ComfyUI could be really useful!
Hello sir. Is it possible to create a UI based on a custom ComfyUI workflow? I mean, is it possible to do this:
1. Create and test ComfyUI workflow A (this workflow will run custom nodes to remove the background from an image).
2. Build an executable for Linux, Mac, or Windows (which would still work even when no ComfyUI is installed). When run, it will have a UI like in this example: a button which, when pressed, removes the background of the images in a specified folder.
Or... rather than an individual executable, what about remotely triggering ComfyUI workflow A via CLI, from a client computer to a ComfyUI server?
git clone remoteComfyUI
sh ./remoteComfyUI.sh -d "~/Pictures/removeBackgrounds" -server 192.168.255.255 -verbose
192.168.0.88 ~/Pictures/removeBackgrounds/testimg1.jpeg sent
192.168.0.88 ~/Pictures/removeBackgrounds/testimg2.jpeg sent
192.168.0.88 ~/Pictures/removeBackgrounds/testimg1.png sent
192.168.0.88 ~/Pictures/removeBackgrounds/testvid.mp4 sent
192.168.255.255 processing ~/Pictures/removeBackgrounds/testimg1.jpeg success
192.168.255.255 processing ~/Pictures/removeBackgrounds/testimg2.jpeg success
192.168.255.255 processing ~/Pictures/removeBackgrounds/testimg1.png success
192.168.255.255 processing ~/Pictures/removeBackgrounds/testvid.mp4 failed (not an image)
saved into "~/Pictures/removeBackgrounds/output"
The idea was to help folks at home, and maybe a neighbor, who want to do just 1 or 2 specific things with their pictures... or they can just keep using subscription-based Canva 🙃
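For what it's worth, the second idea is close to what ComfyUI's built-in HTTP API already allows: you can queue a workflow on a remote server by POSTing it to the /prompt endpoint. A minimal sketch, assuming a stock ComfyUI server on port 8188 and a workflow exported via the "Save (API Format)" option - the filename and server address below are made up for illustration:
    import json
    import urllib.request

    # Load a workflow graph previously exported from ComfyUI
    # using "Save (API Format)" (hypothetical filename).
    with open("remove_background_api.json") as f:
        workflow = json.load(f)

    # Queue the workflow on a remote ComfyUI server (default port 8188).
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://192.168.255.255:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The response includes a prompt_id, which can be used to
        # poll the server's /history endpoint for the results.
        print(json.load(resp))
A thin CLI wrapper around this (loop over a folder, upload each image, collect outputs) would behave much like the sketch above.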
Great videos, great quality content! I was wondering: is it possible to create a custom node that uses code requiring a different Python version? I want to create a cool node, and I have working Python code already, but it only works on Python 3.7 because it needs TensorFlow 1.5. Is this even possible to accomplish in ComfyUI - maybe by having the custom node create a virtual environment?
I won't say it's impossible, but... I personally wouldn't know how to do it. When it comes to TensorFlow (Google) vs PyTorch (Meta), my impression is that PyTorch is "winning", as of this writing (2024). Perhaps due to greater adoption by academics? I don't know. What made you use TensorFlow, if you don't mind me asking? If you are looking at building a front-end, have a look at this repo: github.com/jagenjo/litegraph.js/ This is the node-based library that powers ComfyUI. Also, ComfyUI now has a Discord server, you can approach developers directly: discord.gg/MrtNUNEx I hope this helps. Good luck. 👍
@professorlich Thx for taking the time to respond and link some useful stuff. As for why TensorFlow: I am playing around with lucid-sonic-dreams, and it only works with an older TensorFlow. There are also working versions of lucid-sonic-dreams with PyTorch, but I had problems getting those to work. Thanks for your clear, understandable videos - they helped me create some custom nodes for this project. I first tried some different approaches, but in the end this is how I made it work: I created a simple node that just makes an API call to my custom Flask API, where I run the lucid-sonic-dreams stuff with all the right versions. So thx again, have a great day sir!
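That bridge pattern generalizes nicely. A minimal sketch of a ComfyUI custom node that delegates to an external Flask service running in its own Python 3.7 / TensorFlow 1.5 environment - the node name, endpoint URL and JSON fields are invented for illustration, though the INPUT_TYPES / RETURN_TYPES / NODE_CLASS_MAPPINGS anatomy is ComfyUI's standard one:
    import requests

    class FlaskBridgeNode:
        # Delegates the heavy work to an external Flask API, so the
        # legacy TensorFlow code never has to live inside Comfy's venv.
        @classmethod
        def INPUT_TYPES(cls):
            return {"required": {"audio_path": ("STRING", {"default": ""})}}

        RETURN_TYPES = ("STRING",)
        FUNCTION = "render"
        CATEGORY = "examples/bridge"

        def render(self, audio_path):
            # The /render endpoint and 'output_path' field are assumptions
            # about the hypothetical Flask service, not a real API.
            resp = requests.post(
                "http://127.0.0.1:5000/render",
                json={"audio_path": audio_path},
                timeout=600,
            )
            resp.raise_for_status()
            # ComfyUI expects node outputs to be returned as a tuple.
            return (resp.json()["output_path"],)

    NODE_CLASS_MAPPINGS = {"FlaskBridgeNode": FlaskBridgeNode}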
Awesome work! It's the kind of content I hoped it would be. Looking forward to the next video!
Thank you, more to come! :) Just need to tidy up my GitHub repo first. 👍 And those commands I promised.
Can anyone confirm whether this is correct? Automatic1111: Stable Diffusion web UI. ComfyUI: kāng fēi yōu ài (康菲优爱).
If you have a deep understanding, and can go beyond the surface and explain to people how things work underneath to give them a deeper understanding, people will listen. There are too many "click this button" tutorials. Comfy desperately needs people who explain what's actually going on - not just "click this button" - in a way the average user can understand, so that people can then use it the way they want and might get a grasp of its capabilities. The best channel I've seen so far that does this really well is LatentVision, but I wish he had the time to go even more in depth and make more videos. Thanks for what you're doing; I hope this will always stay free and open for everyone to see.
Kobe, regarding the last bit: if push comes to shove, I'd rather abandon this channel than abandon my values. GitHub has a short bio of who I am and what I stand for: github.com/LichAcademy/ComfyUI-Lich-Pack Let me put it this way: I would rather shut down this channel than put any content behind a paywall. And even then, before doing so, I'd share a OneDrive folder with direct downloads of all my videos with everyone. I wrote on my Patreon to would-be subscribers: "Education should be free and accessible to everyone. I won't hide any educational resources behind a paywall. However, I do want to express my gratitude to those who support me by... (etc.)" Nonetheless, thank you for the kind words, Kobe. ❤️
I'm even conflicted about YouTube monetization. I mean, is the video truly free if you pay for it with your attention or privacy? I do wonder how Wikipedia does it, what their business model is, etc. But I digress...
looking forward to learning from you Mr. Lich!
Shoutout to the Lich family! Keep the lectures rolling man <3
10:39 I misspoke at this point: *Models (not molecules)
Excellent tutorials. I enjoyed them very much. But how do we get the skull (or any) icon in our category? Apologies if you explained it and I missed that :) Please keep up the good work, and I will keep watching, even though YouTube wants to feed me Chinese content now rofl
Good question. If you are on Windows, hold down Windows Key + . (dot, punctuation mark). PS This is not a programming thing. It works on Discord, Notepad, Outlook, browsers... I am using it in this comment: 🥲▶️👍😅🙏 Edit. It may depend on the version of Windows you are using. Let us know if it works for you. 👍
@professorlich Yes, that worked perfectly on Windows 10. Thank you. Now I just need to figure out how to add custom icons to Windows. 🤣
Clearly structured and well done! Thank you for your excursion into history! 😊
Glad you enjoyed it! And yeah, I have a tendency to go off on tangents occasionally haha 😅
Thanks. I think I'll have to go back to the beginning.
OMG, thank you for the tuple information! I never understood what that meant!
Thank you so much! As a beginner, it's hard to find some of this information, and you are so clear and precise! More, more, more plz!
More to come! :) This weekend, fingers crossed. 👍
Chandrasekhar was another brilliant mind India gave us (off-topic, but I will put it here). He showed that electron degeneracy pressure is not sufficient to prevent the collapse of stars beyond a certain mass. This implied that such stars would continue to collapse into more compact objects (neutron degeneracy was not yet discovered, or fully developed, at the time, I believe). His ideas caused quite an uproar; he was even publicly ridiculed...
Looking forward to your next lecture, with or without pineapple😂
Coming soon! :) Thank you for the nice comment
I guess it is quite challenging to teach ComfyUI specifics combined with basic Python. You are on the right track: your style is improving, and everybody who follows your explanations improves too! ;-)
Good job! The translation is basically usable. But there are Chinese, Cantonese and English in this video, you're speaking very fast in places, and the pronunciation and intonation are a little weird. Maybe Chinese subtitles would be a better idea for Chinese speakers.
I followed your video and now I can't even open Stable Diffusion. I'm furious!! Why would you put up information that doesn't work? It was working perfectly until I tried this, and now it doesn't open. Whoever is watching this: do not try this. It took me days to even download Stable Diffusion, and I had so many things added to it already. Honestly, this is horrible. Do not try this video.
Honestly, it just sounds like you don't know what you are doing and are putting the blame on the guide.
Valentina, I am sorry this has happened to you, and thank you for letting me know. While I can't tell exactly what went wrong, there are two things I glean from your comment: a) "It took me days to even download stable diffusion" - I should have explained how to back up your project; and b) "I had so many things added to it already" - I should have explained how "adding to it" can be automated, so that installing things does not feel like a chore. I will try to do a better job explaining this in my next video.
Hi, Bro! Please correct the link to GitHub in the description. Thanks for the content
'Sup! 😎 The link works fine, from what I can see? A month ago (when you wrote this), I wasn't a 'verified member' on YouTube. 🤔 Perhaps YouTube did something to the hyperlink, to protect viewers from a "dodgy", "unverified" YouTuber such as me. 😅 (PS I have no idea why your comment only popped up now...)
Thank you so much for taking the time to make this video. Just one suggestion: the border and style of the background are distracting. Code can be visually confusing, and adding detailed graphic elements around it can be counterproductive. Also, having a huge border around everything reduces the scale and legibility of the screenshots. I only give these suggestions in hopes that your videos attract more viewers. I really appreciate how you explain things and your step-by-step approach - very helpful for non-coders. 🙏