Andrea Baioni
A Solution to AI Plastic Skin
Learn how to address the "plastic skin" issue common in AI-generated portraits. This video explores techniques to make AI-rendered skin appear more natural by adding realistic textures, depth, and color variations. Discover how tools like Perlin and Voronoi noise, depth maps, and advanced blending methods create lifelike skin effects.
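The video builds this in ComfyUI, but the core trick - procedural noise modulated by a depth map and blended back onto the masked face - is easy to prototype in plain Python. Below is a minimal sketch of that idea only, not the workflow itself; the file names, noise scale, and 0.08 strength are illustrative assumptions.

import numpy as np
from PIL import Image, ImageFilter

def value_noise(h, w, scale=8, seed=0):
    """Cheap Perlin-like value noise: upsample a small random grid with smoothing."""
    rng = np.random.default_rng(seed)
    small = rng.random((h // scale + 2, w // scale + 2))
    img = Image.fromarray((small * 255).astype(np.uint8))
    img = img.resize((w, h), Image.BICUBIC).filter(ImageFilter.GaussianBlur(1))
    return np.asarray(img, dtype=np.float32) / 255.0

# Assumed inputs: the portrait, a depth map, and a face mask (e.g. from YOLO Face)
portrait = np.asarray(Image.open("portrait.png").convert("RGB"), np.float32) / 255.0
depth = np.asarray(Image.open("depth.png").convert("L"), np.float32) / 255.0
face_mask = np.asarray(Image.open("face_mask.png").convert("L"), np.float32) / 255.0

h, w = depth.shape
noise = value_noise(h, w, scale=4)           # fine "pore" frequency
strength = 0.08 * depth * face_mask          # stronger where closer and on skin
# Overlay-style blend: push pixel values away from mid-gray by the noise amount
textured = portrait + (noise[..., None] - 0.5) * 2 * strength[..., None]
Image.fromarray((np.clip(textured, 0, 1) * 255).astype(np.uint8)).save("out.png")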
Want to support me? You can buy me a ko-fi here: ko-fi.com/risunobushi
Workflow: openart.ai/workflows/JakazCfWF9UWi2mOZTlf
Models:
- YOLO Face: you can find it in the models section of the ComfyUI Manager;
- Flux.1 dev Q4 GGUF: huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_0.gguf
Timestamps:
00:00 - Intro
00:37 - What is Plastic Skin?
01:09 - How Should Skin Look?
02:25 - Workflow Overview
02:53 - Face Segmentation and Noise
05:30 - Applying Random Noise as Skin Texture
06:41 - Correcting for Color
07:26 - Applying via Depth Map
08:43 - Fixing Color Variations
10:25 - Adding a FaceDetailer Pass
11:47 - More Results (Different Skin Tones, Hues, Lighting Conditions)
12:10 - Current Issues and Limitations
13:23 - Outro
#stablediffusion #stablediffusiontutorial #comfyuitutorial #comfyui
Views: 4,378

Videos

Un-Blur / Blur Your Photos with AI (ComfyUI, SD, Flux)
Views: 1.8K • 1 month ago
In this episode of Stable Diffusion for Professional Creatives, we explore how to manipulate focus and create beautiful blur effects in your images using custom workflows. Learn two different techniques: the deep blurrer and blurrer workflows, which allow you to selectively blur backgrounds or unfocused areas of your images while maintaining the overall structure and composition. Want to suppor...
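The video's blur workflows are node-based; as a rough illustration of the underlying idea, here is a sketch that mixes in a pre-blurred copy of an image weighted by each pixel's distance from a chosen focal depth. File names, the focal value, and the blur radius are assumptions, not the workflow's settings.

import numpy as np
from PIL import Image, ImageFilter

image = Image.open("photo.png").convert("RGB")
depth = np.asarray(Image.open("depth.png").convert("L"), np.float32) / 255.0

focal = 0.8                                    # depth value to keep sharp (near = 1.0)
blurred = image.filter(ImageFilter.GaussianBlur(12))

# Mix weight grows with distance from the focal plane
weight = np.clip(np.abs(depth - focal) * 2.5, 0, 1)[..., None]
sharp = np.asarray(image, np.float32)
soft = np.asarray(blurred, np.float32)
out = sharp * (1 - weight) + soft * weight
Image.fromarray(out.astype(np.uint8)).save("shallow_dof.png")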
Fun with Automated Loops in Stable Diffusion
Views: 2.3K • 2 months ago
Welcome back to Stable Diffusion for Professional Creatives! In this episode, we dive deep into the world of loops in ComfyUI and explore how they can streamline complex tasks like automated upscaling, face detection, and masking in image processing. What You'll Learn: - What Loops Are: Understanding loop start and end nodes and how to control loop behavior. - Loop Applications: Examples of upsc...
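For readers unfamiliar with ComfyUI's loop start/end nodes, the control flow they express is essentially the following plain-Python sketch; upscale_2x and the 2048-pixel target are placeholders standing in for whatever the loop body and end condition would actually be.

from PIL import Image

def upscale_2x(img: Image.Image) -> Image.Image:
    # Placeholder for a model-based upscaler node
    return img.resize((img.width * 2, img.height * 2), Image.LANCZOS)

img = Image.open("input.png")
target = 2048
iterations = 0
while max(img.size) < target and iterations < 4:   # loop-end condition
    img = upscale_2x(img)                           # loop body
    iterations += 1
img.save("upscaled.png")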
How I work when designing ComfyUI workflows
Views: 2.1K • 2 months ago
In this episode of Stable Diffusion for Professional Creatives, we're diving into something a little different from our usual workflow tutorials. Today, I’m breaking down the entire process of designing workflows and pipelines - essentially, how I handle client projects from start to finish. With over a decade of experience in fashion photography and four years in creative AI, I’ve developed a ...
Easy Inpainting for ANY model (SDXL, Flux, etc)
Views: 6K • 2 months ago
In this video, we'll take a look at two simple inpainting workflows for any model, be it Stable Diffusion 1.5, SDXL, Flux, etc. Workflows: - Simple: openart.ai/workflows/risunobushi/simple-inpainting-workflow-for-any-model-sdxl-flux-etc version-1/EpL2wearIDZDxMDNuatK - Advanced: openart.ai/workflows/1lZ0NQPoIokMI4iYinqk Models: - Xinsir Depth (SDXL): huggingface.co/xinsir/controlnet-depth-sdxl-...
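The linked workflows are built from ComfyUI nodes; for comparison, the same masked-inpainting idea expressed with the diffusers library looks roughly like this. The model id, file names, and strength value are illustrative assumptions, not the workflows' settings.

import torch
from diffusers import AutoPipelineForInpainting
from PIL import Image

pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("input.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("L").resize((512, 512))  # white = repaint

result = pipe(
    prompt="a red leather jacket",
    image=image,
    mask_image=mask,
    strength=0.85,          # how much the masked area is allowed to change
).images[0]
result.save("inpainted.png")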
Motion Graphics with Stable Diffusion - Video to Video
Views: 4.7K • 3 months ago
This video dives deep into a new Stable Diffusion workflow. We're going to explore how to achieve motion graphics animation effects in ComfyUI starting from a pre-existing video input. Want to support me? You can buy me a ko-fi here: ko-fi.com/risunobushi Shoutout to Ryanontheinside for creating this node pack: www.youtube.com/@UCWLSByG96v8vPHgZLv8UHVg Nodes: github.com/ryanontheinside/ComfyUI_...
Animating Products with Z-Depth Maps in Stable Diffusion (Houdini style)
Views: 5K • 4 months ago
This video dives deep into a new Stable Diffusion workflow. We're going to explore how to achieve Houdini-like Z-Depth manipulations and generate stunning animations using Stable Diffusion! This technique involves creating depth maps and using them to guide the diffusion process, resulting in smooth animations. Want to support me? You can buy me a ko-fi here: ko-fi.com/risunobushi Shoutout to R...
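As a rough illustration of driving animation from depth alone, here is a sketch that writes out a sequence of synthetic depth maps (a sliding gradient) that a depth ControlNet could then consume frame by frame. All values are arbitrary placeholders, not the video's setup.

import numpy as np
from PIL import Image

h, w, frames = 512, 512, 24
x = np.linspace(0, 1, w)[None, :].repeat(h, axis=0)

for i in range(frames):
    offset = i / frames
    ramp = (x + offset) % 1.0                  # gradient sweeps across the frame
    Image.fromarray((ramp * 255).astype(np.uint8)).save(f"depth_{i:03d}.png")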
Flux LoRAs: basics to advanced (single image, single layers training) with RunPod and AI-Toolkit
Views: 12K • 4 months ago
In this video, I'll be diving into the world of Flux LoRA training and showing you how I've been training my own custom LoRAs. We'll cover: - The basics of Flux LoRA training: How to set up a cloud GPU instance on RunPod and get started with AI Toolkit. - Experimental LoRa techniques: Learn how to train LoRAs with a single image and target specific layers for faster, more efficient results. - B...
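AI Toolkit drives training from its own config files, but the "target specific layers" idea generalizes. As a sketch of the concept using the peft library (module names and hyperparameters here are illustrative, not the video's settings):

from peft import LoraConfig

config = LoraConfig(
    r=16,                    # LoRA rank: lower = smaller and faster, less capacity
    lora_alpha=16,
    # Restrict training to a subset of layers, e.g. only attention projections,
    # instead of adapting every linear layer in the model
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    lora_dropout=0.0,
)
# config would then be passed to peft's get_peft_model(model, config) before training.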
Developing Complex Flux Workflows Kinda Sucks
Views: 3.7K • 4 months ago
From Dev's uncertain license to underperforming model releases and the lack of documentation, Flux's ecosystem is proving to be a bit underwhelming - or maybe it's just me and it's a skill issue? Want to support me? You can buy me a ko-fi here: ko-fi.com/risunobushi (no workflow this week because the shown workflow is just a troubleshooting mess) Timestamps: 00:00 - Intro 01:15 - Flux Dev's Li...
A Professional's Review of FLUX: A Comprehensive Look
Views: 12K • 5 months ago
In this video, we explore Flux - the groundbreaking new image generation model from Black Forest Labs. As a fashion photographer and AI workflow expert, I break down: What is Flux and how does it compare to previous models? The different versions: Schnell, Dev, and Pro My professional perspective on Flux's strengths and current limitations Detailed installation guide for ComfyUI Practical workf...
The Only Virtual TryOn I've Been Excited About - CatVTON ComfyUI
Views: 10K • 5 months ago
In this episode of Stable Diffusion Experimental, we explore CatVTON, a fantastic Virtual Try-On (VTON) tool that's great at creating working bases for generative clothes swaps. As a fashion photographer, I explain why this model excites me and how it outperforms previous VTON attempts. Wanna support me? Buy me a ko-fi here: ko-fi.com/risunobushi Workflow: openart.ai/workflows/HaxcrNaVvjae9pdku...
Multimodal AI Video Relight with IC-Light (ComfyUI, non-AnimateDiff)
Views: 5K • 5 months ago
Try RunComfy and run this workflow on the Cloud without any installation needed, with lightning fast GPUs! Visit www.runcomfy.com/?ref=AndreaBaioni , and get 10% off for GPU time or subscriptions with the Coupon below. REDEMPTION INSTRUCTIONS: Sign in to RunComfy → Click your profile at the top right → Select Redeem a coupon. COUPON CODE: RCABA10 (Expires August 31) Workflow (RunComfy): www.run...
A Great New IPAdapter with Licensing Issues: Kolors
Views: 6K • 5 months ago
A new, very good base model and IPAdapter were released, but the licensing is not that clear: Kolors! Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi Workflow: openart.ai/workflows/4bczMC6DtTZKktEBIUfU Install missing custom nodes via the manager, or from the GitHub via git clone: github.com/MinusZoneAI/ComfyUI-Kolors-MZ Kolors (checkpoint, place it in the models/UNET fo...
Photoshop to Stable Diffusion (Single Node, updated)
Views: 4.6K • 6 months ago
A very quick update to my previous Photoshop to ComfyUI tutorials, which, since Nima's updated their nodes, needed a bit of a refresh. Workflow: openart.ai/workflows/2ZePdBrzTz2Bi00BKJJz Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi This node needs to be installed from GitHub: github.com/iamkaikai/comfyui-photoshop Everything else can be installed via ComfyUI Manager b...
Magnific AI Relight is Worse than Open Source
Views: 14K • 6 months ago
Try RunComfy and run this workflow on the Cloud without any installation needed, with lightning fast GPUs! Visit www.runcomfy.com/?ref=AndreaBaioni , and get 10% off for GPU time or subscriptions with the Coupon below. REDEMPTION INSTRUCTIONS: Sign in to RunComfy → Click your profile at the top right → Select Redeem a coupon. COUPON CODE: RCABP10 (Expires July 31) Workflow (RunComfy): www.runco...
Turning Canva into a Real Time Generative AI tool
Views: 2.5K • 6 months ago
Multi Plane Camera Technique for Stable Diffusion - Blender x SD
Views: 6K • 6 months ago
Get Better Images: Random Noise in Stable Diffusion
Views: 4.4K • 6 months ago
Perfect Relighting: Preserve Colors and Details (Stable Diffusion & IC-Light)
Views: 11K • 7 months ago
Any Node: the node that can do EVERYTHING - SD Experimental
Views: 7K • 7 months ago
Stable Diffusion IC-Light: Preserve details and colors with frequency separation and color match
Views: 13K • 7 months ago
Relight and Preserve any detail with Stable Diffusion
Views: 20K • 7 months ago
Relight anything with IC-Light in Stable Diffusion - SD Experimental
Views: 15K • 8 months ago
Simple animations with Blender and Stable Diffusion - SD Experimental
Views: 9K • 8 months ago
Hyper Stable Diffusion with Blender & any 3D software in real time - SD Experimental
Views: 73K • 8 months ago
Adobe Mixamo & Blender to level up your poses in Stable Diffusion - SD for Professional Creatives
Views: 6K • 8 months ago
Stable Diffusion 3 via API in comfyUI - Stable Diffusion Experimental
Views: 4.6K • 8 months ago
Generating a fashion campaign with comfyUI - Barragán x Stable Diffusion for Professional Creatives
Views: 3.8K • 9 months ago
From sketch to 2D to 3D in real time! - Stable Diffusion Experimental
Views: 16K • 9 months ago
Photoshop x Stable Diffusion x Segment Anything: Edit in real time, keep the subject
Views: 6K • 9 months ago

Comments

  • @greyscale11th • 14 hours ago

    This is crazy awesome, man! Keep up your amazing work! 👏

  • @insane3953 • 19 hours ago

    Man, you're the first one to explain this topic clearly, keep going!!!

  • @johanmrch9316 • 21 hours ago

    Would it be possible to integrate a function where you can give the workflow a reference image, to guide the background generator in the direction you want it to go? :)

  • @SilentSnowStudios • 3 days ago

    This channel and comment section are an actual gold mine for the future of creation, whether business, professional, or casual content. What a blessing.

  • @janbrozek3672 • 3 days ago

    The pores are not randomly distributed across the face surface. Caucasians (I don't know if this is true for all humans) usually have more, larger, and deeper pores on and around the nose and on the forehead. That's the distribution of the seborrheic glands that make the nose and forehead "greasy" and reflect light more when no makeup is applied. Then it is not only pores but also tiny facial hairs, which diffuse light and are also not randomly distributed. Look in the mirror. Women have them too, just tinier, and they diffuse light more than the more reflective shaven skin of males. This and other factors you mentioned and did not mention (freckles, discolorations, pimples, moles, you name it) all contribute to "natural" human skin. I like your "automated workflow" and experimenting, but my impression is that we need to add a lot more imperfections to the skin to make it look natural. "The devil and the angels are in the details."

  • @marcosbrolio • 6 days ago

    Very cool. Now we need to create consistent characters, same armor and environment, and make 3D renders in real time (it's a fake 3D render, but we can animate it very fast).

  • @AndroKarpo • 7 days ago

    Don't mislead people: your video has nothing to do with classic inpainting, you just have a workflow with ControlNet.

  • @QuentinJay-c4s • 10 days ago

    I keep getting "no_segm_detector" although I have installed the custom node.

  • @anasibrahim1531 • 11 days ago

    Why not just fix it in post-production, for example with Photoshop? I see that as a quicker and simpler way. It may be tricky, but I think it is the most efficient way to date. Anyway, thanks for the video, I am a fan of your channel! :D

  • @MahdiDakhli • 13 days ago

    Absolutely fantastic work. I hope you'll go through it and amaze us as always.

  • @FelipeSantos-ff1kt • 15 days ago

    Thanks so much for this content! Please continue. It's so important.

  • @calvinherbst304 • 17 days ago

    Thanks for sharing such thorough documentation. You do a great job of making this process digestible. I have some questions regarding object LoRAs for inpainting pipelines. If I plan on using the LoRA in an inpainting pipeline, should I train on the Flux-fill base model instead of the standard Flux dev? What has worked best for you when training LoRAs designed for use in inpainting pipelines? Since making this video, what have you learned about config settings for training object LoRAs with datasets of 5-10 images?

  • @ChaiBiscuits-tf7sr • 19 days ago

    I am no fashion editorial photographer, but I've been spending some days on SECourses' tutorial on FLUX LoRA training with Kohya on RunPod and MassedCompute, as well as spamming ChatGPT with questions, and FLUX training is heavily dependent on GPUs for SURE, Andrea... especially since I am training on high-resolution datasets. I am using 4 x A40s and it's taking me 5 hours-ish; please tell me I am not wasting my money... can't wait for the result. I am also curious how we can incorporate Flux into Photoshop with ComfyUI and Stable Diffusion? Pretty cool anyway. I admire your workflow, it looks like it gives full control for creativity, but it's also quite time consuming, and knowing myself I'd be spending days on a few images... I am looking for tools that can do it fast and at high quality all in one place, and I'm already finding solutions... now I am looking at a free course I found on AI for product and fashion. I want to do my own store and promote it with quality style, presence and branding on socials etc. It's gonna take me months to years to nail it but fu.. working for others.

  • @travislittle4381 • 24 days ago

    Did I miss the posting of this workflow?

  • @pixelcounter506 • 25 days ago

    Thank you very much for offering some of your insights on this topic, Andrea! I like your photographic approach to these kinds of images, and that you introduce a path to a (more) realistic result. Everybody who has made images of real people knows that there are different areas of light, textures, pores, and imperfections across everybody's face. But that way of thinking doesn't fit into the concept of AI, or into the minds of a lot of AI users who aren't familiar with the details of realistic photography (e.g. composition, colors, textures, lighting). At your starting position you have to think about the presentation of your subject. Do you want a "real" impression of that person, or an image of someone who is close to reality but enhanced with tools like PS (meaning everything like colors, lights, blemishes, skin impurities, contrast)? I go with the latter, but I don't want to exaggerate so that the image/person looks photoshopped (or only a little bit). Your approach with noise, color and depth is very interesting. I was playing with the blend and noise nodes as well, but hadn't come to the idea of using a depth map. Usually you have more noise, visual details and imperfections in shadow/more grey areas. Therefore I want to go in the direction of highlights and shadows. Furthermore there is the color issue of different skin areas. I'm thinking about the average color for darker and lighter skin areas, but I haven't finalized my approach to solving these problems yet. Therefore I'm looking forward to downloading your workflow (thanks very much for your service!) and playing with your nodes. Greetz

    PS: I always see another problem with AI images of people... the "burned" result with too much contrast and saturation. I think both issues influence each other, and the blend nodes help in that respect, too (color and contrast corrections).
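One way to prototype the darker/lighter skin-area color idea from this comment is to split the masked skin pixels into luminance bands and measure each band's average color, so corrections can be applied per band. A minimal sketch, with all file names assumed:

import numpy as np
from PIL import Image

img = np.asarray(Image.open("face.png").convert("RGB"), np.float32) / 255.0
mask = np.asarray(Image.open("skin_mask.png").convert("L"), np.float32) > 0.5

# Rec. 709 luminance, then split skin pixels at their median luminance
luma = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]
thresh = np.median(luma[mask])

shadow_avg = img[mask & (luma < thresh)].mean(axis=0)
highlight_avg = img[mask & (luma >= thresh)].mean(axis=0)
print("shadow RGB:", shadow_avg, "highlight RGB:", highlight_avg)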

  • @neaonsjaji • 26 days ago

    You are so underrated. I like that you combine an artist's perspective with a technical explanation. Thanks for teaching us, and keep it up!

  • @aysenkocakabak7703 • 27 days ago

    Great, I learned a lot. I feel so good about it :)

  • @FanVerseMedia • 29 days ago

    I keep having issues with the CatVTON wrapper even though it is all installed correctly.

  • @jonh2o • 29 days ago

    Back in like 2015/2016, eBay did a bolt-on purchase of a company called Phi Sixx and never did anything with it. I was always disappointed, but now we have this and Kaiber and Kolors VTONs.

  • @ankur122k • 29 days ago

    8:41 If the source image doesn't have great resolution, couldn't we also add a step in ComfyUI to upscale it before sending it to CatVTON for processing?

  • @NinoLouLeChenadec • 29 days ago

    Really nice workflow, Andrea :) For that I use an inpaint mask + ControlNet to keep the lines, and add a skin LoRA + guided prompting. But your results seem to be a little bit more natural! Thanks for your video!

  • @gnsdgabriel • 29 days ago

    Thanks for sharing. Check your mic settings, the audio is clipping.

  • @parisallier • 1 month ago

    I've achieved some realistic skin texture by upscaling with InvokeAI's upscale tab and the 4x-UltraSharp model. The only issue I've found is that it only works on close-ups and medium shots; on full shots it changes some features of the face, and I haven't found out how to solve that.

  • @ronbere • 1 month ago

    Far too complicated, when it could be so much simpler.

  • @alexvillabon • 1 month ago

    Great stuff as always. Thanks Andrea.

  • @AkshatDobhal-m3h • 1 month ago

    this is great freaking work man

  • @metasaman • 1 month ago

    Wow, this is very complex but highly fascinating.

  • @nikgrid • 1 month ago

    Andrea, well done! Yes, my person did look a little like an excessive case of measles due to the Voronoi cell size. Could you use a pre-rendered image (made in Photoshop, for example) instead of the native Voronoi pattern generator? Anyway, thanks, subscribed.

    • @risunobushi_ai • 1 month ago

      yes, you can! although I didn't want to go that route because then I'd have to share different voronoi texture sizes for everyone lol

  • @freneticfilms7220 • 1 month ago

    sameface fix? realistic checkpoint?

    • @risunobushi_ai • 1 month ago

      Yeah, but those are model-dependent, inference-based solutions, whereas I want to research a (somewhat) model-agnostic solution :)

  • @turkeybuttsammich • 1 month ago

    Great vid, loved the epidemic-themed intro, lol... Two tips: in Midjourney, you can use a photo of a man with overly porous skin in sref, at about sw 5 or 10, which sometimes helps. A quick fix is to run an image through Leonardo's upscaler, which can instantly add whatever degree of "real" you may be trying to achieve. Sometimes it goes too far with the blemishes, but there's an option for text input during the upscaling, so I usually add "clean skin, detailed pores, skinhair..." or something like that. Both tips work about 80% of the time.

  • @lucifer9814 • 1 month ago

    Is it just me, or does anyone else have an issue running the FaceDetailer node? With the most recent update, a lot of ComfyUI nodes broke, including pretty much all the face ID tools like PuLID, EcomID, and the rest of that category. But apart from those, I can't seem to run the FaceDetailer node either; it gives me a huge error each time I run it.

  • @Han3D • 1 month ago

    You always bring great content! It's similar to a 3D workflow, very interesting. It seems like Comfy has so much potential. Thank you so much, Andrea!

  • @dadrian • 1 month ago

    Nice idea. If I find time, I think I'll bundle that whole workflow into a Nuke Gizmo, as I retouch my pictures in Nuke anyway.

    • @risunobushi_ai • 1 month ago

      man I seriously need to pick up Nuke sometime in the future, I keep postponing it

  • @neoneil9377 • 1 month ago

    Plastic skin is already solved in SDXL. Install the Lustify model from Civitai and put this in your negative prompt: "CGI look, non-photo real, fake skin, CGI skin, anime, 3d rendered skin, fake look, plastic skin, plastic look". Use any of the Fuji XT4 camera and DOF film grains, subsurface scatter, f1.3 in your positive prompt, and you will be amazed how realistic it will look. I have been struggling with Flux LoRAs for realism and have used several realistic fine-tuned Flux models, and none of them can compare to the realistic skin of SDXL fine-tuned models.

  • @Filokalee999 • 1 month ago

    Very good workflow, indeed! Also, your reference images are excellent fashion editorial images. May I ask what checkpoint(s) you recommend for fashion images?

    • @risunobushi_ai • 1 month ago

      these were all generated through flux.1 dev Q4_0 GGUF, nothing else really - I've simply been a fashion photographer for 10 years, that's where I'm "cheating"

  • @ArrowKnow • 1 month ago

    Love it! I always enjoy when a new video drops because it is apparent that you put a lot of thought into them and I know that I am going to learn something interesting. Usually you make me look at things from a new perspective which is always useful. Thank you!

  • @JacekPilarski • 1 month ago

    As someone who has been using Comfy for 1.5 years, I think you should focus on integrating img2img and upscaling/latent-upscale techniques to maximize the potential of the model, rather than trying to fix complex details like human skin with "randomly generated noise" in one go. Start by not generating with guidance >3 for both distilled and de-distilled models.

    • @risunobushi_ai • 1 month ago

      IMO there's nothing inherently wrong with using random noise for skin - I'm testing a theory about using 3D shading techniques in a different medium, rather than relying on upscaling. If the theory is sound, then I can develop a model-agnostic technique that can be applied to any model, regardless of how good it is at generating skin textures. If I'm wrong, I'll have fun trying to build things, so that's good! I've been working in gen AI for four years now, and I like researching how to apply different, non-gen-AI techniques to the medium.

    • @JacekPilarski • 1 month ago

      @@risunobushi_ai I get your point, but your results won't be realistic in the sense of achieving photorealism using randomly generated patterns. While every model is better than the previous one, there is no point in adding more with messy noise. I'm just trying to be helpful because I've watched your channel from the beginning. You could try playing with latent upscale methods, along with "Redux Inpaint IP adapted", where you could source a skin texture reference from the input image, for example. That would be something.

    • @lacerhigashi2326 • 16 days ago

      @@JacekPilarski Would like to see that happen. Ping me please !

  • @baheth3elmy16 • 1 month ago

    This is revolutionary!!!!! I can't wait till you create the nodes as you said..

  • @luisfelipemurguiaramos659 • 1 month ago

    Your solution is very promising and elegant. It can work not only for skin but for any type of material, as this issue isn't limited to skin but affects almost the entire image (plastic textures in clothing, etc.). Additionally, I notice that many limitations stem from technical constraints in ComfyUI nodes, which could be resolved by creating a custom node. Mateo (th-cam.com/video/tned5bYOC08/w-d-xo.html) also presented a noise-injection approach, which adds more "details" by injecting noise. It's ironic that diffusion images lack noise when they are generated from pure noise itself. We could collaborate on creating a custom node to solve this issue. I have extensive knowledge of ComfyUI's operation and node development, but I would benefit from a professional creative's perspective.

    • @risunobushi_ai • 1 month ago

      Yes, in theory adding random noise to anything makes it more realistic - the same way applying a few imperfections to a surface in 3D shaders makes for better assets. I can't pick up the collab invitation though, because I'm working on both a public release and a private release of this workflow for my day job, so I can't take your inputs for my own gains - but thank you nonetheless!

  • @Vinz-VYG • 1 month ago

    Why not use LoRAs? There are plenty available to improve skin

    • @kennedysworks • 1 month ago

      I've run a lot of tests... LoRAs can also end up distorting shapes. In the end, post-processing looks like the better option.

    • @risunobushi_ai • 1 month ago

      This is a first step towards a model-agnostic solution, so I wanted to experiment with as few models as I could in order to find a solution that could work regardless of the models used during the generation of the base image

    • @Vinz-VYG • 1 month ago

      @@risunobushi_ai I understand, thank you for your answer. But to me, it seems more like "I need to fix my mistake" than "I chose the right tools for the job". Nevertheless, the approach taken in the video is interesting.

    • @WhySoBroke • 1 month ago

      LoRAs are small and limited, and also limited in terms of knowledge of where to inject the noise. The reason is that style LoRAs are trained on a variety of images and work best when applied to images similar to the training dataset. To improve skin, I normally use a second-pass refiner or model-based upscaling. These two methods increase the time it takes to generate the image, which is why I think this is an interesting approach.

    • @WhySoBroke • 1 month ago

      Interesting approach… A limitation of this method is that it is tricky when dealing with other body parts, such as arms, or a model wearing a swimsuit. Masking is not as easy in those cases.

  • @kennedysworks • 1 month ago

    One more thing: it would be even better to use the "FACE PARSING" custom node together with this.

    • @risunobushi_ai • 1 month ago

      Yep that one’s better! Although it involves a ton of nodes

  • @bgtubber • 1 month ago

    Funnily enough, most of the popular SDXL and even SD 1.5 models create much more realistic skin textures than Flux without having to resort to weird tricks. 🤔

    • @JacekPilarski • 1 month ago

      Then you are doing something wrong; Flux gives pure realism if used with the correct settings.

    • @bgtubber • 1 month ago

      ​@@JacekPilarski Yes, there are ways to get better realism, but not without losing prompt adherence. For me, the default settings, which give good prompt adherence, have always given me plastic-y looking skin (and more CGI-like appearance overall). I know you can decrease Flux Guidance from 3.5 to ~2 for more realistic output, but then you start losing the image coherency and the prompt adherence. It's basically a trade-off between realism and prompt adherence. Do you have any suggestions for getting more realistic results while also keeping the prompt adherence and coherency of the higher Flux Guidance?

    • @ChanhDucTuong • 1 month ago

      Do you mean Base SDXL/SD1.5 vs Flux Dev? Or do you mean best finetuned SDXL/SD1.5 vs Flux Dev? The first one is not true imo, and the 2nd one is not fair.

  • @kennedysworks • 1 month ago

    Exactly! As it happens, I had been thinking about the same process myself. It's actually the same approach used in Photoshop and 3D. From the physics standpoint that light is a substance with zero mass, this phenomenon also arises because generative AI treats light as simply "nonexistent". To the AI, any form is just a lump of matter with different colors.

    • @risunobushi_ai • 1 month ago

      My hope for the future is that we’re able to train “PBR Materials” ControlNet to drive roughness, metallic, IOR, etc. but for now yeah, subsurface scattering is just a foreign concept to AI

  • @AlastairGreen • 1 month ago

    Thank you as always.

  • @pjosxyz • 1 month ago

    how is this channel still sub 100k

    • @risunobushi_ai • 1 month ago

      this is more my pastime, so I'm very happy with whatever sub count I get!

  • @digitaldepictionmedia • 1 month ago

    I am definitely testing this tomorrow. Just one question: do you think this will work on the intricate details and designs of jewellery? That is something I am looking forward to, as I have a jewellery business as well.

    • @risunobushi_ai • 1 month ago

      there's a more recent version that should work with jewelry, as long as you don't want refraction to go through the jewel itself: th-cam.com/video/GsJaqesboTo/w-d-xo.html

  • @puruzsuz31 • 1 month ago

    where the fuck is the masking part

  • @Lily-wr1nw • 1 month ago

    Learned a lot! Thanks, master.

  • @generated.moment • 1 month ago

    I keep getting "Clip Vision Model Not Found". Edit: I solved the issue by downloading "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors" and, of course, renaming it; this goes in the clip_vision directory.

  • @mohammadbishalahmed8784 • 1 month ago

    May I know the name of this Mixamo animation?