ComfyUI consistent characters using FLUX DEV

  • Published Nov 27, 2024

Comments •

  • @maximinus1972
    @maximinus1972 25 days ago +4

    This is by far the best explanation of setting up Flux + ControlNet I have seen so far, since you actually explain everything rather than just "here's my over-complicated workflow!". The node layout is so nice and clean. You did more than enough to earn a sub and a like from me. Keep it up!

    • @goshniiAI
      @goshniiAI 25 days ago

      I am glad to hear that the step-by-step approach was clear and helpful for you. Your support is encouraging, and I appreciate your sub and the like. Thank you so much for your time and the amazing feedback.

  • @jamessenade3181
    @jamessenade3181 5 hours ago

    Thanks bro... I love the way you detailed the whole process... you are a rock star, thank you.

    • @goshniiAI
      @goshniiAI 5 hours ago

      You are very welcome, and thank you for your compliment.

  • @zoewilliams2010
    @zoewilliams2010 11 days ago

    Much love from South Africa! Thank you for this video!!! I'm busy making a short horror movie for fun using Flux Dev and KLING for image-to-video, and this is EXACTLY what I need! I need to make consistent characters, but I only have one input image of the character as reference. Man, I didn't know they had a character pose system for Flux yet. THANK YOU!!! :D This needs to be ranked higher in Google!

    • @goshniiAI
      @goshniiAI 11 days ago +1

      You are very welcome! I am glad it was helpful for your short horror film project, and I appreciate your feedback. It is always great to connect with local creators, especially since I am currently in South Africa. Happy creating!

  • @240dbprisms5
    @240dbprisms5 a month ago +1

    OMG bro, just what I need 🔥🔥 THANK YOU. Clear rhythm, working method.

    • @goshniiAI
      @goshniiAI a month ago

      You are most welcome. I am glad to read your feedback. 💜

  • @Gimmesomemore2012
    @Gimmesomemore2012 27 days ago +1

    Thank you very much for this tutorial... at the right speed and with a detailed explanation.

    • @goshniiAI
      @goshniiAI 27 days ago

      Thank you so much for the kind words!

  • @yangli1437
    @yangli1437 2 days ago

    Thanks so much for your hard work, very useful videos.

    • @goshniiAI
      @goshniiAI 2 days ago

      You are very welcome! I appreciate your encouraging feedback. Thank you!

  • @pizza_later
    @pizza_later 2 months ago +2

    So helpful. Thank you for starting fresh and walking us through each step. Definitely earned a sub.

    • @goshniiAI
      @goshniiAI 2 months ago

      Thank you so much! I’m honoured to have earned your subscription and glad you found this helpful.

  • @devnull_
    @devnull_ a month ago

    Thanks and it is nice to see a cleaner node layout, instead of a jumble of nodes and connections, which too many Comfy tutorial makers seem to love.

    • @goshniiAI
      @goshniiAI a month ago

      I am glad it was helpful! Thank you for the observation and feedback. It means a lot.

  • @kajukaipira
    @kajukaipira 2 months ago +2

    Amazing, concise, understandable. Congrats man, keep the good work.

    • @goshniiAI
      @goshniiAI 2 months ago

      Thank you so much! I appreciate it.

  • a month ago +1

    Just wanted to say, you are amazing!!

    • @goshniiAI
      @goshniiAI a month ago

      Hearing that means so much. Thank you for your support.

  • @ielohim2423
    @ielohim2423 17 days ago

    This is amazing! Thank you so much. Subscribed!

  • @sergeysaulit
    @sergeysaulit 2 months ago +1

    Thank you! It’s good that you just tell and show what to do and how to do it. Otherwise you could spend your whole life learning ComfyUI)). Learning in the process, through practice, is easier.

    • @goshniiAI
      @goshniiAI 2 months ago

      I'm really glad to hear that the straightforward approach is helping you! Just diving in and practicing as you go makes it a lot easier. Thanks again for the feedback!

  • @willmobar
    @willmobar 27 days ago +1

    Thank you, you are excellent!

    • @goshniiAI
      @goshniiAI 27 days ago

      That's very kind of you!

  • @cosymedia2257
    @cosymedia2257 4 days ago

    Thank you!

    • @goshniiAI
      @goshniiAI 4 days ago

      You are more than welcome.

  • @ainaopeyemi339
    @ainaopeyemi339 2 months ago +1

    I love this, already subscribed

    • @goshniiAI
      @goshniiAI 2 months ago

      Thank you for being here. I appreciate your support.

  • @wrillywonka1320
    @wrillywonka1320 23 days ago

    Also, for anyone experiencing an issue downloading the YOLO model: go into the ComfyUI folder (ComfyUI > custom_nodes > ComfyUI-Manager) and you will find a config file. Open it in a text editor, and where it says bypass_ssl = False, change False to True and save. Restart ComfyUI and you will be able to download the YOLO model, no problem.
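
    For anyone who prefers to script the edit described above, here is a minimal Python sketch of the same change. The config file location and the [default] section name are assumptions, so verify them against your own ComfyUI-Manager install:

    ```python
    # Flip bypass_ssl to True in ComfyUI-Manager's config file, as the
    # comment above describes. The config path and the "default" section
    # name are assumptions -- check them against your own install.
    import configparser
    from pathlib import Path

    def enable_bypass_ssl(config_path: str) -> str:
        cfg = configparser.ConfigParser()
        cfg.read(config_path)
        if "default" not in cfg:          # create the section if absent
            cfg["default"] = {}
        cfg["default"]["bypass_ssl"] = "True"
        with open(config_path, "w") as f:  # write the modified config back
            cfg.write(f)
        return cfg["default"]["bypass_ssl"]

    if __name__ == "__main__":
        # Hypothetical default location; adjust to where ComfyUI lives on disk.
        path = Path("ComfyUI/custom_nodes/ComfyUI-Manager/config.ini")
        if path.exists():
            print(enable_bypass_ssl(str(path)))
    ```

    Restart ComfyUI afterwards, exactly as the commenter says, and consider setting the flag back to False once the download succeeds, since it disables SSL verification.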

  • @JoeBurnett
    @JoeBurnett 2 months ago +3

    Great video as always! Thanks!

    • @goshniiAI
      @goshniiAI 2 months ago

      Thank you for your encouragement.

  • @cleverfox4413
    @cleverfox4413 2 months ago

    Really good explanation, keep up the good work :)

    • @goshniiAI
      @goshniiAI 2 months ago

      Thank you for the motivation! I'm glad I could help.

  • @sudabadri7051
    @sudabadri7051 2 months ago +1

    Superb work mate

    • @goshniiAI
      @goshniiAI 2 months ago +1

      Thank you so much, Suda! Love

  • @calvinnguyen1451
    @calvinnguyen1451 a month ago

    Dope stuff. You rock!

    • @goshniiAI
      @goshniiAI a month ago

      I appreciate that! Thank you!

  • @devon9374
    @devon9374 28 days ago

    Great video!

    • @goshniiAI
      @goshniiAI 27 days ago

      I'm glad you enjoyed it!

  • @E.T.S.
    @E.T.S. 2 months ago

    Very helpful, thank you.

    • @goshniiAI
      @goshniiAI 2 months ago

      I appreciate your feedback.

  • @pixelist999
    @pixelist999 6 days ago

    Great tuts! Helped me install flux1 seamlessly. However, I don't seem to have DWPreprocessor or ControlNet Apply in my drop-down lists. I get this message in the Manager: 【ComfyUI's ControlNet Auxiliary Preprocessors】Conflicted Nodes (3)
    AnimalPosePreprocessor [ComfyUI-tbox]
    DWPreprocessor [ComfyUI-tbox]
    DensePosePreprocessor [ComfyUI-tbox]
    So I uninstalled ComfyUI-tbox and still no joy. Do you have any suggestions?

  • @wrillywonka1320
    @wrillywonka1320 24 days ago

    I can't lie, this was the best consistent-character video for sure! Is this able to work with SD3.5?

    • @goshniiAI
      @goshniiAI 24 days ago +1

      Thank you for coming here, and I appreciate your feedback.
      Yes, it is possible! Just keep in mind that SD3.5 might need the right ControlNet models and slight adjustments to the ControlNet parameters to achieve the same consistency, since it has a few differences in model handling.
      If you can tweak those and add the right nodes, you should be able to get great, consistent characters!

    • @wrillywonka1320
      @wrillywonka1320 24 days ago

      @goshniiAI Well, since I'm super new to ComfyUI, I guess I'll just wait for someone to make a video about it. By the way, great video! I would use Flux, but I heard Flux has very strict commercial-use rules.

  • @Ozstudiosio
    @Ozstudiosio 2 days ago

    Perfect, but what if I want to use an image as input instead of a prompt?

  • @V3ryH1gh
    @V3ryH1gh 25 days ago +1

    When doing the first queue prompt for the AIO Aux Preprocessor, I just get a blank black image.

    • @goshniiAI
      @goshniiAI 24 days ago

      Double-check that your image resolution matches the AIO's setup; mismatches can sometimes be the cause. Also, tweaking the strength values for ControlNet can help the aux preprocessor interpret the image better. It took me a bit of experimenting with these settings too! I hope this helps.

    • @Retrocaus
      @Retrocaus 22 days ago

      @@goshniiAI I still get a blank image. Also, the strength comes after the preprocessor's save image, so I don't think it affects it?

  • @diaitigai9856
    @diaitigai9856 2 months ago

    Great content in your video! I really enjoyed it. One suggestion I have is to improve the echo in your voice using a tool called Audacity. It can help enhance the audio quality significantly. Feel free to contact me if you need any help with that. Keep up the good work!

    • @goshniiAI
      @goshniiAI 2 months ago +1

      Thanks a lot for the awesome suggestion and kind words! I am considering using Audacity; I've heard it's great, so I'll definitely give it a try. If I run into any issues, I might take you up on your offer to help! Thanks again for watching and giving me some really helpful input.

  • @RoN43wwq
    @RoN43wwq 2 months ago

    Great, thanks

    • @goshniiAI
      @goshniiAI 2 months ago

      You are welcome!

  • @hmmrm
    @hmmrm 2 months ago

    THANKS

    • @goshniiAI
      @goshniiAI 2 months ago

      You're welcome!

  • @petttertube
    @petttertube 2 months ago +1

    Thank you very much for this priceless video. You say the parameter cfg is chosen to be 1 because we are not using the negative prompt. As far as I know, Flux doesn't use negative prompts, so I am a bit confused; could we just remove the negative prompt node from the workflow?

    • @goshniiAI
      @goshniiAI 2 months ago +1

      You are welcome, and entirely correct. However, the KSampler will still require a negative conditioning input, so the negative prompt node is linked for that.

  • @AIChandu77
    @AIChandu77 2 months ago

    Thanks

    • @goshniiAI
      @goshniiAI 2 months ago +1

      You're welcome!

  • @nickfai9301
    @nickfai9301 a day ago

    How do I use the image reference in animation?

  • @pushingpandas6479
    @pushingpandas6479 2 months ago

    thank you!!!!

    • @goshniiAI
      @goshniiAI 2 months ago

      You're welcome!

  • @kagawakisho4382
    @kagawakisho4382 2 months ago

    Thanks for the video. This is awesome. Do you use this to create LoRAs? Or what do you use the character sheets for?

    • @goshniiAI
      @goshniiAI 2 months ago

      I haven't specifically used this workflow to create LoRAs, BUT character sheets can definitely be a foundation for that. They help you capture a character in different poses and perspectives, making it easier to feed consistent images into training processes for LoRAs.
      They are also super useful for game development, animation, or just keeping a consistent look across different art projects.

  • @Usermx0101
    @Usermx0101 2 months ago +1

    Great video. I wonder what system specs you run this on. I ran out of VRAM with a 20GB card using GGUF flux-dev-Q5, so I guess I might be doing something wrong.

    • @goshniiAI
      @goshniiAI 2 months ago +1

      I've got an NVIDIA RTX 3060 card with 12GB. It's happened to me a few times. Just make sure to close all the apps that might be using your GPU. You could also try using an upscale of 2 instead of 4. And sometimes, saving the workflow and then restarting ComfyUI helps things run smoother.

  • @pumbchik5788
    @pumbchik5788 2 months ago +3

    For the pose reference, can we add our own pics posing as we like? Will it work?

    • @goshniiAI
      @goshniiAI 2 months ago

      Yep!!! You can use any picture, and then you'll need ControlNet to extract your pose.

  • @AIRawFootages
    @AIRawFootages 11 days ago

    It shows "(IMPORT FAILED) ComfyUI's ControlNet Auxiliary Preprocessors" when I try to install ControlNet Auxiliary Preprocessors... anyone please help.

    • @goshniiAI
      @goshniiAI 11 days ago

      Make sure you're running the latest version of ComfyUI. Sometimes, older versions don’t play well with newer add-ons.

  • @Fret-Reps
    @Fret-Reps a month ago

    IDK if you can help me, but I've had problems with this AIO Preprocessor:
    AIO_Preprocessor
    'NoneType' object has no attribute 'get_provider'. Please help.

    • @goshniiAI
      @goshniiAI a month ago

      A missing or outdated dependency can cause this, so make sure to update ComfyUI.
      Otherwise, you can continue to use individual preprocessors for each ControlNet model; that will still work fine.

  • @ImHewg
    @ImHewg 2 months ago

    How do you get the super cartoony prompts, like that cool robot? I keep generating 3D characters.
    Sweet workflow! Subbed!

    • @goshniiAI
      @goshniiAI 2 months ago

      Welcome on board! Here is the prompt for that.
      A Cyberpunk Mecha Kid, concept art, character sheet, in different poses and angles, including front view, side view, and back view, turnaround sheet, minimalist background, detailed face, portrait.

  • @秦奕-f9k
    @秦奕-f9k 2 months ago

    Great AI master

    • @goshniiAI
      @goshniiAI 2 months ago

      Thank you, Sensei!

  • @LaMagra-w4c
    @LaMagra-w4c 9 days ago

    Love your videos. I purchased the pack including the one in this video, but I'm having issues. I keep getting the following error: 'CheckpointLoaderSimple
    ERROR: Could not detect model type of: flux1-dev-fp8.safetensors'. Where would I download the correct model for this to work?

    • @goshniiAI
      @goshniiAI 9 days ago +1

      Thank you for supporting the channel. Make sure you're grabbing the specific FP8 version of the model and placing it in the models/checkpoints folder within your ComfyUI directory.
      Double-check that the file name hasn’t changed (e.g., flux1-dev-fp8.safetensors) and that it's saved in the right format. If you need further guidance, feel free to view this step-by-step video: th-cam.com/video/TWSFej_S_bY/w-d-xo.htmlsi=hWosspilbjYj3QWl

    • @LaMagra-w4c
      @LaMagra-w4c 9 days ago

      @@goshniiAI Thank you! It worked, but is it normally very slow when it hits the first KSampler? It takes forever to get through this point.

    • @goshniiAI
      @goshniiAI 9 days ago +1

      @@LaMagra-w4c Yes, FLUX Dev can be a bit sluggish when it hits the first KSampler. It’s not just you!
      Here are a few tips to speed things up: use quantized models, lower the sampling steps, and make sure your GPU and VRAM aren't being held back by other stuff running in the background.

  • @bananacomputer9351
    @bananacomputer9351 2 months ago

    Wow nice

    • @goshniiAI
      @goshniiAI 2 months ago

      Thank you!

  • @TheBearmoth
    @TheBearmoth 2 months ago

    Great video, very helpful! What kind of spec do you need for this flow?
    I'm able to run some Flux1D stuff, but ComfyUI keeps getting killed for taking too much memory with this workflow :(

    • @goshniiAI
      @goshniiAI 2 months ago +1

      Thank you! I'm glad you found the video helpful. If you’re already running Flux1D, ideally you’d want at least 12GB of VRAM for smoother runs. You can try lowering the resolution of the inputs or using quantized models to reduce memory usage.

    • @TheBearmoth
      @TheBearmoth 2 months ago

      @@goshniiAI any system RAM requirements? That's given me grief in the past, before I upgraded it.

  • @ttthr4582
    @ttthr4582 23 days ago

    How do I know which other models are trained for use with ControlNet? I basically want to create a 2D cartoon character turnaround sheet using your workflow.

    • @goshniiAI
      @goshniiAI 19 days ago

      Hello, and thank you for watching and engaging. ControlNet only conditions your prompt to take the specific pose you want. To find models that work smoothly with ControlNet, you can explore Civitai; sometimes the models include detailed tags indicating ControlNet compatibility. However, the majority of models are trained to work with ControlNet.
      For that 2D cartoon character turnaround, try searching for models tagged with styles like “cartoon” or “illustration.”
      I hope this helps.

  • @poptasticanimation55
    @poptasticanimation55 10 days ago

    My AIO Aux Preprocessor is not working; it says it's not in the folder. What should I be looking for in that folder, and if it's not there, where can I get the preprocessor?

    • @goshniiAI
      @goshniiAI 10 days ago

      First, double-check that the ControlNet Auxiliary Preprocessors folder is present in your ComfyUI directory [ custom_nodes/ControlNet ].
      If it’s missing, you can download the necessary files by using the Manager.
      Then make sure you update ComfyUI to the latest version.

  • @personaje27
    @personaje27 a month ago

    Hi bro, thanks for the video. Which PC do you recommend for all of this? I'm trying to get a laptop, and I don't want to make mistakes, as I want it for traditional video editing and AI video/image generation.

    • @goshniiAI
      @goshniiAI a month ago +1

      Aim for at least an NVIDIA RTX 3060 or higher with 6GB or more of VRAM. This will help with both rendering in video editing software and running AI generation workflows efficiently.
      Also, 32GB of RAM is ideal for smooth performance, especially when multitasking or running resource-heavy AI models.

  • @phenix5609
    @phenix5609 a month ago

    Any idea why I can't get it to work? Strangely, I get your workflow correctly from the link you provide and generate my image with the 3 views like you (before applying the ControlNet). Then I run the workflow again to apply the ControlNet pose (it shows like in the video with the reference image provided, and I see the pose extracted correctly). But when I run the workflow trying to apply the ControlNet, instead of the 3-view picture, I don't get the panel view applying the previously generated character to the ControlNet pose, but a single centered character... I'm really not sure what went wrong lol, so if you have any idea, thanks.

    • @goshniiAI
      @goshniiAI a month ago

      Thank you for diving into the workflow! Here are a few tips that might help:
      - Before you run the workflow again, make sure the reference images for ControlNet are lined up right. Take a look at your positive prompt and think about adding multiple views if you haven’t already.
      - It’s a good idea to double-check the ControlNet settings, especially the resolution and how the preprocessor reads the pose data. Sometimes tweaking those can keep you from getting just a single-centred result.
      I hope this helps.

  • @AnthonyTori
    @AnthonyTori 2 months ago

    It would be nice if we could upload a 3D file like a glb so the software has every angle of the model. It would make consistent characters a lot easier.

    • @goshniiAI
      @goshniiAI 2 months ago

      .glb support would advance the creation of consistent characters. That might just be a possibility in the future!

  • @aaagaming2023
    @aaagaming2023 2 months ago

    Is there an automated way in Comfy to split the character sheet into individual images to train LoRAs on the character?

    • @goshniiAI
      @goshniiAI 2 months ago +1

      Yes, you can get individual images by using the image crop node.
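
      To illustrate the crop step, here is a small stdlib-only sketch that computes the crop boxes for a character sheet laid out in a uniform grid; each box can then be fed to an image crop node or to any image library. The uniform-grid layout is an assumption, so adjust cols and rows to how your sheet actually came out:

      ```python
      # Compute (left, upper, right, lower) crop boxes for a character sheet
      # arranged as a uniform cols x rows grid of poses. Pass each box to an
      # image crop node (or e.g. Pillow's Image.crop) to get one pose image.
      def grid_boxes(width: int, height: int, cols: int, rows: int) -> list[tuple[int, int, int, int]]:
          tile_w, tile_h = width // cols, height // rows
          boxes = []
          for r in range(rows):
              for c in range(cols):
                  boxes.append((c * tile_w, r * tile_h,
                                (c + 1) * tile_w, (r + 1) * tile_h))
          return boxes

      # Example: a 1536x1024 sheet with 3 poses across and 2 rows of poses
      for box in grid_boxes(1536, 1024, cols=3, rows=2):
          print(box)
      ```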

  • @stevenls9781
    @stevenls9781 21 days ago

    Is there a way with this workflow to use an image of a person as part of the output character sheet?

    • @goshniiAI
      @goshniiAI 21 days ago

      Hello Steven, the answer is sadly no for this workflow. I have explained in the next tutorial how to achieve this with the IP Adapter, but it uses SDXL rather than FLUX due to the IP Adapter's consistency.
      To work from an input image accurately, I recommend creating a character sheet for your character concept and then training a LoRA using your images.

    • @stevenls9781
      @stevenls9781 21 days ago

      @@goshniiAI Oh ok, that works also. Doooo you happen to have a link to a LoRA training video :D

    • @goshniiAI
      @goshniiAI 20 days ago

      ​@@stevenls9781 Not just yet. For now, I do not have a video of LoRA training with FLUX, but I am considering making one to share the process.
      You can check out this reference video that might assist you: th-cam.com/video/Uls_jXy9RuU/w-d-xo.htmlsi=EJoLucxVyOFFQKjB

  • @lordmo3416
    @lordmo3416 a month ago +1

    Would you be so kind as to give the workflow for using an existing image or character? Thanks

    • @goshniiAI
      @goshniiAI a month ago +1

      Yes, hopefully the tutorial that follows will clarify and provide that.

    • @lordmo3416
      @lordmo3416 a month ago

      @@goshniiAI can't wait

  • @m3dia_offline
    @m3dia_offline 2 months ago

    Are you going to follow up on this video with how to use this character sheet to put the characters in different scenes/videos?

    • @goshniiAI
      @goshniiAI 2 months ago +1

      Thanks for the suggestion! I'll check it out since you mentioned it.

  • @cray989
    @cray989 21 days ago

    I'm getting an error when I try to use the DWPreprocessor (and several others). The message says:
    # ComfyUI Error Report
    ## Error Details
    - **Node Type:** AIO_Preprocessor
    - **Exception Type:** huggingface_hub.utils._errors.LocalEntryNotFoundError
    - **Exception Message:** An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
    ## Stack Trace
    My internet connection is fine. Any advice?

    • @goshniiAI
      @goshniiAI 21 days ago

      Sorry to hear that; I would recommend updating any of your nodes as well as running an update for ComfyUI.

  • @dmitryboldyrev7364
    @dmitryboldyrev7364 2 months ago +1

    How to create multiple consistent cartoon characters interacting with each other on different scenes?

    • @goshniiAI
      @goshniiAI a month ago

      Hopefully soon, in the next post

  • @muggyate
    @muggyate a month ago

    I find that if you add another generation step beforehand to tell the AI to generate a design sheet for a mannequin, you can skip the part where you have to load an image into the ControlNet preprocessor.

    • @goshniiAI
      @goshniiAI a month ago

      Thank you for sharing that approach with everyone! awesome tip!

  • @demiurgen3407
    @demiurgen3407 2 months ago

    This might be a dumb question but what do you do with a character sheet? You have a character in different poses, then what? Do you animate it? Do you use it for something else?

    • @goshniiAI
      @goshniiAI 2 months ago +1

      Not a dumb question at all! Character sheets are often used in animation, game development, and concept art to showcase a character in various poses or expressions, making it easier for artists or animators to reference and maintain consistency.
      It’s mostly a reference tool to visualize how the character moves and looks from different angles. If you’re looking to bring these poses to life, you can definitely use them as a foundation for animation or even export them into 3D modeling software.

    • @demiurgen3407
      @demiurgen3407 2 months ago +1

      @@goshniiAI Cool! Maybe you could do a video on that? How to move from a character sheet to a 3D model :)

  • @josemasisvalverde8646
    @josemasisvalverde8646 12 days ago

    Can I use this for SDXL?

    • @goshniiAI
      @goshniiAI 11 days ago

      Yes, you can; just make sure to use the correct SDXL models for ControlNet, the checkpoint loader, and other SDXL-compatible nodes.

  • @lefourbe5596
    @lefourbe5596 2 months ago +1

    I was versed in character-sheet making for over a year. However... I have yet to succeed at making the single-picture LoRA character that would produce the reference sheet of the original concept decently in one go.
    Your take is basically the Mick Mumpitz workflow with Flux. It's good as it is.

    • @goshniiAI
      @goshniiAI 2 months ago +1

      I'm really glad you found this workflow helpful and shared your experience! Flux really kicks it up a notch, and when you combine it with a refined approach like Mick Mumpitz’s, it really gives it that extra edge.

  • @edmartincombemorel
    @edmartincombemorel 2 months ago

    Great stuff, but there is definitely a missed opportunity to crop each pose and redo a pass of KSampler on it; you could even crop your ControlNet to fit the same pose.

    • @goshniiAI
      @goshniiAI 2 months ago

      You're absolutely right: cropping each pose and running it through KSampler again could really refine the details and give even more control over the final result. I’ll definitely keep that in mind for future tutorials! I appreciate the insight.

  • @k.jatuphat9785
    @k.jatuphat9785 2 months ago

    How do I add LoRA to this workflow? Please. I need LoRA for my character's face and ControlNet for my character's pose.

    • @goshniiAI
      @goshniiAI 2 months ago +1

      To achieve the LoRA results, place the LoRA node between the load checkpoint and the prompt nodes. You can also follow this tutorial on how to use Flux with LoRA: th-cam.com/video/HuDU4DlZid8/w-d-xo.htmlsi=-l4wISSzrH0i1wmp

  • @Josegonzalez-gv4ws
    @Josegonzalez-gv4ws 23 days ago

    And how do I use it to generate images after this setup?

    • @goshniiAI
      @goshniiAI 21 days ago

      You can use a character sheet to train your own LoRA for a consistent character that you can reuse.

  • @Larimuss
    @Larimuss a month ago

    But how do we make different poses and profile photos for LoRAs etc.? Part 2 would be awesome 😂 This is a great workflow and video, thanks!

    • @goshniiAI
      @goshniiAI a month ago

      I'm glad you enjoyed the workflow and video! I appreciate your suggestion to create various poses and profile photos for LoRAs, and I will take it into consideration. True enough, Part 2 seems like a really good idea! :)

  • @skybluexox
    @skybluexox a month ago

    I can’t use the AIO Aux Preprocessor, how do I fix this? 😢

    • @goshniiAI
      @goshniiAI a month ago

      No need to worry. You can use separate preprocessors for each model, and everything will still work.

  • @Larimuss
    @Larimuss a month ago

    Nice, thanks. But what about when we want to use the character in a generation?

    • @goshniiAI
      @goshniiAI a month ago

      Yes, you can, here is a follow-up video that explains the process. th-cam.com/video/OHl9J_Pga-E/w-d-xo.html

  • @greenlanternA123
    @greenlanternA123 2 months ago

    Your UI is very nice. I still have the old look; how do I update to get your UI?

    • @goshniiAI
      @goshniiAI 2 months ago +1

      Please see my video here; towards the end, I explained the settings: th-cam.com/video/PPPQ1SANScM/w-d-xo.htmlsi=uMK8VUuxhCxyIerW

  • @adult85a1
    @adult85a1 a month ago

    Sir! Which GPU are you using? And please suggest a cloud GPU service site!

    • @goshniiAI
      @goshniiAI a month ago

      I'm using an NVIDIA RTX 3060 for my workflow. For cloud GPU services, I recommend trying out RunPod or Vast.ai; both offer flexible pricing and options for FLUX and ControlNet if your local hardware isn't enough.

  • @JustinCiriello
    @JustinCiriello 2 months ago

    It all works except the Face Detailer. It just gets stuck in a loop when it gets to that step. Endless loop with no error. Refreshing and Restarting did not help. Everything is fully updated.

    • @goshniiAI
      @goshniiAI 2 months ago +1

      Yes, that's correct. The face detailer continuously refines the face details until they are complete. Keep it running until it generates the final image. You got it right!

  • @AIandTech-dq4iy
    @AIandTech-dq4iy 2 months ago +1

    I can't find the ControlNetApply SD3 and HunyuanDIT nodes. Where can I install them?

    • @goshniiAI
      @goshniiAI 2 months ago

      ControlNetApply SD3 is one of the key nodes in ComfyUI. Make sure ComfyUI is updated so that it becomes available.

    • @goldkat94
      @goldkat94 2 months ago

      @@goshniiAI I can't find it either. Auxiliary Preprocessors is installed and "ComfyUI is already up to date with the latest version."

    • @bluemodize7718
      @bluemodize7718 2 months ago +1

      @@goshniiAI I already have comfy and packages up to date and still can't find it

    • @Simjedi
      @Simjedi 2 months ago

      @@bluemodize7718 It has changed. It's been renamed to "Apply Controlnet with VAE"

    • @fedesalmaso
      @fedesalmaso a month ago

      @@bluemodize7718 same here

  • @ScaleniumPersonaleAI
    @ScaleniumPersonaleAI 21 days ago

    Bro, this video is great, but some nodes are missing... how should we fix this?

    • @goshniiAI
      @goshniiAI 21 days ago

      If you see missing nodes in your workflow, it means you have not yet installed the custom nodes. To install the missing nodes, go to Manager > Install Missing Nodes and then install the ones that appear.
      That will help to find the missing nodes and fix them.

  • @wrillywonka1320
    @wrillywonka1320 24 days ago

    Update on the ControlNetApply SD3 node: supposedly it has been renamed to Apply ControlNet with VAE.

    • @goshniiAI
      @goshniiAI 21 days ago

      Thank you for making us aware. We appreciate you watching out for that.

  • @sanbait
    @sanbait a month ago

    What is your ComfyUI panel in the browser?

    • @goshniiAI
      @goshniiAI a month ago

      Hello there, I have explained that towards the end of this video: th-cam.com/video/PPPQ1SANScM/w-d-xo.htmlsi=_KhvMhp30g_h2rxx
      I hope this helps.

  • @CsokaErno
    @CsokaErno a month ago

    This "ControlNetApply SD3 and HunyuanDiT" node is nowhere to be found :/ I updated everything.

    • @goshniiAI
      @goshniiAI a month ago

      The "Apply SD3" node has been renamed to "Apply Controlnet With VAE" in the latest updates. The process to find it remains the same, but the node has been renamed.

  • @ZergRadio
    @ZergRadio 2 months ago

    Wow, I really enjoyed this vid.
    I am an absolute beginner.
    I am confused. In the video you have your character in many poses and improved the details.
    How would you take just one of those poses from the character (say Octopus chef) and put it in a new environment?
    Do you have a video on that?

    • @goshniiAI
      @goshniiAI 2 months ago +1

      I'm really glad you enjoyed the video! It's awesome that even as a beginner, you're already asking great questions. If you want to take one of those poses, like our "Octopus Chef," and put it into a new environment, you can easily combine FLUX and ControlNet to lock in the pose while changing the background.
      I haven't made a specific video on that yet, but it's a good idea for a future tutorial, and I'll definitely create a detailed walkthrough soon.

  • @RxAIWithDrJen
    @RxAIWithDrJen 2 months ago

    I have no idea what I'm missing to get ControlNetApply SD3 and HunyuanDiT. It does not update and does not show in the Manager... so can anyone shed light? New to SD and Comfy. Thanks.

    • @goshniiAI
      @goshniiAI 2 months ago

      The "Apply SD3" node has been renamed to "Apply Controlnet With VAE" in the latest updates. The process to find it remains the same, but the node has been renamed.

    • @RxAIWithDrJen
      @RxAIWithDrJen 2 months ago

      @@goshniiAI Thanks! And thank you for an excellent video

    • @goshniiAI
      @goshniiAI 2 months ago

      @@RxAIWithDrJen You are most welcome. Thank you for being here

  • @stevenls9781
    @stevenls9781 26 days ago

    Can we download that workflow? Maybe I missed that in the vid.

    • @goshniiAI
      @goshniiAI  26 days ago +1

      Yes, you can use the link in the description.

    • @stevenls9781
      @stevenls9781 26 days ago

      @@goshniiAI oh man... if only I used my eyes. thanks for the reply.

    • @stevenls9781
      @stevenls9781 25 days ago

      ah I was looking for a JSON file or something, it's a PNG to use as a ref and copy into Comfy

    • @goshniiAI
      @goshniiAI  25 days ago +1

      @@stevenls9781 True! A PNG or JSON file can be used in the same way.
      The benefit of using a PNG workflow is that you can see a preview of the node structure or layout. You only need to drag the PNG file into ComfyUI to load the workflow.

    • @stevenls9781
      @stevenls9781 25 days ago

      @@goshniiAI ah gotcha, I was just looking at them as an image preview and thought cool I can create it based on that. Now after doing it manually I have dragged the png into Comfy and it loaded.. hahahah well good practice following the image :D
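To expand on the replies above about PNG workflows: ComfyUI embeds the graph as JSON in the PNG's metadata, which is why dropping the image onto the canvas restores the whole workflow. The sketch below is a minimal, stdlib-only reader for that metadata, assuming the common case of an uncompressed `tEXt` chunk keyed `workflow` (ComfyUI also stores the executable graph under `prompt`; the function name here is just illustrative):

```python
import json
import struct

def extract_workflow(png_bytes):
    """Return the workflow JSON embedded in a PNG's tEXt chunks, or None."""
    assert png_bytes[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(png_bytes):
        # Each chunk: 4-byte length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt payload is: keyword, NUL separator, Latin-1 text.
            keyword, _, text = data.partition(b"\x00")
            if keyword == b"workflow":
                return json.loads(text.decode("latin-1"))
        pos += 8 + length + 4  # advance past header, data, and CRC
    return None
```

If this returns `None` on a real export, the data may be stored in an `iTXt` chunk or under the `prompt` keyword instead.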

  • @ainaopeyemi339
    @ainaopeyemi339 2 months ago

    So I have a question: rather than prompting everything in a single box, can we have a different workflow for each pose? For example, a sitting-pose workflow, a standing-pose workflow, and a jumping-pose workflow, generated individually rather than all in one box.
    Also, is there a way to make sure the character you are prompting remains the same over time? For example, with the octopus man you prompted: say I want to use him for a children's story book, and I don't want to prompt all the poses at once. I could prompt him sitting today, standing tomorrow, and eating next week, and the character would remain the same throughout.
    Thank you

    • @Muz889
      @Muz889 2 months ago

      What he showed in the video is called a character sheet. You can then use this character sheet as a reference image to tell Flux what a character looks like and prompt any pose or action you want for this character specifically. What you should now research is how to use character sheets with Flux.

    • @goshniiAI
      @goshniiAI  2 months ago

      Thanks for explaining and providing the extra information

  • @Huguillon
    @Huguillon 2 months ago

    How do you get that new interface? I updated everything and I still have the old interface.

    • @Huguillon
      @Huguillon 2 months ago

      Nevermind, I found it

    • @goshniiAI
      @goshniiAI  2 months ago

      Awesome! I'm glad you found it.

    • @Huguillon
      @Huguillon 2 months ago

      @@goshniiAI By the way, Amazing video, Thank you

    • @goshniiAI
      @goshniiAI  2 months ago

      @@Huguillon i appreciate it, You are welcome

  • @felipecesarlourenco8955
    @felipecesarlourenco8955 months ago

    How do I add a simple LoRA?

    • @goshniiAI
      @goshniiAI  months ago

      Hello there, you can view my guide on adding a LoRA in my previous videos for FLUX: th-cam.com/video/HuDU4DlZid8/w-d-xo.htmlsi=FzSSqoe6OV_56l55

  • @tmlander
    @tmlander months ago

    Why not share the JSON for Comfy? I went to Gumroad and downloaded your files but was surprised there is no JSON, just an image of your setup!

    • @devnull_
      @devnull_ months ago +1

      You sure the image didn't have the comfy workflow stored into it? Did you try dropping it into Comfy UI?

    • @goshniiAI
      @goshniiAI  months ago +1

      Yes, you are right: the PNG image works the same as a JSON file. You only have to import it or drag and drop it into ComfyUI.

    • @tmlander
      @tmlander months ago

      @@goshniiAI I saw that later... sorry I thought comfy only accepted json... thanks for your work!

    • @goshniiAI
      @goshniiAI  months ago

      @@tmlander you are most welcome, thank you for sharing an update.

  • @bushwentto711
    @bushwentto711 2 months ago

    Cool but now how can we use that to create a consistent character in a scene with flux?

    • @goshniiAI
      @goshniiAI  2 months ago

      I am looking into it, and hopefully we will have a video guide on it soon.

    • @bushwentto711
      @bushwentto711 months ago

      @@goshniiAI Cheers mate keep up the great content

  • @RagonTheHitman
    @RagonTheHitman 2 months ago +1

    I can't use "DWPose" as a preprocessor. I get some strange errors. It could have something to do with the onnxruntime-gpu / CUDA version. Someone wrote: "The error message mentioned above usually means DWPose, a Deep Learning model, and more specifically, a ControlNet preprocessor for OpenPose within ComfyUI's ControlNet Auxiliary Preprocessors, doesn't support the CUDA version installed on your machine." I tried for 4 hours to fix it; ChatGPT couldn't help, and neither could anyone on the Internet..... :(

    • @JustinCiriello
      @JustinCiriello 2 months ago +3

      I can't either. Try using OpenposePreprocessor instead.

    • @RagonTheHitman
      @RagonTheHitman 2 months ago +2

      @@JustinCiriello Yes, this is working :)

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Thank you for providing the additional information.

    • @brandoncurrypitcher1945
      @brandoncurrypitcher1945 months ago +1

      @@JustinCiriello Thanks, I had the same issue
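A note for others landing on this DWPose thread: besides switching to the OpenPose preprocessor as suggested above, the underlying onnxruntime install is often the culprit, e.g. a CPU-only build or one compiled against a different CUDA version than your system has. A hedged sketch of the usual cleanup, run with the exact Python environment ComfyUI uses (paths and package choice depend on your setup, so check your CUDA version against the onnxruntime-gpu requirements first):

```shell
# Run these with ComfyUI's own Python (its venv or embedded interpreter).
# Remove any mixed CPU/GPU installs, then reinstall the GPU build:
pip uninstall -y onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu

# If CUDA errors persist, the plain CPU build at least keeps DWPose
# working, just more slowly:
# pip install onnxruntime
```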

  • @fungus98
    @fungus98 2 months ago

    So it appears the Apply SD3 node has been renamed to Apply With VAE?

    • @goshniiAI
      @goshniiAI  2 months ago +1

      It is still SD3, as I checked.

    • @fungus98
      @fungus98 2 months ago

      @@goshniiAI still can't get it to come up on mine, but "apply" and "apply with vae" are the exact same nodes it looks like. At least, I can't see a difference

    • @goshniiAI
      @goshniiAI  2 months ago +2

      Thank you for pointing that out. It looks like the "Apply SD3" node has been renamed to "Apply ControlNet With VAE" in the latest updates.

    • @goshniiAI
      @goshniiAI  2 months ago +1

      @@fungus98 Yeah, you are right, and thank you for sharing your observation

  • @sanbait
    @sanbait months ago

    but what about non-human characters?
    Animals?

    • @goshniiAI
      @goshniiAI  months ago

      For animals, you'd need a ControlNet animal-pose model, but I'm not sure one is currently available for Flux.

    • @sanbait
      @sanbait months ago

      @@goshniiAI How can I use a custom skeleton?
      I have a game character, like a Pokémon.

  • @mr.entezaee
    @mr.entezaee 2 months ago

    Does anyone know how to fix this problem?
    Failed to restore node: Ultimate SD Upscale
    Please remove and re-add it.

    • @goshniiAI
      @goshniiAI  2 months ago +1

      It seems there might be a mismatch in the workflow. Try deleting the node and adding it back from scratch. If that doesn’t work, just make sure you have the latest version of the node installed.

    • @mr.entezaee
      @mr.entezaee 2 months ago

      @@goshniiAI Yes, that's it, but I don't know which node to delete. How do I know which node it is?

  • @botlifegamer7026
    @botlifegamer7026 2 months ago

    There is no option for ControlNetApply SD3.

    • @goshniiAI
      @goshniiAI  2 months ago

      ControlNetApply SD3 is a core node in ComfyUI. Make sure ComfyUI is updated for it to become available.

    • @goshniiAI
      @goshniiAI  2 months ago

      Please do the same by updating ComfyUI.

    • @botlifegamer7026
      @botlifegamer7026 2 months ago

      @@goshniiAI it's not there even after updates

    • @goshniiAI
      @goshniiAI  2 months ago

      @HelloMeMeMeow Yeah the workflow is now available.

    • @botlifegamer7026
      @botlifegamer7026 2 months ago

      @@goshniiAI Your workflow has a ControlNetApply With VAE node, not the SD3 one you show in yours. Or did you rename it?

  • @wrillywonka1320
    @wrillywonka1320 2 months ago

    Can this be done in Forge UI?

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Yeah, hopefully I'll make a tutorial video for that.

    • @wrillywonka1320
      @wrillywonka1320 2 months ago

      @@goshniiAI That'd be awesome! I need that badly.

  • @victorestomo729
    @victorestomo729 2 months ago

    Can I add a Load LoRA node?

    • @goshniiAI
      @goshniiAI  2 months ago

      Yeah, that can be done. I explained how to do it in the link here: th-cam.com/video/HuDU4DlZid8/w-d-xo.htmlsi=gC-go2q4ylLSm6Or

  • @amirhossein1108
    @amirhossein1108 26 days ago

    Is this free?

    • @goshniiAI
      @goshniiAI  26 days ago

      Yes, you are welcome to use the link in the description.

  • @Thefishos
    @Thefishos 2 months ago

    Very nice work! Thanks a lot, man. I know it takes a lot of time to make videos like this, but is there any chance you could make a video with a workflow like this one, but with Flux of course:
    th-cam.com/video/849xBkgpF3E/w-d-xo.htmlsi=GZwbPr4nuI8dvvyn
    That would be amazing!!!
    🙏

    • @goshniiAI
      @goshniiAI  24 days ago +1

      Hi there, I appreciate your suggestion and the reference link. I will consider that.

  • @bhuvanaib.9731
    @bhuvanaib.9731 10 days ago

    Hi, it's stuck on the Load Upscale Model node. I believe I don't have "4x-Ultrasharp.pth". How do I get that, please?

    • @goshniiAI
      @goshniiAI  10 days ago

      The upscale models can be downloaded through the Manager, or you can watch the video linked here to guide you: th-cam.com/video/PPPQ1SANScM/w-d-xo.htmlsi=M-fMMvE6-kEzr5u8

  • @hasstv9393
    @hasstv9393 2 months ago

    Can anyone tell me the use case for these character images?

    • @goshniiAI
      @goshniiAI  2 months ago

      Awesome question! Just picture game development, animation, or storyboarding. When you have consistent images from different angles, it makes sure your character looks the same from any perspective. This makes it easier to animate, storyboard, or even print in 3D. It's also super helpful for storybooks or visualizing characters in dynamic scenes. I hope that gives some inspiration!

    • @hasstv9393
      @hasstv9393 2 months ago

      @@goshniiAI Is it possible to make 3D models with AI from these images?

    • @goshniiAI
      @goshniiAI  2 months ago

      @@hasstv9393 Absolutely! There are good AI tools for converting 2D concepts to 3D.
      If you're looking for AI-powered choices, you can use 3D A.I. Studio, Meshy, Rodin, Tripo 3D, or Genie by Luma Labs to produce 3D models directly from images, while platforms like Ready Player Me allow you to build 3D avatars using an image input.

  • @Ozstudiosio
    @Ozstudiosio 4 days ago

    Perfect workflow! Could you send me your contact? We need to speak about some business work.

    • @goshniiAI
      @goshniiAI  4 days ago

      Thank you! Please send an email to this address: mylifeisgrander@protonmail.com.