Use Llama3.2 to "Chat" with Flux.1 in ComfyUI with 8GB+ VRAM

  • Published 21 Dec 2024

Comments • 86

  • @MarceloPlaza
    @MarceloPlaza a month ago +1

    Great workflow, thanks for sharing.

  • @grahamulax
    @grahamulax 2 months ago +1

    This is the video I've been waiting for! Downloaded 3.2 like a week ago and sat on it. FINALLLLLLY THE TIME HAS COME!

    • @onlinehorseplay
      @onlinehorseplay a month ago

      4 weeks later, can we get a quick count on PNGs with 'emma watson' in the name? I'm doing a comparison as a sanity check.

  • @ajedi6127
    @ajedi6127 2 months ago +17

    Waaaiiiit a second, you're telling me that ComfyUI is now actually comfortable to use? ...Impressive.

    • @IdRadical
      @IdRadical 2 months ago

      ComfyUI is great; it forces people to learn. If you've had the pleasure of trying to run Flux on ComfyUI with an AMD GPU, you'll know there was plenty of support on GitHub and Hugging Face, with users helping each other. We should give thanks for the coder friends we made along the way. Python is awesome. With this shift in the number of users in the AI game, we are forced to step our game up.

  • @p_p
    @p_p a month ago

    I installed Ollama from the website with the exe installer; not really sure if this is running locally or not 🙄🙄

    • @NerdyRodent
      @NerdyRodent a month ago

      Yup, as it’s installed & running on your pc it’s running locally! 👍🏼
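
For anyone wondering the same thing: a quick way to confirm Ollama really is running locally is to query its HTTP API on the default port. A minimal sketch, assuming a standard install listening on localhost:11434:

```python
# Confirm the local Ollama service is reachable and list installed models.
# Assumes the default install, which serves on http://localhost:11434.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Ollama is running locally. Installed models:", models)
```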

  • @urbanthem
    @urbanthem 2 months ago +4

    Hello! Been following you from the start, but this is straight up amazing.

  • @amkire65
    @amkire65 2 months ago +8

    I quite like that complicated, messy version of ComfyUI... it makes me look clever knowing how to use it if anyone sees me working on some images. :) I'll certainly give this a try once I fix my computer.

    • @Bicyclesidewalk
      @Bicyclesidewalk 2 months ago

      Yeah, I like the original ComfyUI as well... I've yet to sail into these uncharted waters... lol~

  • @scobelverse
    @scobelverse 2 months ago +1

    this was really well done

  • @quercus3290
    @quercus3290 2 months ago +2

    it almost looks like auto1111, well done.

  • @rifz42
    @rifz42 2 months ago

    I found the video "Ollama does Windows?!?" by Matt Williams, which helped me get Ollama working, and I was able to use the workflow. I learned a lot getting it going.

  • @devnull_
    @devnull_ 2 months ago +2

    Thanks! I thought that rat was some Gordon Freeman wannabe :D

  • @juanjesusligero391
    @juanjesusligero391 2 months ago

    Oh, Nerdy Rodent, 🐭🎵
    he really makes my day, ☀😊
    showing us AI, 💻🤖
    in a really British way. ☕🎶

  • @lucvaligny5410
    @lucvaligny5410 2 months ago +1

    Can't get the API LLM general link to work, while the basic "start with Ollama" workflow from LLM-party is working, but there's so little explanation of how it works that it's a pity.
    I had an error when first loading the Rodent workflow, but everything fell into place after installing the missing nodes.

    • @NerdyRodent
      @NerdyRodent 2 months ago

      If you want to change from the LLM party node API loader, you can check out the GitHub page for more information on whichever options you’d like to use instead. It does indeed support a lot 😊

  • @build.aiagents
    @build.aiagents 2 months ago +1

    Phenomenal

  • @freestylekyle
    @freestylekyle 2 months ago

    I had some problems getting it to work; I did an update and refresh, but no go. In the end I gave ChatGPT the output and asked it how to fix the errors. Now I've got it going, so maybe give that a try if you're having problems.

  • @Larimuss
    @Larimuss 2 months ago +5

    Hmm, with my meager 4070 Ti's 12 GB of VRAM, wouldn't it be better to run a GGUF Llama in RAM so the image gen doesn't compete with Llama? Or does it load into RAM every time you queue? I'm guessing a GGUF might not be out yet for this model, though.

    • @NerdyRodent
      @NerdyRodent 2 months ago +1

      With 12 GB you'd want a GGUF that is less than 9 GB. With a 1 GB Llama 3.2, that should probably fit!

    • @NerdyRodent
      @NerdyRodent 2 months ago

      @@malditonuke Yup, the small size of llama makes it great!
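
As a rough back-of-envelope check of that sizing advice (the file sizes below are illustrative guesses, not measurements):

```python
# Rough VRAM budget for a 12 GB card: Flux GGUF + small Llama 3.2 + overhead.
# All sizes are illustrative assumptions for this sketch, not measured values.
flux_gguf_gb = 8.5    # a quantised Flux GGUF under the suggested 9 GB
llama_gguf_gb = 1.3   # a small (1B-class) Llama 3.2 GGUF
overhead_gb = 1.5     # text encoders, VAE, activations -- very approximate
vram_gb = 12.0

total = flux_gguf_gb + llama_gguf_gb + overhead_gb
verdict = "should fit" if total <= vram_gb else "too big; pick a smaller quant"
print(f"Estimated {total:.1f} GB of {vram_gb:.0f} GB VRAM: {verdict}")
```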

  • @saltygamer8435
    @saltygamer8435 16 days ago

    Can you create a new Docker image for this on RunPod, please?

    • @NerdyRodent
      @NerdyRodent 15 days ago

      I would imagine so. Go for it!

  • @hungi
    @hungi a month ago

    24 GB VRAM recommended -- what's the equivalent in Apple silicon? Would an M3 with 16 GB RAM suffice for this exact model?

    • @NerdyRodent
      @NerdyRodent a month ago

      I have no idea about Mac stuff, but your best bet for anything AI is Linux + Nvidia!

  • @PugAshen
    @PugAshen 2 months ago +2

    Great work on a nice workflow. But like many others have mentioned, it will not even open. Clean install of ComfyUI, installed the packages mentioned on your page, but unfortunately nothing happens. Any chance of a check-up?

    • @NerdyRodent
      @NerdyRodent 2 months ago +1

      Start by clicking “update all” in manager and restarting to ensure you’ve got the latest version! Should be Oct 12 as a minimum

  • @MilesBellas
    @MilesBellas 2 months ago +3

    Open source is becoming amazing!
    NR works in R&D at "The Mouse"?

  • @MissingModd
    @MissingModd 2 months ago

    Nerdy's famous! Wow!

    • @NerdyRodent
      @NerdyRodent 2 months ago

      😉👋

  • @DaveTheAIMad
    @DaveTheAIMad 2 months ago +1

    Can it be used with Text Generation WebUI? Ollama is awful, lol: there's no way to use it across a network, it won't load your already-downloaded LLMs without converting and duplicating them, and it's a pain to set up for a new folder.
    I love your videos and find them informative, though it does seem you're trying to turn ComfyUI into Auto1111, lol. Complexity is not as much an enemy as tools that over-simplify can be... though perhaps that's just a personal standpoint.

  • @dkamhaji
    @dkamhaji 2 months ago

    Is there a setting to see the sampling progress as it's happening, so that you can cancel it if it's not what you want? Not sure if it's the SamplerCustomAdvanced node that doesn't show you the progress.

  • @MohammedAli-tq8ln
    @MohammedAli-tq8ln 2 months ago

    Very useful!

  • @DezorianGuy
    @DezorianGuy 2 months ago

    Can I swap out the Llama 3.2 and use another variant of the 3.2 models?

    • @NerdyRodent
      @NerdyRodent 2 months ago +1

      Yup. Press 2 to go to the LLM settings and change there like in the video!

    • @DezorianGuy
      @DezorianGuy 2 months ago

      @@NerdyRodent What exactly should I change? Which node?

  • @onlinehorseplay
    @onlinehorseplay a month ago

    Thanks for using the quotes, and I feel like all things DIY AI should be in quotes, because you're gonna git pipped as a Windows newb (I learned most of it here, though).

  • @antonpictures
    @antonpictures 2 months ago +1

    I wish I had the hardware

  • @NeptuneGadgetBR
    @NeptuneGadgetBR 2 months ago

    Hi, you've got a subscriber here; congratulations on the amazing work. I have an issue: after upscaling, it doesn't look perfect, the edges are a bit blurry. Any idea how to solve it? I enabled hi-res... but it didn't fix the issue.

    • @NerdyRodent
      @NerdyRodent 2 months ago +1

      Hires mode will do the upscale for you, yes

  • @dracothecreative
    @dracothecreative 2 months ago

    Hey all, so I am using Pinokio for my ComfyUI stuff and everything works fine except Ollama. I installed it, but it's in my user files, and I also installed it in Pinokio, but I still get this: Error code: 404 - {'error': {'message': 'model "llama3.2" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}. I know what to do, I just don't know the parent folder it needs to be in... anyone know?

    • @NerdyRodent
      @NerdyRodent 2 months ago +1

      Did you try pulling it first like in the video?

    • @dracothecreative
      @dracothecreative 2 months ago

      @@NerdyRodent Yeah, and it works now; only Llama is in a weird folder, unrelated. Thanks!

    • @rifz42
      @rifz42 2 months ago

      @@NerdyRodent I have git installed but don't know where you ran the pull command; what folder? I tried in my ComfyUI install folder and with the git cmd window.

    • @NerdyRodent
      @NerdyRodent 2 months ago

      @@rifz42 git? No, it’s “ollama pull” like in the video 😉
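
In other words, the pull uses the Ollama CLI itself (not git) and can be run from any folder, since Ollama manages its own model store. A minimal scripted equivalent, assuming the `ollama` binary is on your PATH:

```python
# Pull the Llama 3.2 model via the Ollama CLI, then list models to confirm.
# The working directory doesn't matter; Ollama stores models itself.
import subprocess

subprocess.run(["ollama", "pull", "llama3.2"], check=True)
subprocess.run(["ollama", "list"], check=True)
```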

  • @purposefully.verbose
    @purposefully.verbose 2 months ago

    So it works well, but it is loading this huge dev model every time... slowly, on a 3090. Is there some hidden setting to keep it loaded?

  • @fionaliath6326
    @fionaliath6326 2 months ago

    Weirdly, when I attempt to load these workflows in ComfyUI, literally nothing happens. No errors, no load, just leaves whatever was already open there. Not even a message in the Terminal. I thought it might be my ComfyUI having too many old and/or conflicting nodes, so I did a clean reinstall of the portable version with only the Manager, but that didn't change anything. One of my workflows (in .png format) does load, but the .jsons I downloaded from the repository do absolutely nothing. I re-downloaded them, in case they broke somehow, but that didn't change anything either, and I opened them up to make sure they have contents, and they do (quite a lot). Dunno what's going on there, but it's too bad, because this workflow looks fun.

  • @randymonteith1660
    @randymonteith1660 2 months ago

    Error occurred when executing ImpactControlBridge:
    No module named 'comfy_execution'

    • @NerdyRodent
      @NerdyRodent 2 months ago

      My guess would be an old version of that custom node is installed. Click “update all” in manager to ensure you’re up to date!

  • @13-february
    @13-february 2 months ago

    I watched this video with interest and wanted to try this amazing LLM ability. Unfortunately, I have not been able to get your workflow, the one I found on Hugging Face, to work. Many nodes in it do not have connections, and I had to guess where to attach them. I managed to connect some nodes, but some did not give in; for example, the value slot of the Control Bridge node remained lit red. I had to set this whole group of nodes to bypass, because I don't even understand what they are for.
    After that, the workflow started working, but the generated image did not match the prompt at all, as if the sampler did not see the prompt, although the LLM group is working properly and generates the desired text. I don't understand how to make the workflow work correctly. It would be very kind of you to fix the LLM workflow on Hugging Face.

    • @NerdyRodent
      @NerdyRodent 2 months ago +2

      If you change the way everything is connected then it will definitely work in unexpected ways! The best thing is to simply enter your prompt, and then press queue 😎

    • @13-february
      @13-february 2 months ago

      @@NerdyRodent It doesn't work because many nodes have lost connections, and the queue just stops and does not do anything. So can you please check the workflow and fix it?

    • @NerdyRodent
      @NerdyRodent 2 months ago +1

      @@13-february To fix your environment try updating ComfyUI, or go with a fresh install!

    • @rifz42
      @rifz42 2 months ago

      @@13-february You have to work through the nodes one by one. I did, but it took a long time. Do you have the GGUF, Hyper-Flux, a VAE, and CLIP 1 and 2 (ViT-L-14 text)?

    • @13-february
      @13-february 2 months ago

      @@rifz42 Yes, of course. I am well versed in ComfyUI, and I'm pretty sure the workflow is the problem. I downloaded the one called Flux-Simple-LLM_v0 from Hugging Face and installed all the missing nodes, and Ollama of course. But the problem is that some nodes' slots are not connected to anything. For example, the value slot of the Control Bridge node has no connections, and an error occurs during generation. In addition, some image and model connections were also missing in this workflow. Perhaps the author provides a correct workflow on Patreon, but the one that I found on Hugging Face is completely damaged.

  • @MilesBellas
    @MilesBellas 2 months ago

    Nvidia Sana test next ?
    😊

  • @4thObserver
    @4thObserver 2 months ago

    Finally, spaghetti monster begone!

  • @fullflowstudios
    @fullflowstudios 2 months ago +1

    This is sooo nerdy and sooooo weird. The workflow you show is nowhere to be found in my Comfy install when I browse the 4 templates that are offered. What miracle do you perform to load this new layout into the program?

  • @davidberserker6625
    @davidberserker6625 a month ago +1

    You should show the link connections of the nodes so we understand the workflow better; it doesn't make sense to show the nodes without the connections.

    • @NerdyRodent
      @NerdyRodent a month ago

      No. The point of the workflow is that you don’t see or care about any of the spaghetti 😃

    • @davidberserker6625
      @davidberserker6625 a month ago

      @@NerdyRodent Yes, I got that, but it would be better if you showed the links to explain the logic. I know it's a very simple workflow, but for people learning Comfy it would be helpful, even if you hide all the links at the end. Please take it as constructive feedback, just a thought for your future videos ;)

  • @chochlik223
    @chochlik223 2 months ago

    Would you be able to create a workflow in ComfyUI in which, at the very beginning, you add a photo of a character (let's say a photo in just underwear), then create a so-called bra-and-panties mask? From this mask, the bra and panties are created on a white background, and at the very end an upscale is added to these photos on the white background. I think it wouldn't be a problem for you, and you would really help me a lot.

  • @phridays
    @phridays 2 months ago

    You lost me super big time in the first minute. I downloaded Comfy portable on Windows, I installed Manager, I see 0.2.3, and it opens in a web browser. I don't have any Workflows top menu, no new buttons on the side, no way to hide the spaghetti. What did I miss?

    • @NerdyRodent
      @NerdyRodent 2 months ago +1

      If you haven’t turned the beta interface on yet, you can do so in settings - th-cam.com/video/g8W3xe5kRBQ/w-d-xo.html

  • @nemonomen3340
    @nemonomen3340 2 months ago

    The use of Llama 3.2 is interesting, but I don't think I'd ever use it. It doesn't really seem to do anything that actually improves the image quality or Flux's comprehension.

  • @LouisGedo
    @LouisGedo 2 months ago

    Hi 👋

  • @pink_fluffy_sky
    @pink_fluffy_sky 2 months ago

    I just updated ComfyUI and nothing changed. Do you have to turn on this new one-screen workflow thing somehow?

    • @NerdyRodent
      @NerdyRodent 2 months ago

      If you haven’t turned the beta interface on yet, you can do so in settings - th-cam.com/video/g8W3xe5kRBQ/w-d-xo.html

  • @Wattsepherson
    @Wattsepherson 2 months ago

    So it's not complicated now because... umm... because they put a screen in front of the complicated stuff so you don't see it, but it's still there, and if you want everything to work properly you still need to visit the complicated stuff, otherwise the umm... the umm... front screen that is less complicated won't work properly.....
    So that's like having a car with no shell... and everyone's saying it's really complicated to fix and maintain, but then someone built a shell for it to hide the engine and electronics, and now everybody knows how to fix and tweak it because it's hidden....
    Excuse me for a moment whilst I go and stroke my beard and try to work out what this means.

  • @MilesBellas
    @MilesBellas 2 months ago +1

    ComfyUI quantized Pyramid Flow with a Flux iterative upscaler next? 😊

    • @MilesBellas
      @MilesBellas 2 months ago

      via PI
      To upscale and add details to the output of Pyramid Flow in ComfyUI with Flux, you can use the "Iterative Upscale" workflow. Here's a step-by-step guide on how to do this:
      1) Open ComfyUI and select the "Iterative Upscale" workflow.
      2) Set the "Base Image" to the output of Pyramid Flow that you want to upscale.
      3) Choose a suitable upscaler model, such as LDSR or Lanczos, and set the "Upscale Factor" to the desired value.
      4) In the "Add Detail" section, select a model such as "DVV D8" or "Enhance1024" to add additional details to the upscaled image.
      5) Adjust the "Prompt Weight" and "Prompt Text" to fine-tune the added details.
      6) Click on "Run Workflow" to generate the upscaled and detailed image.
      7) You can also add a "Loop" step to iteratively upscale and add details multiple times for even higher resolution and detail.

  • @RedDragonGecko
    @RedDragonGecko 2 months ago +3

    This channel used to be good. Now it just promotes workflows locked behind a paywall.

    • @sven1858
      @sven1858 2 months ago +3

      @@RedDragonGecko That's not strictly true with this one. Yes, he has a Patreon; not disputing that, and I'm not a paying member. However, this workflow is freely available. I suggest watching and listening to the video.

    • @NerdyRodent
      @NerdyRodent 2 months ago +2

      Nope. Huggingface has no paywall! Supporters packs are available for those who want to support, so the choice is yours! 😃

  • @a.akacic
    @a.akacic 2 months ago +1

    🤦‍♂

  • @mossom
    @mossom 2 months ago

    I didn't get the 'reset view' option without adding "--front-end-version Comfy-Org/ComfyUI_frontend@latest" to the end of the launch line in run_nvidia_gpu.bat, for anyone missing it.
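
For reference, that launch amounts to roughly the following (a sketch assuming the Windows portable build's default layout; the `python_embeded` path and `--windows-standalone-build` flag come from the stock run_nvidia_gpu.bat and may differ on your install):

```python
# Launch ComfyUI with the frontend override, mirroring the extra argument
# appended to run_nvidia_gpu.bat in the Windows portable build.
import subprocess

subprocess.run([
    r".\python_embeded\python.exe", "-s", r"ComfyUI\main.py",
    "--windows-standalone-build",
    "--front-end-version", "Comfy-Org/ComfyUI_frontend@latest",
], check=True)
```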

    • @NerdyRodent
      @NerdyRodent 2 months ago

      Interesting, as it simply showed up for me when I started as usual! I take it you already had the new workflow and menu beta turned on?

    • @mossom
      @mossom a month ago

      @@NerdyRodent Yes, I was confused as to why it wasn't there. Not sure if it was just my install.