Great workflow, thanks for sharing.
This is the video I've been waiting for! I downloaded 3.2 like a week ago and sat on it. FINALLY, THE TIME HAS COME!
Four weeks later: can we get a quick count of PNGs with 'emma watson' in the name? I'm doing a comparison as a sanity check.
Waaaiiiit a second, you're telling me that comfyUI is now actually comfortable to use?...impressive.
ComfyUI is great; it forces people to learn. If you've had the pleasure of trying to run Flux on ComfyUI with an AMD GPU, you'll know there was plenty of support on GitHub and Hugging Face, with users helping each other. We should give thanks for the coder friends we made along the way. Python is awesome. With this shift in the number of users in the AI game, we're forced to step our game up.
I installed Ollama from the website with the .exe installer; I'm not really sure if this is running locally or not 🙄
Yup, as it’s installed & running on your pc it’s running locally! 👍🏼
Hello! Been following you from the start, but this is straight up amazing.
I quite like that complicated, messy version of ComfyUI... it makes me look clever, knowing how to use it, if anyone sees me working on some images. :) I'll certainly give this a try once I fix my computer.
Yeah, I like the original ComfyUI as well...yet to sail into these uncharted waters...lol~
this was really well done
It almost looks like Auto1111. Well done!
I found the video "Ollama does Windows?!?" by Matt Williams, which helped get Ollama working, and I was able to use the workflow. I learned a lot getting it going.
Thanks! I thought that rat was some Gordon Freeman wannabe :D
Oh, Nerdy Rodent, 🐭🎵
he really makes my day, ☀😊
showing us AI, 💻🤖
in a really British way. ☕🎶
😁
I can't get the API LLM general link to work, while the basic workflow starting with Ollama from LLM-party is working. But there's so little explanation of how it works; it's a pity.
I had an error the first time I loaded the Rodent workflow, but everything fell into place after installing the missing nodes.
If you want to change from the LLM party node API loader, you can check out the GitHub page for more information on whichever options you’d like to use instead. It does indeed support a lot 😊
Phenomenal
I had some problems getting it to work; I did an update and refresh, but no go. In the end I gave ChatGPT the output and asked it how to fix the errors. Now I've got it going, so maybe give that a try if you're having problems.
Hmm, with my meager 4070 Ti's 12 GB VRAM, wouldn't it be better to run a GGUF Llama in RAM so the image gen doesn't compete with Llama? Or does it load into RAM every time you queue? I'm guessing a GGUF might not be out yet for this model, though.
With 12 GB you'd want a GGUF that is less than 9 GB. With a 1 GB Llama 3.2, that should probably fit in!
@@malditonuke Yup, the small size of llama makes it great!
Can you create a new docker image for this on runpod please
I would imagine so. Go for it!
24 GB VRAM recommended: what's the equivalent in Apple silicon? Would an M3 with 16 GB RAM suffice for this exact model?
I have no idea about Mac stuff, but your best bet for anything AI is Linux + Nvidia!
Great work on a nice workflow. But, like many others have mentioned, it will not even open. Clean install of ComfyUI, installed the packages mentioned on your page, but unfortunately nothing happens. Any chance of a check-up?
Start by clicking "Update All" in Manager and restarting to ensure you've got the latest version! It should be Oct 12 as a minimum.
Open source is becoming amazing!
NR works in R&D at "The Mouse"?
Nerdy's famous! Wow!
😉👋
Can it be used with Text-gen WebUI? Ollama is awful, lol: there's no way to use it across a network, it won't load your already-downloaded LLMs without converting and duplicating them, and it's a pain to set it up for a new folder.
I love your videos and find them informative, though it does seem you're trying to turn ComfyUI into Auto1111, lol. Complexity is not as much an enemy as over-simplifying tools can be... though perhaps that's just a personal standpoint.
Is there a setting to see the sampling progress as it's happening, so that you can cancel it if it's not what you want? I'm not sure if it's the SamplerCustomAdvanced node that doesn't show you the progress.
Very useful!
Can I switch out Llama 3.2 and use another variant of the 3.2 models?
Yup. Press 2 to go to the LLM settings and change there like in the video!
@@NerdyRodent What exactly should I change? Which node?
Thanks for using the quotes, and I feel like all things DIY AI should be in quotes, because you're gonna git pipped as a Windows newb (I learned most of it here, though).
I wish I had the hardware
Hi, you've got a subscriber here; congratulations on the amazing work. I have an issue: after upscaling, it doesn't look perfect; the edges are a bit blurry. Any idea how to solve it? I enabled hi-res... but it didn't fix the issue.
Hires mode will do the upscale for you, yes
Hey all, so I am using Pinokio for my ComfyUI stuff, and everything works fine except Ollama. I installed it, but it's in my user files; I installed it in Pinokio too, but I still get this: Error code: 404 - {'error': {'message': 'model "llama3.2" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}. I know what to do; I just don't know the parent folder it needs to be in... anyone know?
Did you try pulling it first like in the video?
@@NerdyRodent Yeah, and it works now; only Llama is in a weird, unrelated folder. Thanks!
@@NerdyRodent I have Git installed, but I don't know where you ran the pull command. What folder? I tried in my ComfyUI install folder and with the Git CMD window.
@@rifz42 git? No, it’s “ollama pull” like in the video 😉
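For anyone stuck at this step: the pull command belongs to the `ollama` CLI, not Git, and it can be run from any folder in a regular terminal, since Ollama keeps models in its own store rather than in your ComfyUI directory. A minimal sketch, assuming Ollama is installed and its background server is running:

```shell
# Pull the Llama 3.2 model into Ollama's own model store.
# No Git repository is involved; run this from any directory.
ollama pull llama3.2

# Confirm the model is now visible to the API:
ollama list
```

After the pull completes, the 404 "model not found" error from the API should go away without moving any folders around.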
So it works well, but it is loading this huge dev model every time... slowly, on a 3090. Is there some hidden setting to keep it loaded?
the GGUF is faster, but not as fun.
Weirdly, when I attempt to load these workflows in ComfyUI, literally nothing happens. No errors, no load, just leaves whatever was already open there. Not even a message in the Terminal. I thought it might be my ComfyUI having too many old and/or conflicting nodes, so I did a clean reinstall of the portable version with only the Manager, but that didn't change anything. One of my workflows (in .png format) does load, but the .jsons I downloaded from the repository do absolutely nothing. I re-downloaded them, in case they broke somehow, but that didn't change anything either, and I opened them up to make sure they have contents, and they do (quite a lot). Dunno what's going on there, but it's too bad, because this workflow looks fun.
I'm having the same issue.
Error occurred when executing ImpactControlBridge:
No module named 'comfy_execution'
My guess would be that an old version of that custom node is installed. Click "Update All" in Manager to ensure you're up to date!
I watched this video with interest and wanted to try this amazing LLM ability. Unfortunately, I have not been able to get your workflow working, the one I found on Hugging Face. Many nodes in it have no connections, and I had to guess where to attach them. I managed to connect some nodes, but some wouldn't cooperate; for example, the value slot of the Control Bridge node remained lit red. I had to set this whole group of nodes to bypass, because I don't even understand what they are for.
After that, the workflow started working, but the generated image did not match the prompt at all, as if the sampler did not see the prompt, although the LLM group works properly and generates the desired text. I don't understand how to make the workflow work correctly. It would be very kind of you to fix the LLM workflow on Hugging Face.
If you change the way everything is connected then it will definitely work in unexpected ways! The best thing is to simply enter your prompt, and then press queue 😎
@@NerdyRodent It doesn't work because many nodes have lost their connections, and the queue just stops and does not do anything. So can you please check the workflow and fix it?
@@13-february To fix your environment try updating ComfyUI, or go with a fresh install!
@@13-february You have to work through the nodes one by one. I did, but it took a long time. Do you have the GGUF, Hyper-Flux, a VAE, and CLIPs 1 and 2 (ViT-L-14 text)?
@@rifz42 Yes, of course; I am well versed in ComfyUI, and I'm pretty sure the workflow is the problem. I downloaded the one called Flux-Simple-LLM_v0 from Hugging Face and installed all the missing nodes, and Ollama of course. But the problem is that some nodes' slots are not connected to anything. For example, the value slot of the Control Bridge node has no connections, and an error occurs during generation. In addition, some image and model connections were also missing in this workflow. Perhaps the author provides a correct workflow on Patreon, but the one that I found on Hugging Face is completely broken.
Nvidia Sana test next?
😊
Finally, spaghetti monster begone!
Heresy!
This is sooo nerdy and sooooo weird. The workflow you show is nowhere to be found in my ComfyUI install when I browse the four templates that are offered. What miracle do you perform to load this new layout into the program?
You should show the link connections of the nodes so we can understand the workflow better; it doesn't make sense to show the nodes without the connections.
No. The point of the workflow is that you don’t see or care about any of the spaghetti 😃
@@NerdyRodent Yes, I got that, but it would be better if you showed the links to explain the logic. I know it's a very simple workflow, but for people learning Comfy it would be helpful, even if at the end you hide all the links. Please take it as constructive feedback, just a thought for your future videos ;)
Would you be able to create a workflow in ComfyUI in which, at the very beginning, you add a photo of a character (let's say a photo in just underwear), and then create a so-called bra-and-panties mask? From this mask, the bra and panties are created on a white background. And at the very end, add an upscale to these photos on a white background. I think it wouldn't be a problem for you, and you would really help me a lot.
You lost me super big time in the first minute. I downloaded ComfyUI portable on Windows, I installed Manager, I see 0.2.3, and it opens in a web browser. I don't have any Workflows top menu, no new buttons on the side, and no way to hide the spaghetti. What did I miss?
If you haven’t turned the beta interface on yet, you can do so in settings - th-cam.com/video/g8W3xe5kRBQ/w-d-xo.html
The use of Llama 3.2 is interesting, but I don't think I'd ever use it. It doesn't really seem to do anything that actually improves the image quality or Flux's comprehension.
Hi 👋
I just updated ComfyUI and nothing changed. Do you have to turn on this new one-screen workflow thing somehow?
If you haven’t turned the beta interface on yet, you can do so in settings - th-cam.com/video/g8W3xe5kRBQ/w-d-xo.html
So it's not complicated now because... umm.. because they put a screen in front of the complicated stuff so you don't see it but it's still there and if you want everything to work properly you still need to visit the complicated stuff otherwise the umm... the umm the front screen that is less complicated, won't work properly.....
So that's like having a car with no shell... and everyone's saying it's really complicated to fix and maintain but then someone built a shell for it, to hide the engine and electronics and now everybody knows how to fix and tweak it because it's hidden....
Excuse me for a moment whilst I go and stroke my beard and try to work out what this means.
Comfyui quantized Pyramid Flow with Flux iterative upscaler next ?😊
via PI
To upscale and add details to the output of Pyramid Flow in ComfyUI with Flux, you can use the "Iterative Upscale" workflow. Here's a step-by-step guide on how to do this:
1) Open ComfyUI and select the "Iterative Upscale" workflow.
2) Set the "Base Image" to the output of Pyramid Flow that you want to upscale.
3) Choose a suitable upscaler model, such as LDSR or Lanczos, and set the "Upscale Factor" to the desired value.
4) In the "Add Detail" section, select a model such as "DVV D8" or "Enhance1024" to add additional details to the upscaled image.
5) Adjust the "Prompt Weight" and "Prompt Text" to fine-tune the added details.
6) Click on "Run Workflow" to generate the upscaled and detailed image.
7) You can also add a "Loop" step to iteratively upscale and add details multiple times for even higher resolution and detail.
This channel used to be good. Now it just promotes workflows locked behind a paywall.
@@RedDragonGecko That's not strictly true with this one. Yes, he has a Patreon; I'm not disputing that, and I'm not a paying member. However, this workflow is freely available; I suggest watching and listening to the video.
Nope. Huggingface has no paywall! Supporters packs are available for those who want to support, so the choice is yours! 😃
🤦♂
I didn't get the 'reset view' option without adding "--front-end-version Comfy-Org/ComfyUI_frontend@latest" to the end of the launch command in run_nvidia_gpu.bat, for anyone missing it.
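For anyone editing that file: in the standard portable build, run_nvidia_gpu.bat is essentially a one-line launcher, so the flag goes on the end of that line. A sketch of the edited file, assuming the default portable layout (adjust the paths if your install differs):

```shell
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --front-end-version Comfy-Org/ComfyUI_frontend@latest
pause
```

The `pause` keeps the console window open so you can read any startup errors before it closes.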
Interesting, as it simply showed up for me when I started as usual! I take it you already had the new workflow and menu beta on?
@@NerdyRodent Yes, I was confused as to why it wasn't there. Not sure if it was just my install.