DreamingAI
Coding an Anime Chatbot: A Journey with LLaMA 3 [TTS + Anime Model]
Hi! In this video I will explain how to finish the interface of the anime-inspired chatbot, built with Meta AI's LLaMA 3, that we started creating in the previous video!
*** Links from the Video Tutorial ***
Previous video: th-cam.com/video/z8khiyUxxPU/w-d-xo.html
CoquiAI repository: github.com/coqui-ai/TTS
Pixi Live2d Display repository: github.com/RaSan147/pixi-live2d-display
Download models from: sekai.best/l2d
Full code (for Patreon) : www.patreon.com/posts/104531671
Full code (for Everyone): COMING SOON
requirements.txt + Clear Text function: pastebin.com/ZwrKhJCi
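The "Clear Text" function linked above strips chat markup from the model's reply before the text reaches the TTS engine, so the voice doesn't read out roleplay actions or emoji. A minimal sketch of that idea (a hypothetical implementation; the actual pastebin version may differ):

```python
import re

def clean_text(text: str) -> str:
    """Strip non-speech markup so the TTS engine reads only spoken words.
    (Hypothetical sketch; the "Clear Text" function from the pastebin may differ.)"""
    text = re.sub(r"\*[^*]*\*", "", text)                # drop *roleplay actions*
    text = re.sub(r"[\U0001F300-\U0001FAFF]", "", text)  # drop common emoji
    text = re.sub(r"\s+", " ", text)                     # collapse leftover whitespace
    return text.strip()
```

The cleaned string is what gets handed to Coqui TTS (from the repository linked above) instead of the raw chat reply.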
❤️❤️❤️Support Links❤️❤️❤️
Patreon: www.patreon.com/DreamingAIChannel
Buy Me a Coffee ☕: ko-fi.com/C0C0AJECJ
The music at the end of the video was created with udio.com!
Views: 385

Videos

Coding an Anime Chatbot: A Journey with LLaMA 3 [Ollama + Chat Interface]
979 views • 1 month ago
Hi! Today, I'm diving into the fascinating world of LLMs to show you how to create an anime-inspired chatbot using Meta AI's LLaMA 3. If you're into Python and Flask, this tutorial is perfect for you! Links from the Video Tutorial CodePen: codepen.io/oieusouamiguel/pen/vbRrLm Ollama: ollama.com/ Commands: pip install flask pip install openai Scrollbar snippet: /*scroll bar*/ .message-list::-web...
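Since the tutorial installs Flask and the openai package and runs the model through Ollama, the chat backend boils down to forwarding the conversation to Ollama's OpenAI-compatible endpoint. A stdlib-only sketch of that round trip (the model name `llama3` and the default port 11434 are assumptions; adjust to your setup):

```python
import json
import urllib.request

# Ollama's OpenAI-compatible chat endpoint (default local address; an assumption)
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_payload(history, user_message, model="llama3"):
    """Assemble an OpenAI-style chat payload from prior turns plus the new user message."""
    messages = list(history) + [{"role": "user", "content": user_message}]
    return {"model": model, "messages": messages}

def send_chat(payload):
    """POST the payload to the local Ollama server and return the assistant's reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Usage (with Ollama running locally):
#   reply = send_chat(build_chat_payload([], "Hi there!"))
```

A Flask route would then just call `send_chat` with the message history kept per session.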
Mastering ComfyUI: How to Bring Sketches to Life with Controlled Style (IPAdapter + ControlNet)
2.7K views • 2 months ago
In this video I will explain how to turn our sketches into masterpieces while retaining control over the style of the generated image! I hope you enjoy it! ❤️ Links from the Video Tutorial ComfyUI GitHub: github.com/comfyanonymous/ComfyUI IPAdapter website: ip-adapter.github.io/ IPAdapter Github: github.com/tencent-ailab/IP-Adapter/ IPAdapter ComfyUI Nodes: github.com/cubiq/ComfyUI_IPAdapter_pl...
Unlock LoRA Mastery: Easy LoRA Model Creation with ComfyUI - Step-by-Step Tutorial!
11K views • 3 months ago
Have you ever wanted to create your own customized LoRA model that perfectly fits your needs without having to compromise with predefined ones? In this easy-to-follow tutorial, I'll guide you through the process of creating your LoRA model using ComfyUI. No more limitations imposed by standard models, just the freedom to create exactly what you desire. Join us and discover how to bring your vis...
EASY Outpainting in ComfyUI: 3 Simple Ways with Auto-Tagging (Joytag) | Creative Workflow Tutorial
3.2K views • 4 months ago
In this video I will illustrate three ways of outpainting in ComfyUI. I've been wanting to do this for a while; I hope you enjoy it! Links from the Video Tutorial ComfyUI Inpaint Nodes: github.com/Acly/comfyui-inpaint-nodes Comfy Fit Size: github.com/bronkula/comfyui-fitsize ComfyUI-N-Suite: github.com/Nuked88/ComfyUI-N-Nodes JoyTag: github.com/fpgaminer/joytag Moondream: github.com/vikhyat/moo...
Mastering ComfyUI: Getting started with API - TUTORIAL
9K views • 5 months ago
Hello! As I promised, here's a tutorial on the very basics of ComfyUI API usage. Today, I will explain how to convert standard workflows into API-compatible formats and then use them in a Python script. Additionally, I will explain how to upload images or videos via the API for img2img or vid2vid usage. Links from the Video Tutorial Command to install websocket on python : pip install websocket...
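As a reference for the conversion step described above, queuing an API-format workflow comes down to a single POST to ComfyUI's `/prompt` endpoint. A stdlib-only sketch (the filename `workflow_api.json` and the default address 127.0.0.1:8188 are assumptions):

```python
import json
import urllib.request
import uuid

COMFY_URL = "http://127.0.0.1:8188"  # ComfyUI's default local address (assumption)

def build_prompt_request(workflow: dict, client_id: str) -> bytes:
    """Wrap an API-format workflow in the JSON body that /prompt expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict, client_id: str) -> dict:
    """POST the workflow to ComfyUI and return the queue response (prompt id etc.)."""
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_prompt_request(workflow, client_id),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (with ComfyUI running and a workflow exported via "Save (API Format)"):
#   with open("workflow_api.json", encoding="utf-8") as f:
#       info = queue_prompt(json.load(f), str(uuid.uuid4()))
```

The `client_id` is the same identifier you would pass when opening the websocket connection to track execution progress.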
Automatic Lip Sync for Anime: Wav2Lip Tutorial - Xmas Video [3/3]
2.9K views • 5 months ago
Hey! In this video, I'll walk you through using Wav2Lip for automatic lipsync on anime characters. We'll tackle some tweaks I made to get it working smoothly. Links from the Video Tutorial Wav2Lip: github.com/Nuked88/Wav2Lip Python Download : www.python.org/downloads/windows/ FFMPEG: winget install "FFmpeg (Essentials Build)" UltimateSDUpscale: github.com/ssitu/ComfyUI_UltimateSDUpscale 4x-Ultr...
Mastering AI Animation: Text to Video with AnimateDiff + Prompt Schedule! - Xmas Video [2/3]
1.7K views • 6 months ago
In the second part of this series, we will create a long animation with animatediff, and I will explain the basics of the Batch Scheduled Prompt custom node in ComfyUI! Links from the Video Tutorial AnimateDiff Evolved Tutorial: th-cam.com/video/y7WRG_ycS6M/w-d-xo.html ComfyUI_FizzNodes: github.com/FizzleDorf/ComfyUI_FizzNodes NumExpr 2.0 User Guide: numexpr.readthedocs.io/en/latest/user_guide....
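For orientation, the Batch Scheduled Prompt node from ComfyUI_FizzNodes takes keyframed prompts mapped to frame numbers, roughly in this shape (the frame numbers and prompt text here are purely illustrative):

```
"0"  : "a snowy village street at night, warm window lights",
"24" : "the same street at dawn, soft blue light",
"48" : "the street in bright morning sun, melting snow"
```

The node blends the conditioning between neighboring keyframes over the batch, and numeric expressions inside prompts are evaluated with NumExpr (see the user guide linked above).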
Voices of the Future: How to Clone Any Voice with RVC and Make It Sing! - Xmas Video [1/3]
1.2K views • 6 months ago
Step into a future where AI takes center stage in music creation. Explore the making of an AI-generated Silent Night using Synthesizer V and RVC. Links from the Video Tutorial Synthesizer V: dreamtonics.com/synthesizerv/ Retrieval-based-Voice-Conversion-WebUI: github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/en/README.en.md RoyaltyFree Blue Music: www.freemusicpublic...
MERRY XMAS - AI EDITION! #animatediff #ai #anime
502 views • 6 months ago
Merry Xmas everyone! This video is the result of a month of work! I will summarize it in three tutorials coming over the next week. The first one will be available tomorrow!
Mastering AI Animation: Use Auto-Mask, ControlNet and AnimateDiff Evolved! - Video To Video
7K views • 6 months ago
Hello Dreamers! In this video, we explore the limitless possibilities of AnimateDiff animation mastery. It's not just about editing - it's about breaking boundaries, pushing creative limits, and turning ordinary videos into extraordinary visual journeys. Let's embark on this creative revolution together! GPU USED: NVIDIA RTX 3080 10GB VRAM Links from the Video Tutorial ComfyUI-N-Suite: github.c...
Revolutionizing AI Image Generation: Boosting Speed with LCM Models!
3.9K views • 7 months ago
Let's check out the cool stuff happening in the AI world with image creation! We're diving into this technique called Latent Consistency Models (LCM) that helps you make images three times faster without messing with the reliable diffusion models! Links from the Video Tutorial LCM Website: latent-consistency-models.github.io/ LCM Github: github.com/luosiallen/latent-consistency-model LCM LORA S...
Mastering ComfyUI: How to use ReActor for Face Swap - TUTORIAL
38K views • 8 months ago
In this video, I'll walk you through the process of creating flawless faceswaps using the ReActor node. We'll cover installation, model selection, and how to achieve professional-quality results. So, grab your favorite images and let's dive into the future of AI-powered faceswaps together! Don't forget to like, subscribe, and leave your questions in the comments! Links from the Video Tutorial -...
Mastering ComfyUI: Creating Stunning Human Poses with ControlNet! - TUTORIAL
28K views • 8 months ago
Join me in this tutorial as we dive deep into ControlNet, an AI model that revolutionizes the way we create human poses and compositions from reference images. Learn about the different ControlNet models, their applications, and how to use them effectively. Whether you're an artist, designer, or just curious about AI, this video will show you how to harness the power of ControlNet for stunning ...
Mastering ComfyUI: Getting Started with Video to Video!
56K views • 9 months ago
In this video I will dive into the captivating world of video transformation using ComfyUI's new custom nodes. Discover the secrets to creating stunning videos that push the boundaries of creativity and imagination. I hope to inspire you to unleash your video editing potential! Links from the Video Tutorial - ComfyUI-N-Suite: github.com/Nuked88/ComfyUI-N-Nodes - ComfyUI's ControlNet Auxilia...

Comments

  • @sinuva • 16 hours ago

    I couldn't make this IPAdapter model work :( for some reason it doesn't find, for example, the plus high strength one.

  • @mr.entezaee • 2 days ago

    ModuleNotFoundError: No module named 'jmespath' Train finished

  • @giusparsifal • 3 days ago

    Hello and thanks! A question: if I interrupt the process, is there a backup, or do I have to begin from the start? Thank you

  • @leol.4541 • 3 days ago

    I have installed ComfyUI's Auxiliary Preprocessors, but I can't find any CannyEdge; I just have the regular Canny. Can someone help? Also, just using Canny, I have a problem when rendering, apparently with the GPU, but everything seems right on my computer. And when I remove the Canny node, everything seems right until the rendering reaches the KSampler Advanced node, where the same problem appears. Can anyone help, please?

  • @PaulRoneClarke • 3 days ago

    Unfortunately these custom scripts bricked my ComfyUI installation: "Assertion Error - Torch not compiled with CUDA enabled". I had to remove your scripts and run a Python and ComfyUI update to get it back.

  • @XastherReeD • 3 days ago

    Ok, so this is the third question I had answered in just as many videos. Short videos, no tangents. Then you also shared PoseMyArt, which looks absolutely perfect for use with OpenPose. Yeah, I'm subscribed.

  • @iccang • 5 days ago

    Hi... I got an error when the process saves the video. The message:
    Frames have been successfully reassembled into /Users/iccangninol/ComfyUI/temp/video.mp4
    !!! Exception during processing!!!
    MoviePy error: the file /Users/iccangninol/ComfyUI/temp/video.mp4 could not be found! Please check that you entered the correct path.
    Traceback (most recent call last):
      File "/Users/iccangninol/ComfyUI/execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "/Users/iccangninol/ComfyUI/execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "/Users/iccangninol/ComfyUI/execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "/Users/iccangninol/ComfyUI/custom_nodes/ComfyUI-N-Nodes/py/video_node_advanced.py", line 616, in save_video
        video_clip = VideoFileClip(videos_output_temp_dir)
      File "/Users/iccangninol/miniconda3/envs/Comfy2/lib/python3.10/site-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__
        self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
      File "/Users/iccangninol/miniconda3/envs/Comfy2/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__
        infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
      File "/Users/iccangninol/miniconda3/envs/Comfy2/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py", line 270, in ffmpeg_parse_infos
        raise IOError(("MoviePy error: the file %s could not be found! "
    OSError: MoviePy error: the file /Users/iccangninol/ComfyUI/temp/video.mp4 could not be found! Please check that you entered the correct path.
    Can you tell me where I went wrong?

  • @KEV.IN_ • 5 days ago

    Hi, could you make a video about generating different images with different poses, but with the same anime character, by uploading that character?

  • @TBou_nyncuk • 7 days ago

    Great vod mate!

  • @johnsummerlin7630 • 9 days ago

    11:45 clarification requested: "denoise" is not a right-click option on the depicted node. What needs to be loaded for this option to show up, as the source of this denoise-control is not clear. There are multiple "custom scripts" items in the manager menu, with different authors and different conflict warnings too.

  • @marcoantonionunezcosinga7828 • 10 days ago

    Greetings, I am a newbie to this type of program. I have some problems installing ComfyUI-N-Nodes; it is missing nodes, among them the one named SaveVideo. I will continue watching your videos; maybe there is one that will help me. Thank you

  • @Digital_Paradise • 12 days ago

    Any idea how to add padding to the output face? I want to swap the eye and nose area and leave the mouth area alone. I'm trying to figure out how to do that without masking, because I process images in batches. For example, I have an image of a man eating a big piece of bread in front of his mouth, but ReActor always changes that bread into a mouth shape.

  • @eveekiviblog7361 • 13 days ago

    Please show how we can link ComfyUI to a Telegram bot.

  • @MolediesOflife • 13 days ago

    crazy ai

  • @timjones9316 • 14 days ago

    Thanks! I had been struggling with the original version (did not get it to work). Your nodes really worked great, and simply, on the first attempt. The long explanation of the LoRA-training node is also appreciated. (Note: building the LoRA with 45 images did take some time, > 3.5 hrs, using a 4070 Ti.)

  • @abrahamgeorgec • 14 days ago

    Were you able to download the video to a user-defined folder using the API? Which node should be used for that?

  • @abrahamgeorgec • 14 days ago

    Nice explanation. Were you able to download a video (from a video workflow) to a user-specific folder?

  • @soundmob329 • 14 days ago

    Of all the tutorials, this was the only one that worked. Everyone else skipped over the most important part, which was installing IN the python_embeded folder. They didn't even specify to download it to THAT specific path. That's literally all it took.

  • @dongyanghan4030 • 15 days ago

    Traceback (most recent call last):
      File "D:\BaiduSyncdisk\Proceduralization\default\4comfyui\batch_test0621.py", line 18, in <module>
        prompt_workflow = json.load(open('D:\\BaiduSyncdisk\\Proceduralization\\default\\4comfyui\\workflow_api.json'))
      File "json\__init__.py", line 293, in load
    UnicodeDecodeError: 'gbk' codec can't decode byte 0xa8 in position 479: illegal multibyte sequence

  • @nlmnx5763 • 16 days ago

    thanks babe

  • @tcgerbilheroes4386 • 19 days ago

    The training finishes in 4 seconds and nothing is added to the folder I created for the model. The log:
    D:\AI\Comfyui\ComfyUI\custom_nodes\Lora-Training-in-Comfy/sd-scripts/train_network.py
    The following values were not passed to `accelerate launch` and had defaults used instead:
      `--num_processes` was set to a value of `1`
      `--num_machines` was set to a value of `1`
      `--mixed_precision` was set to a value of `'no'`
      `--dynamo_backend` was set to a value of `'no'`
    To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
    C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\python.exe: can't open file 'D:\\AI\\Comfyui\\custom_nodes\\Lora-Training-in-Comfy\\sd-scripts\\train_network.py': [Errno 2] No such file or directory
    Traceback (most recent call last):
      File "C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
        exec(code, run_globals)
      File "C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 1027, in <module>
        main()
      File "C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 1023, in main
        launch_command(args)
      File "C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 1017, in launch_command
        simple_launcher(args)
      File "C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 637, in simple_launcher
        raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
    subprocess.CalledProcessError: Command '['C:\\Users\\HelpTech\\AppData\\Local\\Programs\\Python\\Python310\\python.exe', 'custom_nodes/Lora-Training-in-Comfy/sd-scripts/train_network.py', '--enable_bucket', '--pretrained_model_name_or_path=D:\\AI\\Comfyui\\ComfyUI\\models\\checkpoints\\bismuthmix_v30.safetensors', '--train_data_dir=D:/AI/Art/milffy', '--output_dir=D:\\AI\\Art\\milffy model', '--logging_dir=./logs', '--log_prefix=Milffy', '--resolution=512,512', '--network_module=networks.lora', '--max_train_epochs=5000', '--learning_rate=1e-4', '--unet_lr=1.e-4', '--text_encoder_lr=1.e-4', '--lr_scheduler=cosine_with_restarts', '--lr_warmup_steps=0', '--lr_scheduler_num_cycles=1', '--network_dim=32', '--network_alpha=32', '--output_name=Milffy', '--train_batch_size=1', '--save_every_n_epochs=100', '--mixed_precision=fp16', '--save_precision=fp16', '--seed=26', '--cache_latents', '--prior_loss_weight=1', '--max_token_length=225', '--caption_extension=.txt', '--save_model_as=safetensors', '--min_bucket_reso=256', '--max_bucket_reso=1584', '--keep_tokens=0', '--xformers', '--shuffle_caption', '--clip_skip=2', '--optimizer_type=AdamW8bit', '--persistent_data_loader_workers', '--log_with=tensorboard']' returned non-zero exit status 2.
    Train finished

  • @Bitcoin_Baron • 19 days ago

    Can you update and just provide a downloadable archive we can extract into the Comfy folder? I can't understand the GitHub instructions; they don't make sense for the average user.

  • @anthonydelange4128 • 21 days ago

    Getting "Value not in list: video: 'Flying' not in []" — an issue with LoadVideo.

  • @Roguefromearth • 22 days ago

    Hey, I am getting error - ERROR: No matching distribution found for websockets-client

  • @curiouspers • 22 days ago

    Hahaha, that's a perfect ending! <3

  • @Fatmir-lt6cq • 24 days ago

    Thank you! How do I swap the complete face, including the hair?

  • @HiramLNoise • 25 days ago

    That's like Photoshop with extra steps.

  • @roiyg19 • 25 days ago

    Thank you for this. How can i contact you regarding an interesting project?

  • @sdafsdf9628 • 25 days ago

    Thank you very much for the exciting experiments. I tested with an AI image in which a narrow, idyllic alley in an Italian village is created: there are cobblestones, windows, doors and flowers. Unfortunately, all this creativity is lost in the outpaint. Fooocus handles it a little better, but the images are too dark there. The hard test is to enlarge an image, then reduce it to the original size in a graphics program, and then enlarge it again using outpainting. Repeating this 5 times (optically we go backwards) shows all the weaknesses. How can we use the creativity that is in the AI in the outpaint? Even with the original prompt there is no improvement. It is also not possible to infer the enlargement from the original alone; the user has to say (in the text prompt) how the world should change, even if only slightly. If light comes in from the right, then the lamp must come in at some point. If there is a shadow, there must be a person standing there at some point...

  • @killbadmashia9225 • 25 days ago

    Where can we go to learn which components are used in a certain workflow to accomplish a task, or what the workflow of nodes would be to accomplish a certain task?

  • @user-pt6mq9ff2s • 25 days ago

    Just remove the piano track, very distracting.

  • @Eugeniocaraujo • 26 days ago

    Unable to run it:
    Traceback (most recent call last):
      File "...\ComfyUI-master\custom_nodes\ComfyUI-N-Nodes-main\__init__.py", line 64, in <module>
        spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "...\ComfyUI-master\custom_nodes\ComfyUI-N-Nodes-main\py\frame_interpolator_node.py", line 18, in <module>
        from model.pytorch_msssim import ssim_matlab
    ModuleNotFoundError: No module named 'model'
    Can't load the frame interpolator in ComfyUI because of this...

  • @youjohnnyd7773 • 26 days ago

    I receive error messages when creating image captions.
    Error occurred when executing GPT Sampler [n-suite]: list index out of range
      File "E:\AI\AITools\ComfyUI\execution.py", line 141, in recursive_execute
        input_data_all = get_input_data(inputs, class_def, unique_id, outputs, prompt, extra_data)
      File "E:\AI\AITools\ComfyUI\execution.py", line 26, in get_input_data
        obj = outputs[input_unique_id][output_index]
    Please help me fix this, thanks.

  • @eliassuzumura • 26 days ago

    For the first time I was able to understand a full ComfyUI tutorial. Thank you.

  • @88.AmpLyte • 28 days ago

    Wow, thank you, Brother. From the time you took to create simplistic custom versions, to your explanations and the way you broke down individual variables, I was able to gain a real understanding of these components, as well as keep up with the new insights as the video progressed. 🧠💪👏

  • @theteknologist9574 • 1 month ago

    Awesome video. No filler, just the goods.

  • @DarksNote • 1 month ago

    The program is still too complicated for the majority.

  • @Potts2k8 • 1 month ago

    "ERROR: insightface-0.7.3-cp311-cp311-win_amd64.whl is not a supported wheel on this platform."

  • @Potts2k8 • 1 month ago

    Error occurred when executing ControlNetLoader: Error while deserializing header: MetadataIncompleteBuffer
      File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\ComfyUI\nodes.py", line 705, in load_controlnet
        controlnet = comfy.controlnet.load_controlnet(controlnet_path)
      File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\ComfyUI\comfy\controlnet.py", line 326, in load_controlnet
        controlnet_data = comfy.utils.load_torch_file(ckpt_path, safe_load=True)
      File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 14, in load_torch_file
        sd = safetensors.torch.load_file(ckpt, device=device.type)
      File "C:\Users\(USER)\OneDrive\Desktop\SDXL\ComfyUI_windows_portable\python_embeded\Lib\site-packages\safetensors\torch.py", line 311, in load_file
        with safe_open(filename, framework="pt", device=device) as f:
    🤷‍♂

  • @The_Python_Turtle • 1 month ago

    Awesome video! Do you mind sharing which AI voice you are using? It sounds great.

  • @britebay • 1 month ago

    You always provide the most accurate and relevant information. Thank you!

  • @luxed3583 • 1 month ago

    I encountered this issue when trying to load the test preprocessors: "When loading the graph, the following node types were not found: PiDiNetPreprocessor, ColorPreprocessor, CannyEdgePreprocessor, SAMPreprocessor, DWPreprocessor, BinaryPreprocessor... Nodes that have failed to load will show as red on the graph." How do I fix this?

  • @gu9838 • 1 month ago

    If I can get this to work offline with decent TTS, I'd be so happy. I tried some of the cloud options and none are working the way I want. Dopple updated their thing, which broke their chat into emoting too much, lol. Having it fully offline would mean I have full control, lol. Sweet, got DeepSpeed installed lol

    • @DreamingAIChannel • 1 month ago

      Well, I think the only effective way to get the best offline voice is to use Coqui + RVC, BUT for now I have avoided implementing the RVC part, because waiting for Coqui plus waiting for RVC makes things too slow. In the next video I will explain the Coqui part anyway!

  • @britebay • 1 month ago

    Great tutorial! Thank you!

  • @ismgroov4094 • 1 month ago

    Thx

  • @arothmanmusic • 1 month ago

    My workflow appears to be the same as yours, but when I run the prompt the masked area doesn't change at all. What might I be missing?

  • @LucidFirAI • 1 month ago

    I tried this, and some other tutorials, for fixing hands. The hands remain garbled. Please help?

    • @Make_a_Splash • 1 month ago

      Hands are a different story. Maybe the model you are using is not good with hands. Try a different model or a Lora to help with the hands.

  • @mahilkr • 1 month ago

    That was a great explanation. How can I go about inpainting using two images: the original image and the mask data?