Thank you for being honest about the ones that look weird. For some reason, a lot of YouTubers are applauding these crappy-looking pictures and videos??? I like the honesty
No problem!!
If only he had been honest about how slow and/or inefficient this process is, even on a 4090. Look at the stats in his video at 30:30: almost 4 secs per iteration, and the process uses 12x40 iterations per run... that's 1920 secs, or 32 minutes, for a 7-sec video (sample audio 1). Imagine you want a presenter for a standard 15-minute video. It would take 68 HOURS to generate: almost 3 days. On the cloud service he suggests, that's 54 USD. You can see that this can mount up pretty fast, and that's without even counting re-renders due to mistakes or whatever.

So this is the problem with channels that want to hype and bring enthusiasm to some technology: they are too biased. 30:30 is the part where he minimally mentions this core problem. This is NOT personal computing: it belongs in huge datacenters consuming hundreds of thousands of watts, where developers can manipulate the parameters and see results in real time, the way Microsoft demoed its technology. Imagine: to make it real time, you would need the equivalent computing capacity of 275 of those RTX 4090s, plus the infrastructure to keep them running. That's like 137,500 watts, or a 90-horsepower, 1.8 L SOHC engine running at full capacity and burning some 9 gal of gasoline per hour, just so you can have your little window with a face moving to the dance of your mouse and keyboard.
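The arithmetic in that comment can be checked with a quick sketch. The inputs are the figures quoted above (4 s/iteration, 12x40 iterations per 7-second clip, roughly $0.80/hour for the suggested cloud GPU); they are the commenter's numbers, not measured values.

```python
# Back-of-envelope render time and cloud cost, using the figures
# quoted in the comment above (assumptions, not measurements).
SECS_PER_ITER = 4
ITERS_PER_CLIP = 12 * 40          # 480 iterations per 7 s of video
CLIP_SECONDS = 7
CLOUD_RATE_USD_PER_HR = 0.80

def render_seconds(video_seconds: float) -> float:
    """Wall-clock seconds needed to render `video_seconds` of output."""
    clips = video_seconds / CLIP_SECONDS
    return clips * ITERS_PER_CLIP * SECS_PER_ITER

def cloud_cost(video_seconds: float) -> float:
    """Cloud cost in USD at the hourly rate above."""
    return render_seconds(video_seconds) / 3600 * CLOUD_RATE_USD_PER_HR

print(render_seconds(7) / 60)          # 32.0 (minutes for a 7 s clip)
print(render_seconds(15 * 60) / 3600)  # ~68.6 (hours for a 15-min video)
print(cloud_cost(15 * 60))             # ~54.9 (USD)
```

So the 32-minute and 68-hour figures are internally consistent with the stated per-iteration speed.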
@@I.Am.Nobody Yeah, for sure, as there is a lot of vested interest in saving costs. I personally have a friend who worked for a couple of years on optimizing AI tensors for the current NVIDIA architecture, using specific numeric types that could best be processed by these new units... but it IS going to take a lot of time, and we should be real about that.
In a sequence that feels natural:
VASA>EMO>Hallo>Hedra
Him: "This is a GPU that you and I could easily buy..." (RTX 4090)
Me: .........
Because coders only use the latest technologies available, without any prior optimization. For example, there's an optimized version of Stable Diffusion that can use older video cards, not just the latest ones, and the result is the same as with newer cards. Of course it's faster with newer cards, but the result remains the same.
Would you rather buy a 4090 for 1.8K USD or a H100 for 28k USD?
@@pixobit5882 Neither. My 1080 is just fine until I upgrade my PC.
What an absolute waste of time. Reminds me of the initial VR hype.
You can easily buy it if you have the money.
@@pixobit5882 I bought a 3090, which has the same 24GB of VRAM and a lot cheaper
"realistic" you haven't seen the monsters i have created
lol is it some good nightmare fuel?
@@theAIsearch yeh, don't add people with teeth showing
@@theAIsearch Can I install Hallo on an external SSD?
Yeah… the monsters I created just turned 21 & 18 👹🧌😳
@@nathanielthomas7718 Ahh, you downloaded pregnancy by mistake.
Thanks for the nightmares on that Hugging Face anime one! 😂
Loved it. I’m trying it to troll my friends tonight after work.
Have fun!
AI's growing really fast!
😃
Yes, but the rendering is still pretty robotic and emotionless.
Like a malignant tumor.
delete this =D
it's not there, maybe in another 80 years lol
it cant do hands and feet
Nice!! Agents will have a generated face image, but this avatar will be animated not by simple user prompts but in real time, with zero latency, by an LLM behind the avatar. Those same *mimic output signals* can be used to feed the *input PWM signals* of electric motors to animate a fully autonomous robotic physical-world version, with these physical avatars performing as puppets of these artificial neural networks.
That's the *last stage* of this tech during the *AGI Incarnation World Event* of 2028... and so the Ray Kurzweil Singularity begins.
Suspect: "I didn't do it."
Investigator: "We have video evidence."
a full confession
Sounds amazing. This combined with some nice embeddings could make a great conversational bot. Over time you could generate more answers to similar questions and make the conversation sound a lot more natural. I am currently building a coding course, and having an AI tutor that actually looks like a human to ask questions would be great.
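The embeddings idea above can be sketched in a few lines. Note this is a toy: `embed()` here is a deterministic bag-of-words hash standing in for a real sentence-embedding model, and the `faq` entries are made-up examples.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def embed(text, dims=32):
    """Toy stand-in for a real embedding model: a deterministic
    bag-of-words hash into `dims` buckets."""
    v = [0.0] * dims
    for word in text.lower().split():
        v[sum(ord(c) for c in word) % dims] += 1.0
    return v

# Stored question -> canned answer; the closest stored question wins.
faq = {
    "how do i declare a variable": "Use `let` or `const`...",
    "what is a for loop": "A loop repeats a block of code...",
}

def answer(question):
    q = embed(question)
    best = max(faq, key=lambda k: cosine(embed(k), q))
    return faq[best]

print(answer("how do I declare a variable"))  # Use `let` or `const`...
```

A real tutor bot would swap `embed()` for a proper embedding model and grow the `faq` store over time, exactly as described above.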
At the beginning of the video you should have said it will cost 80c/hr to use, and I would have skipped the video and saved myself 15 minutes.
I have followed the instructions and started a test, but had to interrupt it. The time this takes to produce video is so long, even on top-end hardware, that it is not practical for production. This should be mentioned in the video. Also, renting a computer on the cloud, even at 8 cents per hour or whatever, would amount to quite a sum for even a modest production, and would take too long. This is just not practical, except for producing a video for the views and getting revenue from the ads. Will not subscribe.
😅
Is there any model for ComfyUI or other software that at least syncs the lips and a little of the face or eyes in less time?
I am searching for a model where I give an image + audio file and it gives me a video.
Local PC.
Man now things are really starting to pick up.
indeed!
Imagine if they were open source and all worked together.
Hey, I'm a new sub! Loved your last vid, watching this one now!
Awesome! Thank you!
I think I'll give it another 5 months, then look back at how you said "it's the most realistic" and then be like "That looked like shit" lmao
yep, that's the world of AI
OMG Harry Potter talking portraits here we gooooo
😃
Finally I can imagine my father and sister as if they were alive again 😭😭
Interesting use case 🥲
If you have voice recordings of them you can use other ai tool to generate clips of them talking to use with their photos
@@Agret I'm glad I have them saved, long voice notes I kept when he was in the hospital. I can't believe they're worth more than I knew.
wait until we get the deep fake mother in laws
@@danielornstein300 Damn, that's deep. Maybe in the future you can have a robot version of your dad.
Awesome! Do they have a version where you can input a video rather than a photo? Thanks!
not that I know of. will keep you posted
As for "laughing and coughing" @21:05: they did say to use "English only", so we shouldn't be surprised that it failed here.
Looks like an interesting alternative to Hedra. I like that it is more configurable. Future models should maybe have an option to describe the character and take this into account when doing the animation. For example age, or if it's an anime or some kind of doll.
that'd be cool
Im not really sure if i should be happy or terrified that this is closed source. Those who have full access can do insanely good deepfakes. Propaganda will reach a whole new level.
this tech can definitely be dangerous. imagine all the forged evidence it can create
I went into this panic in February 2023. I realized the internet is at risk of dying. I archived lots of movies, documentaries, and lectures, went on eBay and ordered encyclopedias for all fields, just in case. It helped me be more relaxed working with AI and accepting the journey.
@@theAIsearch I think in court it will probably not be too hard to figure out if it's AI generated, by professionals who can examine it
(but who knows, maybe people will find a way to make it hard to prove, we'll have to wait and see)
But it will have huge consequences on social media in general.
Most people already don't read beyond the headlines.
So even if it is debunked as AI generated, the damage will already be done.
So yeah. The potential of this tech is terrifying.
It's obvious: People just need to learn that they can't trust such videos.
1. The tech is in early stages and developing very quickly. Eventually it will not be obvious at all
2. Many are already fooled by obvious false social media posts.
Man, I installed everything and followed every instruction, but when trying to make a video, it takes a very long time to render. I use an RTX 3080 with 8GB of GDDR6 VRAM. What can I do to speed up the rendering? Right now it takes like 7 hours to make a video. Help me please.
What did you do to make it faster?
Do you know of any model for a local PC that syncs lips with an audio file?
During the installation, a red error message appears that says:
ERROR: Package 'hallo' requires a different Python: 3.9.5 not in '=3.10'
Although I checked the Python version, and it is 3.10.11... why is that?
Never mind, I found a portable file and it doesn't require any installation. Just download, unpack, update then run.
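For anyone hitting the same mismatch: pip was almost certainly running under a different interpreter (e.g. an old 3.9 environment) than the one checked at the prompt. A quick, illustrative way to see which interpreter is actually active and whether it satisfies a "Python ==3.10" pin:

```python
import sys

def interpreter_info():
    """Report which interpreter is actually running this code."""
    return sys.executable, sys.version_info[:3]

def meets_requirement(version, required=(3, 10)):
    """True if `version` satisfies a 'Python ==3.10.*'-style pin."""
    return version[:2] == required

exe, ver = interpreter_info()
print(exe)                      # path of the active interpreter
print(meets_requirement(ver))   # does it match Python 3.10?
```

Run this via the same interpreter pip uses (`python -m pip ...` from the same shell) to confirm the two agree.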
thanks im gonna use this for my animations
😃
This is incredible!!💗
😃
Great video! Loving the step by step screen share tutorial!
Thanks!
Great value as always
Thanks for sharing the video.
Is there an app or AI plugin to correct weird AI generated faces?
you could try using stable diffusion img2img & inpaint to fix weird parts. are you familiar with how to use stable diffusion?
@@theAIsearch Sorry no.
There should be an AI app made specifically for correcting faces and fingers. AI can make other things.
Still looks like the video has been dubbed, almost right, but still somewhat wrong
The only drawback to this technology concerns the eyes and head movements. When you're talking to someone or looking at a video camera, cell phone, etc., your eyes aren't staring fixedly at the lens the whole time, or even at a person, because of our thought process. Even our face moves when we talk. Only robots, or AI, don't do that. But coders are not psychologists anyway.
Does Hallo provide its own audio voices like Hedra, or do you have to upload all the voices yourself?
you need to upload
@@theAIsearch I see. Ok, thanks!
Can't get it to work in Comfy either, not sure why. All the nodes are green, but when I generate it just flashes to the end and does nothing.
Am I taking crazy pills or what? This thing ran for about two and a half hours, then bottomed out and said it wants to restart. Now I've just paid for 2 hours of servers?
All the fears you used to have now have been redrawn!
😃
Since AMD can now use CUDA, wouldn't AMD cards be able to be used?
it says English only but you played a demo clip with Italian audio
good point. it'll work if the language consists of mostly english syllables
not work... torch is not present for me... can you help please?
hmm it should install torch when you run install.ps1. maybe try running that again
26:56 double-clicking the .ps1 file doesn't run anything for me, it only opens Notepad and displays the file's code
what am i doing wrong?
right click and run with powershell instead of notepad
@@theAIsearch @johnangel5480
He's probably using a conda environment and that's a big issue for those who don't want to install python system-wide but rather in the conda sandbox. Search for "How can I activate a Conda environment from PowerShell?" in Stack Overflow.
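To expand on that: PowerShell can't `conda activate` anything until conda's activation hook has been added to it. Assuming a standard conda install (the environment name `hallo` below is just an example, use whatever the env is actually called), the usual one-time fix is:

```shell
# One-time setup: add conda's activation hook to PowerShell
conda init powershell

# Close and reopen PowerShell, then activate the environment
conda activate hallo      # replace 'hallo' with your env's name
python --version          # should now report the env's interpreter
```

After that, running the .ps1 from the activated shell picks up the environment's Python instead of the system one.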
Amazing. Is there any way to reduce the time by using lower settings?
My GPU is a 4070 Super 12GB; it took 10-15 min.
I've been using Vidnoz to do deepfakes of streamers and send the video to them in Discord. Never fails to get a laugh! You can make them say whatever you want. Ever wanted to be a VIP or a mod in chat? Now you can make a fake video of the streamer promoting you! KEKW
Is there a script available to set everything up on Windows for deepfakes? 🤔
How do you see the open source alternatives comparing to the copyrighted ones? Are they getting close?
I think that with open source, we could not only deal better with AI ethics ("Skynet", "Big Brother", etc.), but we could get better results, since AI needs lots of data and open source helps with accessibility = it's free.
I really appreciate your work btw :D
open source is as good or only a bit behind. eg. Hallo is as good as Hedra, which is the best closed model you can use right now. i think Meta is also releasing an open source competitor to GPT-4o which will be natively multimodal. expect open source to only be a few months behind
@@theAIsearch thank you!
Hello sir, thanks for sharing. Could you help? "The requested GPU duration (240s) is larger than the maximum allowed retry in -1 day, 23:59:59"
You've driven me nuts !
This is great for animating drawn images.. 👀
Excellent review and content !
Thanks!
6:05 watch the old guy's beard spazz out!
How do I get this to work locally with a GUI on Windows for long clips of audio? If you can figure it out, please provide a full step-by-step walkthrough. I have the hello and hello 2 portable versions but can't get them working, and Hugging Face doesn't work, or doesn't support long audio. Perhaps it can run locally but in a browser, or in a portable app. I haven't been able to figure out what I have, but it seems to be for using it locally with a GUI.
have hallo*
can you mention the minimum GPU requirement for running it locally ?
When installing with install.ps1 I get errors. It seems like things aren't installing right, but it says it's done at the bottom in PowerShell, and there is no torch even when using the venv.
no module named torch
I would love to be able to help train a model if I have lots of video for reference. I would think that could make things more realistic?
Can someone please explain something in plain English: if I increase the RAM of my computer with an external hard drive, will I be able to install Hallo?!
no, you need VRAM which is your GPU's memory. increasing just RAM won't work
If only we had this around during the peak of StarCraft! The custom mods we can make!!
😃
Now you're thinking
Debates this year are going to be wild!😅
Nobody's cooked up a little web interface for this? Having to do it all through the command prompt is very basic.
Thanks. Is there an equivalent that's available for Android phones?
unfortunately you need a gpu. or you could run it on hugging face on your phone
Hallo can be installed easily with pinokio
Extremely disturbing. Organised crime field day.
Absolutely crazy and beautiful. Imagine a utopia where there are no degenerates hungry for power or malicious intentions. Humanity, being kind, would achieve greatness in such a short time.
I am confused how this achieves that future... I just want AI gene editing and then boom, utopia... hopefully.
@Derpyzilla894 hope we figure out a way to live forever before i die!
@@Derpyzilla894 It's not about 'image' generation, you need to see further. There are many powerful models out there, probably some really close to AGI, maybe from Anthropic or OpenAI or some national research private labs. The point is, if we were in a utopia, those models would already be out there. But you can't let an AGI roam free with so many mentally unstable people, asking how to create explosives, making deepfakes of people in NSFW contexts; the list goes on. That's why many AIs are so restricted.
@@theAIsearch soulkiller from cyberpunk 2077? hahaha
The problem with Hedra is the minute limit and the face morphs.
I have CUDA version 6 with 512 MB of VRAM. Can it work on CPU only?
Probably, yes... I have an AI video face changer. On GPU it takes maybe an hour, but on CPU at least 5+ hours, so yes, it probably will, but expect a long wait.
Dude with high movement and high expression it looks way more convincing
agreed. tweaking the settings can make it very natural
I only have one question. Is this open source and free?
yes
@@theAIsearch OK, then I will take a look.
Thank you for the info:)
no problem
I can't find the links to those songs you mentioned in the video. Btw, great job.
thanks. it's in the middle of this page fudan-generative-vision.github.io/hallo/
you don't need admin prompt when doing git clone, but only to enable ps1 script execution
thanks for the tip!
You did know that TikTok already had this "effect" before all this AI hype, right?
Ugh, 10 mins with a 4090. Here I was hoping for real time.
the memes are gonna be fire this year!
😃
I don't understand why this exists or what its purpose is.
the image of the beast, yaah. off to heaven we go...
11:27 that's really the most believable
Hedra's emotions are synced better, and the lip sync is better most of the time. You're blind. Most of these examples are pretty badly synced.
Well this is terrifying.
😃
Do we REALLY need that comment under every single AI video? It's progress, embrace it.
I duplicate the space and follow all the steps and just get a red "error" message.
what's the error?
@@theAIsearch The error is literally "Error" in red writing with no explanation. This error appears in the Output Window. It goes through all the steps of building with lots of sliding bars and then the red error is the output. It gives me absolutely nothing to go on. Thanks for spending some cycles on this.
Mix this with Udio AND BOOM! MAGICO!
😃
my folder didn't come with the PowerShell .ps1 file
try to clone the repo again or download the ps1 directly from the github page
Error: This Space only works in duplicated instances.
yes, you need to duplicate it and pay for compute unfortunately. around $0.8/h
@@theAIsearch And for the PC version too?
This is both amazing, and worrying, as I wonder how this could be abused, especially in politics.
I'm sure this will be heavily abused in politics
Does not work: ModuleNotFoundError: No module named 'triton'
Reface had something similar to Hallo a couple years ago. I think they removed it.
Thanks for the video. It seems the program needs to improve its render times.
good coverage appreciate it
No problem 👍
What if they're wearing a mask?
Well, I tried this with an actual Gradio interface that someone was kind enough to send me the link to, and for right now it's just too slow. And when I say slow, I mean really slow: for a 2-minute video it was going on 5 hours when I finally shut it down. I hope at some point they optimize it, because it looks interesting.
We have gone across the uncanny valley
Pinokio has a one click script install for this.
Thanks for sharing!
Wow!
Looks very unnatural tbh. I am wondering why people are hyping this AI shit all the time.
The way the Hedra anime girl just slowly got worse 😭
lol yes
Amazing explanation, thank you so much
Can I use it on phone?
maybe if you use huggingface. you'll need to pay for compute tho
That doesn't look good for anime.
Everyone knows anime speaks at a low framerate.
I'd have to say that EMO is the best
Hm... the more I look at these new models, the more I feel like someone is definitely gonna slap a model into a bot, let it learn continuously, and voilà, we've made the first ever humanoid.
Maybe AI will take care of the Earth after wiping us from it ...
It'll do a much better job than us
We can finally make the new generation meme with sound.
😃
Hahaha, the creepy examples are funny
Hahaha, the creepy examples are so funnies
Are you italian?
it's not good with anime
"A GPU which you or I could easily buy"
... if you don't care about eating and heating for the rest of the year and want to dip into your mortgage.
lol just gotta live in your parents basement, sell your car, eat costco free samples every day, and save up for a few months