Thanks to our sponsor Wondershare Virbo. Try their AI Avatar Video Generator on your computer, cell phone, or even online now: bit.ly/3BhzSwk
How do you download DiffSensei?
Trellis 3D models are insanely good looking
Need to see the topology of them though.
It only allows one image as reference, so more specific models such as people don't come out as well.
@@Allplussomeminus Manual clean up will be necessary, just like with 3d scans, sadly.
@@Allplussomeminus
Where are you going after you die?
What happens next? Have you ever thought about that?
Repent today and give your life to Jesus Christ to obtain eternal salvation. Tomorrow may be too late, my brethren 😢.
Hebrews 9:27 says, "And as it is appointed unto men once to die, but after this the judgment."
3D modelers will need to look for a new career soon
I love that you feature open-source apps. Definitely earned my sub. Thanks.
thanks for the sub!
It's important to note that Gemini 2.0 Flash's "Stream Realtime" in the AI Studio is not currently natively multimodal. It's still doing the old style "speech transcription -> text-to-text LLM -> text-to-speech" pipeline. You can confirm this by trying to get it to alter its voice or tone in any way, which it will be unable to do, or by asking it to listen to non-speech sounds and seeing that it won't even *send* the audio to the model that way. (And if you include non-speech sounds with some speech, it can't hear the non-speech bits at all.) I suspect it'll be updated in 2025 when their announcement video said public rollout of multimodal image chat would happen.
thanks for sharing!
It could be natively multimodal, but with its training data coming from Google's TTS and STT systems. So they get the latency and speed of one model while having similar limitations to TTS and STT.
@@Bolidoo No, but it literally does not even send your audio unless you have words in it.
@IceMetalPunk Well, a good quick system like that is needed no matter what, because the intent is for the AI to stop talking only when the user speaks, not because of random sounds.
It can't be just TTS, because you can ask it to tell a long story and it starts to talk very quickly, just like with any other prompt. I don't think the most probable explanation is that it generated the text that fast; rather, when you hear the first word, the generation hasn't finished yet.
@Bolidoo TTS models can also stream their output and take streaming text input. Low latency doesn't imply native modality.
It can speak faster for longer text, but it can't modulate the speech audio in any way. Even if you ask it to and it says it will, it doesn't. That is a huge indication (along with the rest) that what the model says and what audio is produced are happening in two separate places.
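For anyone curious what that cascaded setup looks like in practice, here's a minimal sketch of the "STT -> text LLM -> TTS" pipeline described in this thread. Every function is a hypothetical stub (this is not Google's code); the point is just that streaming each stage keeps latency low even though the LLM only ever sees text, which is consistent with both the fast long-story playback and the inability to hear non-speech sounds.

```python
# Minimal sketch of a cascaded voice pipeline (hypothetical stubs, not
# Google's actual implementation). Streaming each stage explains the low
# latency without the LLM ever seeing raw audio.

def speech_to_text(audio_chunks):
    # Stub transcription: only recognized words survive this stage,
    # which is why non-speech sounds would never reach the model.
    return "tell me a long story"

def llm_generate_stream(prompt):
    # Stub streaming LLM: yields tokens one at a time.
    for word in "Once upon a time . There was a model . The end .".split():
        yield word + " "

def text_to_speech(sentence):
    # Stub TTS: a real system would return synthesized audio.
    return f"<audio: {sentence.strip()}>"

def cascaded_voice_turn(audio_chunks):
    transcript = speech_to_text(audio_chunks)
    sentence = ""
    for token in llm_generate_stream(transcript):
        sentence += token
        if token.strip() in {".", "!", "?"}:  # crude sentence boundary
            # TTS starts on the first finished sentence, so playback can
            # begin long before text generation completes.
            print(text_to_speech(sentence))
            sentence = ""
    if sentence:
        print(text_to_speech(sentence))

cascaded_voice_turn(audio_chunks=[])
```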
The Gemini 2.0 prompt-based image adjustment is not working for me.
Am I correct in assuming that the car image editing example is not your own video?
Funny how Sora came after so much hyping and still wasn't good enough to satisfy most of us. Truly pathetic.
Oh yeah with the price tag😂
Wow, the open-source Hunyuan beats Sora for many things and you can run it locally. If I was OpenAI (or even the makers of Flux, Black Forest Labs) I would be very worried
I tried Sora... and it's total garbage
Yes I can't get noods
@@bigdaddy5303are you trying to blackmail someone?
AMAZING!!!!!!!!
When Trellis video?
👀
😃
Nice to seee you here! I hope you also make a Trellis video in your channel soon! :D
@@juanjesusligero391 maybe ... maybe...
Gemini 2.0 looks good. The Hunyuan open-source model is very exciting too
Gemini 2 is junk. I had a conversation (argument) with Gemini and it admitted that it is NOT capable of generating original content or ideas and is only able to draw on existing content from the web. No real difference from previous versions other than some more intuitive interactions. NOT worthy of the AI label, just an advanced web search engine.
@@orac229 😂 you're really funny
@@orac229 user error: please replace user
@@orac229
To everyone in this chat, *Jesus is calling you today!*
Repent and turn away from your sins today to obtain eternal salvation🤗
I tried using Gemini 2.0 and it doesn't edit pictures. I tried many times, and every time the response was that it can't do it.
At 14:32 you demonstrate the use of Gemini 2.0 Flash for prompt-based image editing. I tried to do the same, but Gemini responded "I understand your request, and I appreciate you wanting to push the boundaries of what I can do. However, I must reiterate that I cannot directly manipulate the image file." How did you get it to work?
Same here. I assume he's on a paid plan.
As long as these are free because none of these are really worth a subscription but thanks for not over hyping them and actually showing the functions with a very thorough presentation. Super honest.
Just wanted to say thank you for these weekly videos. I love watching them on Sunday mornings!
You are very welcome!
Truly amazing!! I can't wait for proper open source multimodal.
Wild, man. Amazing.
As a game-making guy, Trellis is mind-blowingly amazing. And I can't believe it's free right now as well. Total game changer.
That automatic model generator looks fun. I'm excited!
Excellent post, thanks for sharing this to us.
you're welcome!
Yes ❤ Finally I can almost skip the sculpting part in my workflow for creating 3D characters
How did you get it to generate images? I've tried numerous times and it doesn't do it, let alone edit images.
Me too, it doesn't work.
I guess it's not rolled out yet
it might not be available for everyone yet
He did not; that clip is from Google's official YouTube.
@@highlinemotorsports653 oh lame
30:10 "Everything looks SUPER consistent": Railings traveling inside of the train window.
You can't even begin to imagine the rage and hatred you will feel towards the younger generation, because of their attitude towards the absence of disease and death from old age as if it were the norm, while you lived through those difficult times.
That's how old people over 29 are now irritated by the complaints of 25-year-old children about the speed of the internet, the need to work a lot, their feelings not being taken into account, or the low quality of AI.
@aggressiveaegyo7679 ?? Bro are you an AI bot? Maybe you replied in another comment by mistake. Cheers.
lol, no 3D artist would call those "insane 3D models" since they're impractical. But it is interesting; I'll give it a try for base meshes.
possible background assets with a remesh
Of course no 3D artist will call them that, because there will be none.
Don't get the wrong idea; I mean 3D artist workflows will be changing so much that "3D" won't be the main word anymore.
🎉 AI Artists 🎉
@rudrakpatra3331 AI has been used for years, btw, or, you know, whatever collection of if-else statements people call AI. It's the generative aspect we're referring to. Lol, I welcome new tools. In fact, give us more for UV and rigging.
Insane compared to previous 3D AI modeling.
"It does understand the physics of real world things..."
He said as the AI video showed the window frame on the inside of the train scroll past like it was part of the landscape. Whoops.
1:52 I don’t want the AI to “predict” what’s behind the object though. I want more control, so I can instruct the AI on exactly what the back of the 3D object should look like. At the very least, I want the ability to use AI to edit the back of the 3D model so I can make it into exactly what I’m looking for.
Additionally, it would be cool if we could use AI to automatically rig the 3D characters and objects we generate.
That's the thing: if it can imagine it without your input, it can surely imagine it with just your text input, as you wish, master.
Baby steps; researchers are perfecting prediction first, before you will be able to edit it. If it weren't able to predict, even editing would be difficult.
Surely it'd be better for it to create the model using multiple photo angles of the real object rather than a text input.
Could you please add a Turkish subtitle option? I am very interested in the videos, but I have difficulty watching them because my English is not good.
Just tested Trellis. It's getting there but def still a way to go before this is really usable. The examples look nice but custom images are 3/10 at best from what I tried
Does it not allow you to input multiple images of an object to get a better result?
To be able to generate that audio that accurately is incredible
I am amazed by MMAudio
That video audio model is absolutely nuts... Wtf. It seems like not only music producers such as myself are in danger, but sound designers too. It can reliably get realistic audio on clips... It's only a matter of time till it'll be able to add creative touches and more cinematic sound design to clips, real or animated...
How did you do harmonic play?
Amazing to see all this new stuff coming so frequently.
Thanks for the content.
you're welcome!
Okay I'll check out Trellis. You suckered me in. It looks amazing.
Yeah that one is legit good looking.
I KNEW YOU WOULD DO A VID NOW (4 am for me)
😃😃😃
Those warping Sora videos are comedy gold in the right hands.
I'm never going to be tired of this guy saying:
"This is what you get"
The Sora Yoga reminds me of Terminator II :)
Thanks 😊
You're welcome!
Although I have GPT Plus, Sora doesn't work yet in France; it seems there's a delay. The same thing happened when they introduced the new expressive voices in GPT Plus, we got them with some delay. I appreciate all your videos, especially the experimental open-source tools you present, interesting! Thanks for all the content!
Last step!!! Here we go!!! 🥳🎉
it's really crazy how nobody is talking about the book Nifestixo The Hidden Path to Manifesting Financial Power, it changed my life
Hail Hunyuan!!😂😂😂 what an open source man!
27:30 How do you start using that tool? There's nothing on the website but demos of it.
Exactly. How can we use the demo or download it locally if there are no working links to do so? Please help.
Great and informative video, keep it up. Subscribed!
Thanks for the sub!
Are there any photo-composition AI apps? For example, if I want to take a photographed portrait of a real model dressed up as an alien on a white backdrop and put them on a different planet? Is SwiftEdit able to do that? Or any other AI tools you know of?
When I try the Gemini image editing, it gives me non-working imgur links??
I don't understand where you did the swift edit demo. Can you please explain?
Another great video Swift Edit Looks amazing.👍🏼
thanks!
was that Raygun dancing? Amazing!
The part of the video where you explain Gemini 2.0 image manipulation is not clear, because I tried it and got the answer "I cannot directly manipulate image pixels"... Is there some tweak? Please, what exactly did you set in the car image? Thank you...
That 3D modeler is cool; I kind of wish I had a use for it. Imagine making a cool creature using AI image generation and tweaking it until you get it just right, then using that image to make a 3D model to be used in your animations, movie, game, or whatever. Wow. Indie developers will be making cool niche games that big developers are too scared to make, while producing graphics far more efficiently; even if not AAA quality, the free time and money saved to focus purely on gameplay is huge.
It's very far from usable for animations/movies/games or anything animated. The output is pretty bad and the topology is unusable; you can't rig it and animate. At best you can use it as a static object not too close to the camera, or as a reference for modeling.
3D modeller here. The mesh is its greatest weakness. You need to rework it entirely, and at that point, you could just as well build it by hand.
Don't get me wrong, it is impressive, and I am sure at some point it will have AI retopo as well, making it usable in a professional setting.
@@Langenbacher As a 3D artist I agree. What I don't understand, though, is why people care about generating a 3D model to animate and render in, say, Blender, when you could just use an AI image/video generator and skip CGI entirely. VFX studios are already starting to invest in AI-generated VFX over CGI.
“It’d be cool to have a dream where people like you, I’m so lame I can’t even imagine being cool”
@@Langenbacher Yeah, I agree it's likely not in a super usable state yet, but the fact it's this good in essentially a 1.0 is amazing. Doesn't Blender and the like have ways of optimizing models too, like reducing unnecessary faces while losing as little definition as possible? I am sure this will be figured out in the future. I did boot it up and played with it for a moment, and I was shocked at how it makes a model in only seconds, even if I bump the steps all the way up, but I did notice it failed to transfer a lot of details. But again, it's a 1.0; surely a better version that takes its time getting all the details right, and can then be imported and cleaned up in a program like Blender, could eventually be used for something. I dunno. It's all still so early in this AI tech.
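On the Blender question: yes, Blender's Decimate modifier can cut the face count of a generated mesh while roughly preserving its shape (it won't fix topology for rigging, though; that still needs retopology, as noted above). A minimal bpy sketch, assuming the imported mesh is the active object:

```python
# Run inside Blender (Scripting tab). Reduces the face count of the
# active object -- e.g. an imported AI-generated mesh -- while trying
# to preserve its overall shape.
import bpy

obj = bpy.context.active_object                    # assumes the mesh is selected
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.2                                    # keep roughly 20% of the faces
bpy.ops.object.modifier_apply(modifier=mod.name)
print(f"Faces after decimation: {len(obj.data.polygons)}")
```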
Its always huge news and it always disappoints
Where can I try SwiftEdit? Can't find a demo or code.
Is there an AI you can recommend as best for image-to-image interior design that won't change the building shape or camera angle?
the 3d model generator looks awesomeeeeee
I get the YT enthusiasm, but some of the news is starting to sound like commercials. For example, DiffSensei missed the task in several areas at 23:10, where everything is wrong, but you ignore it.
Live AI looks so fire! But I don't like the idea of them being our 'Assistants'. How many more days until we can befriend & work together with a fully conscious AI? 🔥🔥🔥
MMAudio is... spot on!
(for an AI, anyways)
Can the diffsensei create doujinshi? 😅
Trellis 3D is insanely good !!
Best AI tool so far after GPT.
yes 😃
It's cool, but you can't animate them unless you bring them into a 3D program and rig/animate everything. 3D generators are cool, but if the end goal is just to make digital media, why not just use AI image/video generators?
Google AI Studio doesn't host its version correctly, so I can't get it to modify a photo like your examples show. I'm not sure what I'm doing wrong. It is trying to use imgur for hosting, and when I use the link it just says image not found.
image generation might not be released to everyone yet
How do you use the Gemini AI image feature, though? It never produces an image for me.
I just tried AI Studio with Gemini 2.0 and it's refusing to produce images? It says it's just an LLM and can't generate images? Am I missing something?
image generation might not be released to everyone yet
Image output is not available on Gemini yet. Such a bummer.
nice! thank you
Awesome updates as usual. Can you do a video on how to install the Manga Creator? That looks tasty!
Wassup bro, I was waiting for your videos!
hi!
0:26 Last year I commented on a SillyTavern video that we'd be able to do fully consistent AI comics before the end of 2024.
This is far from the only prediction I've gotten right to the month in the last few years.
If you're interested in the equation I am using:
go back to your 2020 self, just before this AI craze started, and think about when you would have believed all this madness would happen. Then divide the time frame by 10.
dividing by 10 is wild
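To make the commenter's divide-by-ten rule concrete (the numbers below are illustrative, not theirs):

```python
# The forecasting heuristic from the comment above: take your pre-2020
# intuition for how many years away a capability felt, divide by 10.
def revised_eta(base_year, naive_years_away, speedup=10):
    return base_year + naive_years_away / speedup

# If in 2020 you'd have guessed consistent AI comics were 40 years out:
print(revised_eta(2020, 40))  # -> 2024.0
```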
Going to try Trellis on Hugging Face, but this is cool. Love FOSS.
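If you'd rather script the Hugging Face demo than click through it, gradio_client can drive a Space programmatically. A hedged sketch: the Space ID and endpoint name below are assumptions, so run view_api() first to see the real signatures.

```python
# Sketch of driving the Trellis demo Space with gradio_client.
# The Space ID and endpoint name are assumptions -- verify with view_api().
from gradio_client import Client

client = Client("JeffreyXiang/TRELLIS")  # assumed Space ID; check the actual demo page
client.view_api()                        # lists the real endpoints and parameters
# result = client.predict("creature.png", api_name="/image_to_3d")  # hypothetical call
```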
Thank you for the great video
you're welcome!
amazing work, keep up yo
Thanks!
7:13 - WHOA, that's what I've been imagining we would get in the future (real time diffused environments affected by mouse clicks and movements ETC) and it's already a reality right now what the heck, incredible how fast the AI field progresses.
The Gemini 2.0 image example you've shown doesn't work.
Gemini 2 tells me it can't generate images, and every time I try it doesn't actually generate anything.
When you show your camera to Gemini 2.0, you only need to show it for like 1 second, as it has memory. This means even if you are not currently showing the Chinese label, you can still ask questions about it, as it already saw it. 🙂
Time limit is 10 minutes, I heard
@@yoagcur That is correct. 10 minutes only. Lets revisit this a year from now and see what progress we get. 🙂
@@senju2024 Let's hope it becomes indefinite. I would go round my house filming everything, as I'm always forgetting where I put stuff.
Idk why, but my Gemini 2.0 can't edit pictures; it starts to draw in code windows or sends me imgur links... wtf
Finally the future is here. Indie films and game development are going to be wild.
Topology is still bad. We have a way to go before models are usable in a game engine but it’s promising.
@ Topogun. There are a lot of remeshing tools too. And you can also just use it as a better reference to redesign it yourself, treating it as a proof of concept.
When do you have time to test all these neural networks?😅
The polygon mesh and UV: 💀💀💀💀💀💀💀💀💀💀💀💀💀💀
Is this dude also AI? Bro be saying the exact same thing like 20 times...
Hey all, I'm really interested in the Gemini image editing capabilities. I need to know if that's a premium feature; I tested it and it didn't work, so I want to know if a subscription is worth it. Please excuse my English! Have a nice day!
MMAudio seems really cool and innovative; that's one tool we will need soon for our AI video making. Ahead of the game, kudos to those guys and gals. And with Sora, soon we won't be disappointed by shows like The Lord of the Rings and Marvel's Endgame fully and completely skipping the giant wars they've led up to.
The 3D model one is gonna be great for 3D-printing mini figures.
So many 3D modelers will lose their jobs; did not expect that so soon.
When asking Gemini 2 to create a video, it instantly gives an expired sharing-website URL.
Can you please explain how you got the Gemini 2.0 native image output model? I've been sitting here for half an hour on this thing, and it's telling me up and down that it can't do this at all. You also didn't say how to activate it in your video. Seems like you left it out for a reason..... Is Google paying you?
Yep, it doesn't work
image generation might not be released to everyone yet
we're living in the singularity
Amazing!
I think it's worth noting that Sora isn't out in UK and EU yet, the same thing happened with Advanced Audio.
Gemini 2.0 shows me only the HTML code of the image location instead of the actual image.
33:33 What I'm seeing is Hunyuan trained on more video of smoggy backgrounds, mukbangs, crying mothers, babies/dogs/cats thrown into rivers and boiling water, and knife fights. I bet it also generates convincing car accidents, industrial explosions, and shocking human rights abuses too!
Source: LiveLeak China CCTV
To be fair, we could say Sora is really well trained to generate characters shooting guns, from US media.
Half of the comments here are about services not working at this moment, yet in the video they are presented as ready to use. Please include disclaimers that YOU may have access to features but others might not.
I was greatly impressed by Gemini 2.0 flash
Trellis and SwiftEdit look very promising. Hope the latter doesn't get the same X-Portrait 2 treatment...
What do y'all think of Nvidia Meshtron? (It was revealed yesterday.)
I wonder if motion prompting can lead to advances in the field of video compression
best video brother so worthy 👍
Thank you so much 😀
Sora is something in high-action scenes; I laughed so hard.
Can't generate an image in Gemini 2.0 Flash.
that feature might not be out yet
Picture motion looks amazing; hope they will open-source it.
I hope so too
Thank you
Brick of meta Verse on the Way
Thanks for the video. There are a lot of AI tools that I want to try out this week from your video. If it's OK with you, could you give a tutorial on how to install DiffSensei on a PC/laptop? Thanks for your videos, they really help a lot with discovering new AI tools.