Trellis 3D models are insanely good looking
Need to see the topology of them though.
It only allows one image as reference, so more specific subjects such as people don't come out as well.
@@Allplussomeminus Manual clean up will be necessary, just like with 3d scans, sadly.
@@Allplussomeminus
Where are you going after you die?
What happens next? Have you ever thought about that?
Repent today and give your life to Jesus Christ to obtain eternal salvation. Tomorrow may be too late my brethren😢.
Hebrews 9:27 says "And as it is appointed unto man once to die, but after that the judgement
3D modelers will need to look for a new career soon
Funny how Sora came after so much hyping and still wasn't good enough to satisfy most of us, truly pathetic.
Oh yeah with the price tag😂
Wow, the open-source Hunyuan beats Sora for many things and you can run it locally. If I was OpenAI (or even the makers of Flux, Black Forest Labs) I would be very worried
I tried Sora...and it's total garbage
Yes I can't get noods
@@bigdaddy5303 are you trying to blackmail someone?
Gemini 2.0 looks good. The Hunyuan open-source model is very exciting too
Gemini 2 is junk. I had a conversation with Gemini (argument) and it admitted that it is NOT capable of generating original content or ideas and is only able to draw on existing content from the web. No real difference from previous versions other than some more intuitive interactions. NOT worthy of the AI label, just an advanced web search engine
@@orac229 😂 you're really funny
@@orac229 user error: please replace user
@@orac229
To everyone in this chat, *Jesus is calling you today!*
Repent and turn away from your sins today to obtain eternal salvation🤗
I love that you feature open-source apps. Definitely earned my sub. Thanks.
thanks for the sub!
Just wanted to say thank you for these weekly videos. I love watching them on Sunday mornings!
You are very welcome!
It's important to note that Gemini 2.0 Flash's "Stream Realtime" in the AI Studio is not currently natively multimodal. It's still doing the old style "speech transcription -> text-to-text LLM -> text-to-speech" pipeline. You can confirm this by trying to get it to alter its voice or tone in any way, which it will be unable to do, or by asking it to listen to non-speech sounds and seeing that it won't even *send* the audio to the model that way. (And if you include non-speech sounds with some speech, it can't hear the non-speech bits at all.) I suspect it'll be updated in 2025 when their announcement video said public rollout of multimodal image chat would happen.
thanks for sharing!
It could be natively multimodal, but with its training data coming from Google's TTS and STT systems. So they get the latency and speed of one model, while having similar limitations to TTS and STT.
@@Bolidoo No, but it literally does not even send your audio unless you have words in it.
@IceMetalPunk well a good quick system like that is needed no matter what, because the intent is for the AI to stop talking only when the user talks, not because of random sounds.
It can't be just TTS, because you can ask it to tell a long story and it starts to talk very quickly, just like with any other prompt. I don't think the most probable explanation is that it generated the text that fast; rather, when you hear the first word, the generation hasn't finished yet.
@Bolidoo TTS models can also stream their output and take streaming text input. Low latency doesn't imply native modality.
It can speak faster for longer text, but it can't modulate the speech audio in any way. Even if you ask it to and it says it will, it doesn't. That is a huge indication (along with the rest) that what the model says and what audio is produced are happening in two separate places.
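For anyone who wants to see why the non-speech bits would vanish, here's a minimal Python sketch of the cascaded pipeline this thread is describing (all function names and data structures are made up for illustration; this is not Google's actual stack):

```python
def stt(audio_events):
    """Speech-to-text front end: transcribes speech, silently drops everything else."""
    return " ".join(e["text"] for e in audio_events if e["kind"] == "speech")

def llm(prompt):
    """Text-to-text model: only ever sees the transcript, never the raw audio."""
    return f"You said: {prompt!r}"

def tts(text):
    """Text-to-speech back end: reads the reply in one fixed voice."""
    return {"voice": "default", "text": text}

# A clip containing both speech and a non-speech sound (a dog barking)
clip = [
    {"kind": "speech", "text": "what sound is this"},
    {"kind": "sound", "text": "<dog bark>"},
]

reply = tts(llm(stt(clip)))
print(reply)
```

Because only the STT transcript ever reaches the model, the bark is invisible to it, and because the voice is fixed downstream in TTS, no prompt can modulate it — which matches the observations above.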
lol no 3D artist would call those "insane 3D models" since they're impractical. But it is interesting, I'll give it a try for base meshes
possible background assets with a remesh
I'm never going to be tired of this guy saying:
"This is what you get"
1:52 I don’t want the AI to “predict” what’s behind the object though. I want more control, so I can instruct the AI on exactly what the back of the 3D object should look like. At the very least, I want the ability to use AI to edit the back of the 3D model so I can make it into exactly what I’m looking for.
Additionally, it would be cool if we could use AI to automatically rig the 3D characters and objects we generate.
That's the thing, if it can imagine it without your input, it can surely imagine it with just your text input, as you wish, master.
Baby steps, researchers are perfecting prediction first before you will be able to edit it. If it weren't able to predict, even editing would be difficult.
Surely it'd be better for it to create the model using multiple photo angles of the real object, rather than a text input.
To be able to generate that audio that accurately is incredible
Truly amazing!! I can't wait for proper open source multimodal.
AMAZING!!!!!!!!
When Trellis video?
👀
😃
Nice to see you here! I hope you also make a Trellis video on your channel soon! :D
that 3d modeler is cool, I kind of wish I had a use for it. Imagine making a cool creature using AI image generation and tweaking it until you get it just right, then using that image to make a 3d model to be used in your animations, movie, game, or whatever. wow. Indie developers will be making cool niche games that big developers are too scared to make, while making graphics far more efficiently. Even if it's not AAA quality, the time and money saved to focus purely on gameplay is huge.
It's very far from usable for animations/movies/games or anything animated. The output is pretty bad and the topology is unusable; you can't rig it and animate it. At best you can use it as a static object not too close to the camera, or as a reference for modeling.
3D modeller here. The mesh is its greatest weakness. You need to rework it entirely, and at that point, you could just as well build it by hand.
Don't get me wrong, it is impressive, and I am sure at one point it will have ai retopo as well, making it usable in a professional setting.
@@Langenbacher As a 3d artist I agree. What I don't understand though is why people care about generating a 3D model to animate and render in, say, Blender, when you could just use an AI image/video generator and skip CGI entirely. VFX studios are already starting to invest in AI-generated VFX over CGI
“It’d be cool to have a dream where people like you, I’m so lame I can’t even imagine being cool”
@@Langenbacher Yea I agree it's likely not in a super usable state yet, but the fact it's this good in essentially version 1.0 is amazing. Doesn't Blender and the like have ways of optimizing models too? Like reducing unnecessary faces and such without losing too much definition? I am sure this will be figured out in the future. I did boot it up and played with it for a moment, and I was shocked at how it makes a model in only seconds, even if I bump the steps all the way up, but I did notice it failed to transfer a lot of details. But again, it's 1.0; surely a better version that takes its time getting all the details right, which can then be imported and cleaned up in a program like Blender, could eventually be used for something. I dunno. It's all still so early in this AI tech
The Sora Yoga reminds me of Terminator II :)
amazing to see all this new stuff coming frequently ,
thanks for the contend
you're welcome!
Yes ❤ Finally I can almost skip the sculpting part in my workflow for creating 3D characters
How did you get it to generate images? I've tried numerous times and it doesn't do it, let alone edit images
me too, doesn't work
I guess it's not rolled out yet
it might not be available for everyone yet
That video audio model is absolutely nuts.... Wtf. It seems like not only music producers such as myself are in danger, but sound designers too. It can reliably get realistic audio on clips... It's only a matter of time until it'll be able to add creative touches and more cinematic sound design to clips, real or animated..
Wild! Man. Amazing.
As a game-making guy, Trellis is mind-blowingly amazing. And can't believe it's free right now as well. Total game changer.
When you show your camera to Gemini 2.0, you only need to show it for like 1 second, as it has memory. This means even if you are not currently showing the Chinese label, you can still ask questions about it as it already saw it. 🙂
Time limit is 10 minutes, I heard
Just tested Trellis. It's getting there but def still a way to go before this is really usable. The examples look nice but custom images are 3/10 at best from what I tried
Does it not allow you to input multiple images of an object to get a better result?
30:10 "Everything looks SUPER consistent": Railings traveling inside of the train window.
You can't even begin to imagine the rage and hatred you will feel towards the younger generation, because of their attitude towards the absence of diseases and death from old age as if it were the norm, while you lived through those difficult times.
It's just like how old people over 29 are now irritated by the complaints of 25-year-old children about the speed of the Internet, the need to work a lot, their feelings not being taken into account, or the low quality of AI.
Great and informative video, keep it up. Subscribed!
Thanks for the sub!
I tried using Gemini 2.0 and it doesn't edit pictures when I use it. I tried many times and every time the response was that it can't do it
I KNEW YOU WOULD DO A VID NOW (4 am for me)
😃😃😃
The polygon mesh and UV: 💀💀💀💀💀💀💀💀💀💀💀💀💀💀
27:30 how do you start using that tool... there's nothing on the website but demos of it
Exactly. How can we use the demo or download it locally if there are no working links to do so? Please help.
Another great video Swift Edit Looks amazing.👍🏼
thanks!
How did you do harmonic play?
I don't understand where you did the swift edit demo. Can you please explain?
Sora is something in high-action scenes, I laughed so hard
I think it's worth noting that Sora isn't out in UK and EU yet, the same thing happened with Advanced Audio.
7:13 - WHOA, that's what I've been imagining we would get in the future (real time diffused environments affected by mouse clicks and movements ETC) and it's already a reality right now what the heck, incredible how fast the AI field progresses.
The part of the video where you explain Gemini 2.0 image manipulation is not clear, because I tried it and got the answer "I cannot manipulate the image pixels directly"... is there some tweak? Please, what exactly did you set in the car image? thank you...
Are there any photo composition AI apps? For example, if I want to put a photographed portrait of a real model dressed up as an alien from a white backdrop onto a different planet? Is SwiftEdit able to do that? Or any other AI sources you know of?
At 14:32 you demonstrate the use of Gemini 2.0 Flash for prompt-based image editing. I tried to do the same, but Gemini responded "I understand your request, and I appreciate you wanting to push the boundaries of what I can do. However, I must reiterate that I cannot directly manipulate the image file." How did you get it to work?
Trellis and SwiftEdit look very promising. Hope the latter doesn't get the same xportrait 2 treatment...
3D Model : Prompt -- 'Increase the size of the Melons"
2026 gonna be sick, with new AI gaining speed we're only gonna go faster and faster, to the point when AI solves DNA to give us immortality. I just hope I'm still alive at that point, even if I have to make a down payment over 500 years to pay off my immortality.
Believe in Jesus and you will have your immortality free.
There is no immortality, every living being dies, that's life pal
where can i try SwiftEdit? can't find a demo or code
Google AI Studio doesn't host its version correctly, so I can't get it to modify a photo like your examples show. I'm not sure what I'm doing wrong. It is trying to use imgur for hosting, and when I use the link it just says image not found.
image generation might not be released to everyone yet
we're living in the singularity
It's always huge news and it always disappoints
According to Chat LLM (which includes multiple chatbots), it will take about 5-10 years to reach perfection with these video models.
0:26 Last year I commented on a SillyTavern video that we'd be able to do fully consistent AI comics before the end of 2024.
This is far from the only prediction I got to the month in the last few years.
If you're interested in the equation I am using: go back to your 2020 self just before this AI craze started and think about when you would have believed all this madness would happen. Then divide the time frame by 10.
dividing by 10 is wild
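The "divide by 10" heuristic from this thread works out like this (a toy sketch; the example numbers are made up for illustration):

```python
def predicted_year(baseline_year, naive_guess_year):
    # Compress a naive pre-AI-craze timeline estimate by a factor of 10,
    # measured from the baseline year.
    return baseline_year + (naive_guess_year - baseline_year) / 10

# If in 2020 you'd have guessed "consistent AI comics by 2060" (40 years out),
# the heuristic predicts 2024 instead.
print(predicted_year(2020, 2060))
```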
Brick of meta Verse on the Way
Wassup bro, I was waiting for your videos!
hi!
OpenAI, but is it open source? No? Then Hunyuan will forever win this competition. Open source outlives subscriptions: a closed model is limited to the people who trained it, whereas open source has the potential to be improved further by the community. Of course it could also turn subscription-based in the wrong hands, when someone decides to make money off it. Hoping that Hunyuan will truly stay open source; can't wait for image-to-video for Comfy.
Last step!!! Here we go!!! 🥳🎉
"It does understand the physics of real world things..."
He said as the AI video showed the window frame on the inside of the train scroll past like it was part of the landscape. Whoops.
MMAudio seems really cool and innovative, that's one tool we will need soon for our AI video making. Ahead of the game, kudos to those guys and gals. And with Sora, soon we won't be disappointed by shows like Lord of the Rings and Marvel's Endgame fully and completely skipping the giant wars that they've led up to
MMA audio is...spot on!
(for an ai anyways)
The 3d model one is gonna be great for 3d printing mini figures
Gem 2.0 shows me only the html code of the image location instead of the actual image.
When asking Gemini 2 to create a video, it instantly gives an expired sharing-website URL.
I just tried AI Studio with Gemini 2.0 and it's refusing to produce images? It says it's just an LLM and can't generate images? Am I missing something?
image generation might not be released to everyone yet
When do you have time to test all these neural networks?😅
I get the YT enthusiasm, but some of the news starts to sound like commercials. For example, DiffSensei missed the task on several fronts at 23:10, where everything is wrong, but you ignore it.
Welp, that’s my dreams of being a professional 3D artist out the window I guess…
I learned to 3D model 15 years ago. I knew this day would come but I honestly didn't think it would come this fast!
21:47 It would be useful if we could edit each of the panels though. I’d like to have the control to decide exactly what each of the panels will look like.
how to use the gemini ai image tho, it never produces an image for me
Picture motion looks amazing, hope they will open-source it
I hope so too
I wonder if motion prompting can lead to advances in the field of video compression
I was greatly impressed by Gemini 2.0 flash
Image output is not available on gemini yet. such a bummer
Going to try Trellis on Hugging Face, but this is cool. Love FOSS.
Lol Trellis 3D is generating the best quality model
the 3d model generator is intricate so it might be good for large things, but I think Hyperhuman or whatever it's called is still the best and the first of all this kind of amazingness. this new one basically looks like a copy, but if it's free then it wins, coz Hyperhuman is about 30 per month
I can't get Sora to make simple coherent videos to save my life. I have to remake a video 30 times to get one usable video. even the simplest shit...
Can you please explain how you got the Gemini 2.0 native image output model? I've been sitting here for half an hour on this thing and it's telling me up and down that it can't do this at all. You also didn't say how to activate it in your video. Seems like you left it out for a reason..... is Google paying you?
Yep, it doesn't work
image generation might not be released to everyone yet
Trellis 3D is insanely good !!
best AI tool so far after GPT.
yes 😃
It's cool, but you can't animate them unless you bring them into a 3D program and rig/animate everything. 3D generators are cool, but if the end goal is just to make digital media, why not just use AI image/video generators?
finally the future is here. indie films and game development is going to be wild.
Topology is still bad. We have a way to go before models are usable in a game engine but it’s promising.
@ Topogun. There are a lot of remeshing tools too. And you can also just use it as a better reference to redesign it yourself, keeping it as a proof of concept.
I opened your channel page.. and apparently I was only recommended the "normal" videos; everything else you upload on AI is… something else
Yeah this is a gooner account trying to use AI for their weird fantasies.
Made in Abyss jumpscare @ 22:39
Swift edit and motion prompting are two i am most interested in trying, figures it's the ones not out yet. let us know when they are :)
Consistent-character AI is a prank nowadays, the clothes are always different. It's a joke. I really wanna see a character from a Netflix show in different positions
What do y'all think of Nvidia Meshtron? (it was revealed yesterday)
Trellis is interesting, but only as a substitute for 3D scanning. Probably most useful for generating 3D models like rocks. Maybe it would also work on faces: ask an AI to generate you 20 faces, then this converts the faces into 3D, then you project a face with good topology onto the Trellis model. But you need 16GB of RAM, unless the demo plans to stay up for a while. Something useful from AI for once, saving you time. Put it in an installer so people don't have to mess about with the console to install or run it; that's a time sink.
Can I use Gemini 2.0 and add a photorealistic 3D avatar? Please make a tutorial
i dont know of a quick & easy way yet
TLDR: SORA won the breakdance Olympics
2:02 i think this is an old printing machine
Can't generate images in Gemini 2.0 Flash
that feature might not be out yet
Raygun's cousin at the end has gone too far...🤣
Thanks for the video, there are a lot of AIs I want to try out this week from it. If it's ok with you, can you give a tutorial on how to install DiffSensei on a PC/laptop, please, if you don't mind? And thanks for your videos, they really help a lot with discovering new AI tools.
I’m sure this is like what PlayStation used to give players on the rewards program.
So many 3D modelers will lose their jobs, did not expect that so soon
I cannot get google ai studio to make images
nice! thank you
WTF!!! This morning I wake up as an expert 3D artist, an old mangaka, a filmmaker 😅 and Tony Stark 😮
🚀🚀🚀
36:04 "And it would generate a real person."
Weird Science.
why make models for a game when players in-game can generate their own model styles? that's what games are gonna look like, real-deal Ready Player One shit
Great job! I was wondering if there is an AI that creates characters with walk or jump spritesheets that can be used to make 2D games!
My first 3D prompt: Cortana naked
Ai vision has gotta be great for blind people
Hi, can you make a full tutorial on using Git and Miniconda to install AI models, how to delete AI models I installed using them, and how to update the AI models I installed using them? Sorry for the bad English
Could you please add a Turkish subtitle option? I am very interested in the videos, but I have difficulty watching them because my English is not good.
Sora finally 🔥🔥
Lol but I can't access it because of high demand 😔
are you able to create an account?
@@theAIsearch i already have one
I consider Gemini to be nothing... unless they give out the native multimodal outputs.
What does that mean?
@xenn2996 this model is nothing yet, until they actually add the native generation and also audio abilities
I am amazed by MMAudio
it's a gundam!