I'm so proud of you! Finally walking in the middle of the lane like a boss!
Haha, the Flux outputs have this vaguely europe city vibes-- I'm pretty sure I can't get a jaywalking ticket there!
Thank you for keeping us up to speed with these great AI tools
Thank you for dropping a comment!! Much appreciated and really, it's my pleasure! I'd be playing with all this stuff anyhow, might as well share it with you all!
I don't think Krea gets the recognition it deserves. Glad to see this. Oh, and you have a career as Anson Mount's stunt double if the whole YouTube thing goes south.
Agreed. The silent killers, in my opinion. They're SO fast at adopting new tech.
Love coming here for the straight-to-the-point, easy-to-grasp analysis, Tim. You've certainly got a potential side career teaching clear, engaging, and direct content creation! Thanks to you, I started creating AI videos a couple of months ago, and you're my first go-to for the latest news. Thank you 😊
Tim, your work/ videos have inspired me over the last few months to bring my visions to life. Awesome!
Bruh, I don't wanna spam your channel, but there's so much I can relate to after just one month of learning Stable Diffusion. AI will truly show you yourself for the very first time. I never realized how grumpy my mouth naturally looks when I'm just chilling; all my life I've heard "what's wrong?" and "you ok?" at times when I'm totally at peace. This tech has got me fixing my attitude in real life! lol
Haha, isn't that crazy? It's like having someone draw your portrait. Like, THAT'S how I look to other people?!
Since it's all generated from input data...yeah, it's not lying!
I liked you before, Tim. But after hearing you say "Only psychopaths use light mode" I think I'm in love.
so you’ve never been out in the Sun
And who looks like a psychopath in almost every character rendering?
My son and I discussed this topic yesterday. Apparently I am a psychopath! ;)
I'll qualify that. I like a dark working grey interface, but a white gallery page to view finished work.
@@robertdouble559 Tim's joke aside, after many years of exposure to both modes, I believe that usage and screen size primarily determine your preference.
If you mainly use a phone or an 8 to 10-inch laptop/tablet for general purposes, you may not develop a strong preference for either mode. However, if you read texts daily on a 32-inch monitor and have been doing so for many years, the eye strain and fatigue will begin to take their toll.
I mean, there’s a reason SWAT units use flashbangs instead of darkbangs 😉
i love krea… they’re always pushing the edge 💪
That 'you vs ghosts' segment cracked me up, it was hilarious
Fat me with the judging ghost really cracked me up!! Haha, so mean but so funny.
Finally someone made it easy.
Agreed!! While the Kling method was pretty interesting-- I mean, man-- 20 to 30 videos is a big ask. Photos? A lot easier.
I still think there is something that can be done on the T2V side-- like, if you could LoRA in other characters, plus style references for locations/settings-- I mean, there's something there.
But, currently? It's kind of a lot for T2V.
According to the subtitles, huge technical advances in KOREA!
Haha, I'll dig into the editor and try to change that! That's hilarious!
Just amazing ❤ everyday learning something new from you, keep it up man 🏆
Nice from Krea; I use it every day, but I didn't know about this. Thank you, Tim!
This is AMAZING!!! I’m currently subscribed to multiple different AI tools, but I’m seriously considering moving everything over to Krea since it offers everything I need on one platform.
You've probably always been, and it's just me realizing it late, but you're becoming one of the best, if not the best, sources of up-to-date, pertinent information in the generative AI area!!!
But I just have one question: do you ever take the time to sleep?! lol
But seriously, I hope you'll continue this impressive work, with all the research and testing behind your videos!
THUMBS UP !!!!
Haha, coffee, coffee, coffee. And when I do sleep, I just make sure to have a few podcasts running so I can keep up with everything in my dreams!
I gotta admit, Tim, I'm not even that interested in AI (certainly *_not_* in *_creating_* my own classics), but your vids are just so damn *_funny--!_*
Haha, gotta keep the comedy in to stay sane in these nutty times! Appreciate the watch!
Are you able to do lip-syncing with Kling, Minimax, or Runway via Krea? Thanks. Great information as usual.
You can on Kling-- but only on their platform. That doesn't seem to be available in any of the API versions I've seen around
@@TheoreticallyMedia Thanks. Can you suggest a low cost platform to do the lipsync in post, or would it be best just to use Kling? Looking for inexpensive options.
Another great breakdown mate, bloody good stuff! I would have liked to see some of the 2K and 4K results.
A simple idea to get character consistency:
Import a picture of your guy on a neutral background.
Prompt any good video generator to make a 360 orbit around it.
Extract the frames and re-import them to train your model.
Do you think this would work, @TheoreticallyMedia?
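For the frame-extraction step, here's a rough stdlib-only sketch of how you might plan it. The file names, the 5-second clip length, and the ffmpeg flags are my assumptions, not anything from the video: pick evenly spaced timestamps across the orbit clip, then build one ffmpeg command per frame. Actually running ffmpeg (and generating the orbit video itself) is left to you.

```python
# Hypothetical sketch of the commenter's idea: orbit a neutral-background
# portrait with a video generator, then sample evenly spaced frames as a
# training set. This only plans the extraction; it does not run ffmpeg.

def orbit_frame_times(duration_s: float, n_frames: int) -> list[float]:
    """Evenly spaced timestamps across the orbit clip, endpoints excluded
    so the (often near-duplicate) first/last frames are skipped."""
    step = duration_s / (n_frames + 1)
    return [round(step * (i + 1), 3) for i in range(n_frames)]

def ffmpeg_extract_cmds(video: str, times: list[float]) -> list[list[str]]:
    """One ffmpeg invocation per sampled frame -> frame_000.png, frame_001.png, ..."""
    return [
        ["ffmpeg", "-ss", str(t), "-i", video, "-frames:v", "1", f"frame_{i:03d}.png"]
        for i, t in enumerate(times)
    ]

times = orbit_frame_times(duration_s=5.0, n_frames=20)  # e.g. a 5 s orbit clip
cmds = ffmpeg_extract_cmds("orbit.mp4", times)
print(len(cmds), times[0], times[-1])  # 20 commands, spread across the clip
```

Whether 20 frames from a single synthetic orbit is enough variety (lighting, expression) for a good LoRA is the open question the commenter is really asking.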
You are amazing, man. Is there a way or command to generate multiple consistent characters in one scene, or across multiple different scenes, in Runway?
Good work as always! Keep it up! :))
I do like Krea, and it's definitely looking like it's going to be a big player in AI creation. Aside from that, I do like Pixverse for video generation - getting some good results with it :)
Great video Tim 😃😃
I just walked out of the pub, what happened?! 😂 Nice one Timo!
Haha, "I knew I shouldn't have ordered that last pint!"
What about if you add a cartoon-type face? Will it work?
I'll see you at the LTX meetup, Tim! Question: Have you tried Flux via Freepik? The quality is some of the best I've seen. You can ref character, style, structure similar to Leonardo and upscale within up to (I think) 4k
I have!! I actually have a big video w/ Freepik coming up when they launch their video model-- they've teased that around, so I don't think I'm spilling tea here. But yeah, loving Freepik. Crazy good platform!
@@TheoreticallyMedia Yeah, Freepik def teases video coming soon in the platform. Looking forward to your take on it all!
open blues jam session hahaha, love your self deprecating humor :)
Hello. I'd like to ask how you can set it so that the camera doesn't turn in any direction or zoom during the video. It should stay fixed on a single point instead of rotating. I ran a lot of tests, and even though I wrote that there should be no zoom or rotation, it still didn't work.
07:40 The plot twist we all needed, as Mr. Wonderful enters the scene with a 'yet to be delivered' thought
As an elderly man who uses light mode during the day and dark mode at night, I'd like to ask if there's any Discord community where people keep up with the changes. Soon I'll be working on a couple of music videos, and I'd like to know which tools work best. Things move so fast!
I don't get how Krea lets you generate video with Kling or Minimax. I looked at their subscription options and I don't see anything about video. Is it just accessing your own Kling and Minimax accounts via the Krea interface?
Are you able to download the flux models that krea create or do they have to be used on the site?
Trained on site; I don't see an option to download.
That would be something-- it would be cool to see custom models that you could upload to any of the platforms. The only one that (kinda) does it is Scenario-- where you can import Civitai models from that site, with Civitai acting as a bit of a hub for trained models.
So, not quite what you're looking for-- but the closest we have currently.
Can you move on to adapting your test to include the best option for lip syncing (for both TTS and real voices)? Excellent work!
Got one coming up next week that is mindblowingly good.
This looks perfect and totally enough. I wonder what will professional portrait photographers do in a couple of months🙃
LOL at the guitars hanging on the walls in the images. 🤣
Thanks so much, Tim! In this crazy time of Black Friday offers galore... Kling mentioned that training is only available on Pro and up, which might be the same for Krea. (Even OpenArt has a pretty good training and video setup [maybe even better than Krea], but only for paid users.) But after seeing your video, I think it's still cheaper to train yourself or a character on Replicate, Fal.ai, etc. for Flux dev, and then run that perfected image in Kling. Most of the time you save money and don't have to wait 7 minutes to see what text-to-video has cooked up. Also, Vidu is creeping up on everybody, if it can just stay stable enough and boost its quality. AWESOME job again, Tim! I think my money might be able to sleep soundly in my wallet this Black Friday LOL. Has anyone seen Vidu's high-res video output from the paid plans?
Ah man, I'd love to meet you in person, Tim! Unfortunately I won't be in town at that time. Where are you gonna be?
Wait. LoRAs... woods? That is a lot of coffee. You sound like me, frankly. Also: I use CC, and when you mention Krea, it keeps coming up as Korea, and I am giggling like a fool. "Ok, let's head over to Korea." Hee hee. And: those guitars in the background, and those versions of you stalking you... brilliant. THIS is the new plot for Man In The Blue Business Suit going forward! Great experiments. Keep bashing. I learn from you without having to pay through the booty for AI subs.
wow, you make a great super hero! Amazing technology, but can we trust any communication unless it's face to face now?
There are a few places I've found with Flux LoRA training that's as easy as this. Replicate is the one I have used.
Yeah, I love Replicate as well-- but you almost can't beat the API toolset that Krea has going on right now. It's crazy how fast they manage to get everything up and working.
I noticed there remains a problem with rendering hands and fingers. In some images you had six fingers, in some only four. It's becoming the first thing I look for when generating images. 😁
The hair movement of the man in the blue suit... straight out of a Brylcreem commercial haha
Haha-- I know-- I kinda wish meat-space me had hair that did that!
Krea and Kive rarely get any airtime, but they're both super handy.
Agreed-- kind of dark horses in the AI Platform wars. I try to shout them out whenever I can. I've done, probably 5 or 6 videos w/ Krea in it? I like them a lot--
Yes, we've got wolves around our house (Sweden), but we don't see them much.
In general, they seem smart enough to stay away from humans! Closest I've come in that arena is once stumbling into a bear in the woods. Thankfully, it didn't seem too interested in me-- but I'll say: you don't think you're scared of Bears until one is standing in front of you!
I think windows 10/11 default MS paint can remove backgrounds.
Are multiple trained characters in a frame possible?
I'll try it out! I only had my stupid face to work with for the video, but I'll try to find some images of someone else to train up and see what happens. Hypothetically, you should be able to do it. You just have to load both models in.
The best thing about this is that it will kill the look-at-me generation of photos, where people want to show off being in places/situations, because you'll never know if it's real or not.
1:53 My mom always told me: “distrust and stay away from the people who use the Light mode, and you will live a long life full of happiness” ... today and at my 103 years of age, I can say, that my mother was very wise
hi son
So wait: in Krea we can now animate using all of those engines without having to go to their sites? Kling, Luma, Runway, etc.? How is that possible? I wonder if you get the true engine or a lower version.
What would happen with animal heads or pixar style characters?
Should work, as long as the training images are consistent. I mean, it's still AI, so you might get some weird results here and there, but that's where you need to play with those style sliders. But yeah, technically I could take 25 images of my dog and train an AI version.
And I wouldn't even have poop to clean up!
Man, this is so funny now that I get it (fanny pack and keychain!). One of my trained models put this thuggish-looking hoodie over his head, and I'm like, whoa! That ain't me... anymore? Is it? Then I thought harder about the images I uploaded for AI training, and lo and behold, one of my old thug pics wearing a hoodie had slipped in when I uploaded. It's really interesting observing/predicting the elements that will end up in the AI design and being able to see how they got fused in!
Looking at all of this, I'm thinking we need just one more generation of improved processing power, devoted to both training and inference/rendering, to generate videos indistinguishable from scenes in a Hollywood (or at least Netflix) film. So I'd say by the time the Nvidia 5090 GPU is shipping in quantity, we'll get the first "wow, this looks like something Christopher Nolan could have filmed"!
Great video
Appreciate it!! Had fun with this one-- especially the "drinking coffee with a ghost" part-- that cracked me up.
Does someone have access code for Krea?
I'm trying to get some for you guys...
@ you are the best one!!
True! "Only psychopaths use the light mode" 😅
Amazing❤
Next, we need to figure out how to tell the AI which character in the image is which. A LoRA easily changes the features of several characters in the image to be similar, meaning the image shows several people with the same face.
Did you try pixel dojo?
You are awesome, thank you!
Hey Tim, the shootout wasn't a fair one, as the Kling model you used was not 1.5; Krea only has 1.0. The newest Kling model trounces Minimax.
The first one, for sure-- the second test (Iron Man) I actually went over to Kling to run in 1.5.
Funny enough, I think it kinda flipped, with 1.5 doing a better job than Minimax. To be fair, in the video-- in my mind-- it came out a tie. Haha.
Great! More Tims = more videos!
Does anyone know if there's an animation program that will do clips up to 30 seconds in length?
Depends. If you want a single character speaking to camera, Runway, Hedra, and LTX Studio can do that now. Multiple programs can do shot extensions to take a 5 or 10 second shot up to 30 seconds, or you can do it manually by taking the last frame of a shot and using it as the first frame of a new img2vid generation. The issue is that the more extensions you do, the more likely you are to get distortions. I'm sure this won't be the case for very long.
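The manual extension trick described above can be sketched as a loop. `generate_img2vid` here is a stub standing in for whichever service you use (Runway, Luma, etc.), not a real API; it just shows the last-frame chaining and why errors accumulate with each hop:

```python
# Sketch of "shot extension": feed the last frame of each clip back in as
# the start image for the next img2vid call until the target length is hit.

def generate_img2vid(start_frame: str, seconds: int) -> tuple[str, str]:
    """Stub: pretend to render a clip, return (clip_path, last_frame_path)."""
    clip = f"clip_from_{start_frame}_{seconds}s.mp4"
    return clip, f"last_of_{clip}.png"

def extend_shot(first_frame: str, clip_seconds: int, target_seconds: int) -> list[str]:
    clips, frame = [], first_frame
    while len(clips) * clip_seconds < target_seconds:
        clip, frame = generate_img2vid(frame, clip_seconds)
        clips.append(clip)  # each extra hop compounds drift/distortion
    return clips

print(len(extend_shot("hero_last_frame.png", clip_seconds=5, target_seconds=30)))
# 6 chained 5-second clips to reach 30 seconds
```

Six hops is a lot of compounding, which matches the warning above that longer extensions get progressively more distorted.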
@@BrentLynch-zi9uh Thanks so much for your response- it is very helpful!
@@stevedrake360 You're welcome. Seaweed claims to do over 30 seconds for other shots, but it isn't available outside of China. If you're looking to do non-speaking shots, then Runway and Luma Labs most aggressively promote shot extensions. I wouldn't be surprised if something hits 15-20 second shots by the end of this year, but as it is, the industry norm is still generally around 5-6 seconds. Kling does 10 seconds.
@@BrentLynch-zi9uh Again, thanks a bunch! By the way, in your first response you mentioned "img2vid." I'm brand new to this and don't know what that is.
Added to my fav videos; I'll take a look later. Looks like with my custom GPTs and this, I think my next video will blow everyone away :)
Totally Agree. We're about to take a BIG jump up-- and as we always say: This is the worst it will ever be!
That ghost around 7:23 is James May
Why are we talking about Lora training for Flux as if it's new? Did I miss something?
Mostly because it got a lot easier for people. Basically, for the non-Comfy crowd.
@@TheoreticallyMedia Fair enough - I think I'm just disappointed because when I read the video title I thought it was something new!
Just wow
How long do you suspect it will be before an entire movie can be made this way, with zero actors, zero sets, zero cameras, etc.?
❤❤❤❤
I think Krea is now up there. I know Midjourney is still on top, but Krea is now the 2nd-best image generator.
Krea's superpower is how quickly they integrate the latest stuff. They're so fast with things like Realtime generation, creative upscaling, and...y'know, I didn't even GET to it in this video, but they've got Ideogram in there as well. It's crazy.
@ yeah, thank you for the video, never heard about it 😄
good info
screenwriter AI v Human link?
code please 🙏
Gonna check in with them to see if I can get some for you all!
Haha so many hillaroooouz moments 🤣 thanks
THIS KREA TOOL IS VERY GREAT, I like it, but I think you have to upgrade your plan to continue generating photos of yourself.
You’ve really let yourself GHOST 👻 😂
HAHA! Chef's Kiss!!
6:12 🤣🤣
check out the full space movie ZAPPA GALAXY made with AI coming december
You're just one moustache away from becoming Jonah Jameson
Haha, the JK Simmons one I hope?! I actually interviewed him once-- like, seriously, the NICEST guy. Since we share a last name, we chatted a bunch about our family history. Turns out, we are not related. haha. Still, great dude.
If I actually COULD grow a mustache, I would-- but, sadly, I look like a 16 year old kid trying to grow facial hair...still. Sigh.
the dreaded "as a quick note i have special beta access that you don't"
Yeeeeah, I know-- I hate that. To be honest, no special treatment here-- I just got lucky being online at the right time. That said, Krea ships fast, so I expect it to be live pretty quickly. And generally, the way I look at these is a review/walkthrough where I make a bunch of mistakes and pass it along to you-- so when it does drop, you don't waste time screwing up like I did!
@@TheoreticallyMedia thank you for your service
Krea looks cool and all but I personally prefer PixelDojo.
LOL noticed you skipped PIKA
All you were missing was being Agent K.
Why not use a gray or white background on your reference? Then you likely don't have to worry too much about it picking up too much of your backgrounds.
Yeah, I mention that at some point in the video-- well, I say to knock out the backgrounds-- but, same kind of thing. Neutral backgrounds for sure!
I don't understand why it takes elements from your background into the training data. That seems like one of the dumbest things they have to fix. The AI should be able to recognize the face, include only that in the training, and exclude other objects.
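For what it's worth, this is roughly why a flat background helps: separating the subject is near-trivial when everything touching the edges is one color. A toy stdlib-only flood-fill sketch (real pipelines would use a proper segmentation/matting model; this just illustrates the idea on a grid of grayscale values):

```python
# Naive background knockout: flood-fill from the image corners and mark
# everything reachable (within a tolerance of the corner color) as background.
from collections import deque

def background_mask(img: list[list[int]], bg_tol: int = 10) -> list[list[bool]]:
    """img: 2-D grid of grayscale values. Returns True where background."""
    h, w = len(img), len(img[0])
    corner = img[0][0]
    mask = [[False] * w for _ in range(h)]
    q = deque([(0, 0), (0, w - 1), (h - 1, 0), (h - 1, w - 1)])
    while q:
        y, x = q.popleft()
        if 0 <= y < h and 0 <= x < w and not mask[y][x] and abs(img[y][x] - corner) <= bg_tol:
            mask[y][x] = True
            q.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask

# 5x5 "photo": value 200 = flat background, 50 = subject in the middle
img = [[200] * 5 for _ in range(5)]
for y in (1, 2, 3):
    for x in (1, 2, 3):
        img[y][x] = 50
mask = background_mask(img)
print(sum(v for row in mask for v in row))  # 16 border pixels flagged as background
```

With a cluttered background (guitars, bookshelves), no simple rule like this works, which is why those objects leak into the trained model unless you knock them out first.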
👋 hi
Heya, Louis!! Played with Krea yet? Lots of cool toys in here!
Only psychopaths use light mode !!! A classic 🤣🤣
Not "low ranking adaption," Tim: Low-Rank Adaptation.
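To expand on the name: LoRA freezes the original weight matrix and learns only a small low-rank update, which is why training it is so cheap. A back-of-the-envelope sketch (the 1024/8 sizes are just illustrative, not from any specific model):

```python
# Low-Rank Adaptation in one line of arithmetic: instead of learning a full
# d x d update to a frozen weight matrix W, learn B (d x r) and A (r x d)
# with rank r << d, so the update is B @ A with far fewer parameters.

d, r = 1024, 8
full_update_params = d * d          # fine-tuning the whole matrix
lora_params = d * r + r * d         # the two low-rank factors
print(full_update_params, lora_params)  # 1048576 vs 16384 trainable parameters
```

That roughly 64x reduction per layer is what makes training a personal LoRA on a handful of photos practical on consumer hardware and hosted services alike.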
I think you'll be the best AI cinematic star in the new Hollywood.
I thought you'd make a new season of different cinematic art.
Keep going 😢
Constancy
Haha. Thank you! Changed it!
Man, these video results are still so bad. Too much morphing, warping and such.
We've come a long way-- but yeah, still a hike ahead. That said-- like a year ago? Pre-Sora (tease) this would have been a pipedream.
A year from now? I'm sure there will still be issues, but it's hard to imagine how much better it's going to look.
@@TheoreticallyMedia True. Do you think Sora will ever see the light of day? Meta is not releasing their text to video, but rolling it into Instagram features. Do you think Sora is headed a similar way due to legal and ethical concerns?
@@john_blues rumor has it we'll see Sora in less than 2 weeks. That's the rumor at least!
@@TheoreticallyMedia 😲😃
AI is almost there... surpassing humans... we'll be like cockroaches to them in, say, 12 months' time... 🤣🤣🤣
First
tonight you dine on the CHICKEN DINNER, because YOU are the winner!
@@TheoreticallyMedia looks like meat is back on the menu, boys!
PURELY SUPPORTIVE ALGO POSITIVE MANIPULATION COMMENT!!
This is the comment that CHANGED EVERYTHING FOREVER!
@@TheoreticallyMedia Never hurts to double up on positive algo vibes! :)
To all future AI movie makers: beautiful women aren't always "perfect," Barbie-looking. Please refer back to '80s and '90s actors before making these movies.
Do you have Asian heritage? I swear these AI video generators think you're Asian.
Haha, I actually am half Japanese. I guess the AI models somehow know that?
@TheoreticallyMedia that explains it! Nicely done AI.
I'm sorry, Minimax is good with images but extremely bad in the end. Can't wait for this month to end so I can go back to Runway. It's a beta, but you pay as if it were the biggest alpha program. Take your prices down!!! Runway got 21 videos done while Minimax did just 1. Horrible.
"You've reached your limit of 1 generation at a time with the free plan" So I guess this is untestable unless you're a paid subscriber? Shame.
Maybe? I'm not sure if the training beta has launched yet, but I'd suspect it'll be a paid-plan feature when it does.
Script A was written by a human? I'm suddenly ashamed to be a human; that script was bloody terrible! And the author runs a scriptwriting school? I guess it's true that those who can't, teach.
please upload less cringe video thumbnails, thanks
Oh, I hate them. A really big YT’er (in the finance space) once told me: the more cringe the thumbnail, the better the video performs.
…he’s not wrong.
Thumbnails are the worst part of every YT’ers day, I promise you.
Actually, can you do me a favor? What’s a thumbnail style you actually like? I’m curious as to your thoughts on what a good/non-cringe thumbnail might be.
@@TheoreticallyMedia God, it's just the stupid pog-face stuff, the MrBeast hype BS. Your thumbnails are fine without the facial expressions. Your vids are good, so when I see these thumbnails you're using, it's very dissonant, and it's a shame, because almost all AI YouTubers behave incredibly annoyingly.