Kling changed the game: it's very easy to generate movement and camera moves, and it easily obeys commands. I'm very pleased, especially at the affordable 50%-off subscription cost.
agree i tested all available option and kling ai so far is the best for me:)
It's a huge step forward from even 2 months ago. The prompt control is really good
But the 'static' option is missing…
I've been playing around with it a bit and I'm impressed by what it can do. I can only imagine what will be possible in a year's time.
The prompt control is really strong, definitely much better than even a few months ago
With Pika, I had to prompt it like 50 times sometimes (I'm not exaggerating). So this is great news that it only took you a few tries!! Thank you again!
Great tips - thanks! I was impatient and bought a Gen3 subscription. Now I’m fully switched to Kling! :)
Kling is one of the best 👍. Let's see if Gen3 adds any more features like camera motion, that would help improve their tool a lot
Roles work for me (like actress, newsreader), the two known ones on the homepage under the "Commercial" setting. Facial expressions and body language differ quite a lot between countries and continents (me being "half Chinese", ha-ha, ... indicate your nationality). In movies, commercials, etc., it's not the same everywhere, but it's more similar, more international, and more specific, so it might be easier for Kling.
Kling AI generates better videos than Runway; I was convinced of this myself when I created my video.
In Gen3 I mostly get a zoom-in without any movement of the object. I can do that in a video editor, I don't need AI for that :)
Kling is pretty amazing, you can do so much with the prompt
Thank you for your blessed perception on how to extract the best from Kling.
I'm happy to help man, Kling can do some pretty impressive stuff
Thank you, exactly what I needed. Glad to see that it’s not me struggling with camera movements but it’s Kling 😂
If they added in camera motions for image-to-video, it would make their platform so much better 👍
You’re turning into the Kling King. Now I need to try it. Looks great. Thanks for all the updates and tutorials.
Thank you 💯. It's worth trying, the free version can still make some pretty impressive videos
I find it easier to describe it from a viewer's perspective, for example "high angle, close-up view shot". It brings the camera a lot closer and makes it easier to pan to the character in question.
Thanks for the tip! I will have to try that out
@@taoprompts No problem, your video really helps me out on my AI film project. PS: I'm not sure if you got the news, but body, the new AI video generator, came out. I'm currently studying it today and it's honestly awesome. I just wish they fixed the quality more, but I'm sure it will be lots of fun for you if you try it 😁👍
Another video, another banger, appreciate you man.
💯Glad you like the vids!
Kling is easily the best video generator right now!
I purchased the quarterly plan of Kling AI and all I can say is: superb.
Yeah, it's one of the best right now
The race is on, kling just needs the higher fidelity of runway and wamn
@@BabylonBaller Yes!
It's great to see so much competition, they will all push each other
Thanks for this!
You are Awesome, Brother, Thank You!! It looks like I'll go with Kling's Pro Version. I've gotten some great results with the free version. Many Thanks. 🙏🏾
Thanks man, it's nice to have the extra credits with the subscription, just way faster to experiment with different prompts 👍
Do you have any tutorials where you discuss the possibility of using lip-syncing? Great content by the way!
99% pending problem
Yes bro
Great video !!
Thank you Tao, I took the deluxe version it will be useful to me 😜🎉
That's great! I'm looking forward to see what videos you come up with on your channel
Kling takes forever to generate my videos from images, what could be the problem? It takes literally like 4 hours or more to create just one 5-second video.
That's weird, it works pretty fast for me. It could be slower if you are a free user.
Have you found any difference between prompting in English vs Chinese? I saw it recommended elsewhere to prompt in Chinese (which I don't speak, so I use Google translate). I've been using Chinese on Kling for the past week and getting good results but haven't wanted to waste credits by doing comparisons against English prompts.
I tried that for 4-5 videos and didn't notice a difference, it may be worth testing it out more though.
@@taoprompts Cool - thanks!
Can you do a video of how to put people together hugging each other . Example a picture of the same person as a child and as an adult . ❤
you save me a lot of test bro. thanks
No problem man, glad I could help.
Wow, haven't joined you for a while, congrats on 22k.
Thanks, and great to see you back 👍
Can you create a video about how to create different angles of objects in Midjourney? Like a car, for example.
I'll definitely do more tests on prompts in Kling 👍
I suggest using cinematic vocabulary, like we use when we write a script. That might work, or at least it does with Sora.
I will try that out, when they make 10s videos available for image-to-video it should give us a lot more options for scripting
Thank you again for your great tips. Do you have any suggestions on how to improve the quality of the standard video vs. professional video besides using Topaz Video AI or Krea AI?
Capcut has a free video upscaler: www.capcut.com/tools/ai-video-upscaler
It's not as good as the paid ones but it does sharpen the video a bit and is worth testing for free
@@taoprompts Thank you for the tip.
Thank you Tao! I'm pretty sure that I'm going to go with Kling AI then. I was waiting for Sora to come out, but now I think I'll get into Kling. I was using Pika... which was better than Runway just b/c of the cost vs product. Thank you again!
Glad to hear this was helpful! Btw some people have said that Kling changed the model slightly these last few days and have been having some issues with it
@@taoprompts Sorry if this response duplicates, but I am on Kling and it's generating a video for the past 2 hours. I think it's their program b/c this is a brand new computer. Thank you again!
@@YouUsAndAI I've gotten a lot of comments about how generation jobs are getting stuck these last few days. They'll probably sort that out soon, it may be that they have had too many users sign up
@@taoprompts Okay, phew, b/c I just bought a new computer and thought maybe I got a dud. lol. Thank you!
👍 thanks for tips
How come there is still that watermark in the paid version?
If you hover over the download arrow, there's an option to download without watermark
@@taoprompts thanks, I was trying to do this on mobile, but apparently yeah, it can only be done on a PC.
thank you as always
Couldn't get the sign-up to work. It requires that I drag a puzzle, but when I try, nothing happens.
The puzzle handle you need to drag is below the image
Bro, I am trying to create a video and the generation is stuck at 99%. Several hours later, it's still at 99%. Please give me a solution.
Yes, same here, it takes 6 hours or more to generate a video. Before, it took only 2 minutes.
I think they had to slow it down for free users, they probably are overwhelmed right now
Nice video.
Professional mode is a waste of credits from my tests so far. Camera motion prompting does not work, despite trying many variations.
It's too bad that camera motion prompts don't work, hopefully they will give us some way to control zoom/pan/rotate
Does anyone know how to unsubscribe to this fraud? No one is talking about that.
I have tried a lot with camera motion, it doesn't work at all. Many times it completely changes the landscape. Even if it did work, the quality of the videos (and also the images) is so bad it is completely useless, seen on a big screen. I wish it had a default 2K resolution like Genmo AI. Now even if they add more features, they would cost astronomically more credits. But more importantly, how do we even unsubscribe? This is starting to worry me. Other AIs had clear methods, but this biggest fraud does not. Does the membership automatically expire? If anyone in this world bothers to talk about that, please mention it here.
GOOD MORNING bro
Good morning 😎
Hi bro, pls reply... When I use Kling AI, the processing is stuck at 99% for many hours. How do I solve it??
I've heard a lot of people having this issue, I think they may be overwhelmed atm
I tried "the husky puppy runs onto the woman's lap" but I am getting strange results. What was your creativity or relevance strength set at?
I used professional mode, 0.5
Which video AI model do you think is best? I've tried a few. Runway does too much of its own thing and changes the faces in the image. Luma is great, but there is hardly any control. Pika is good but not great. Kling seems to be a real contender.
I like Kling the most myself, I think it does a great job with a large variety of visual styles & motions
@@taoprompts i’m using Kling also now. I prefer Luma for image to video and really text to image also. I think Kling really changes the faces a lot and there’s a lot of weird tearing.
Thank you so much for this awesome tutorial! Kling is amazing. I have been pushing it to do some superhuman strength feats and it has been somewhat successful. One thing I keep trying is having a jailed inmate bend the cell bars apart. The AI sets up the scene correctly, even shows the character gripping the bars and struggling with them, but they never bend, even after multiple tries and prompt fixes, and even using a text-to-prompt AI to help :( I thought that perhaps the AI concluded steel can't bend, so I said they were rubber, and still no good. Any thoughts on why the AI won't perform the feat? Thank you in advance for responding.
What the Ai can do is entirely based on the data it's trained with. The scene you just described of someone bending bars probably occurs extremely rarely so it will be hard for the Ai to create that. You could try turning up the relevance slider, but there are some motions the Ai won't be able to do
@@taoprompts Thank you for replying. I understand. Regardless, Kling is the best so far. What it has been able to do is amazing, especially the realistic human body and muscle movements. AI is developing so fast! Please continue to educate us on this! Thanks again!
Can you explain why, when using Kling AI image-to-video and following the new prompt guidelines (also greatly explained by @Cyberjungle), Image "A" using Prompt "A" works fine, generating exactly what I want, but when I reuse Image "A" and the same Prompt "A", the AI generates a totally useless video compared to the first one?
I'm not sure what you mean. Different random seeds will create different videos even with the same prompt and image references. As far as I know, there isn't any way to control the seed atm, so there will be some variance in the video outputs.
@@taoprompts Ahhhh! "Random Seeds" I thought that was only an issue in Text to Image Generation when the AI uses a Random Seed to create, unless you specified a Reference Seed for the same image which is what I have experienced. In Image to Video I hadn't read or learned how Seeding affected the generation as I thought it was all prompt driven. I guess I have some more learning to do lol. Thanks Tao!
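Kling doesn't expose a seed control, but the run-to-run variance described above is standard seeded-RNG behavior in generative models. A toy Python sketch (the `fake_generate` function is hypothetical, just a stand-in for a video generator, not Kling's actual API):

```python
import random

def fake_generate(prompt, seed=None):
    """Toy stand-in for a video generator. The same prompt with the
    same seed reproduces the same output; seed=None lets the service
    pick a fresh random state, so repeat runs differ."""
    rng = random.Random(None if seed is None else f"{prompt}|{seed}")
    # Pretend this list of numbers is "the video" for this prompt.
    return [rng.random() for _ in range(4)]

a = fake_generate("the husky puppy runs onto the woman's lap", seed=42)
b = fake_generate("the husky puppy runs onto the woman's lap", seed=42)
c = fake_generate("the husky puppy runs onto the woman's lap")  # unseeded
print(a == b)  # same seed reproduces the result; c will almost surely differ
```

This is why regenerating with an identical image and prompt can still give a totally different clip: without a fixed seed, each generation starts from a different random state.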
Is there a big difference between the standard mode and professional mode?
I think so. Professional mode seems to do a bit better job with keeping frame consistency and videos have sharper features.
Kling is good but the problem is that the team behind kling are training it on eating videos so it might get worse on the other things while RunwayML Gen 3 and Luma dream machine are being trained on variety of things.
Kling is really good with eating videos, I think that's a small percentage of their dataset though.
They're really good with a wide variety of motions that I haven't seen in Runway or Luma
So if I understand correctly, professional mode is worse than standard?
The details are sharper and quality is better overall.
For some specific situations, professional mode may give less motion
Is it possible to make a motion video for a marketing advertisement for a product?
Sure, you can prompt for any motion you can think of. It won't always follow the prompt correctly but depending on your product it can work pretty well
Amazing start and a game changer. Let's hope it gets better. I have had the issues mentioned in your video. Nonetheless, I get by with some... still more room for improvement.
Ai video has gotten so much better than even a few months ago. It can't do big motions but for smaller movements and closeup shots it does an amazing job
Can you make a video on how to convert video into anime or any other style using an open source tool? Thanks, a big fan 🙏🏻
Thanks for the suggestion, I will look into it 👍
NICE VIDEO!
Thank you 🙏
You should compare Kling to Vidu, which also has impressive movement.
Thanks for suggesting 👍, I will check it out
Use the Chinese phrase for "zoom in", which is "镜头拉近"; it works almost every time.
Thanks for the suggestion 👍
Thanks for the video. Can we use our own image? Advice pls.
yeah you can animate your own photos
I can't figure out if end frame is good, bad, or even necessary? Liked and subbed.
I haven't tested out end frame that much; based on what I've seen, the end frame in Kling doesn't work as well as in Luma. It's a nice feature to have if you want extra control over the motions.
Thanks for the great info and video! One question: were your examples in the 2nd half of this video made using 'standard mode' or 'professional mode'?
standard mode
@@taoprompts ah thanks. I hope they improve the pro mode higher-resolution prompt understanding soon!
I really do appreciate your videos, thanks bro.
I'd like to ask a question and I'll be glad if you can reply or do a video about it.
Can you do a video or explain how to write a prompt for a 360-degree camera movement and a camera-following movement on Kling or Luma AI?
Can you tell me the prompts I can use to get those two effects on Kling and Luma AI?
For following if you start with the shot from behind of a person (the person's back) and then prompt for "follow the person" that typically works well.
I'm not sure about 360 degrees though
@@taoprompts thanks
What prompt would I use to have a stationary camera, no camera movement at all?
That seems hard to do, I will try to find a solution
How do I lock Kling's moving camera? I used so many static camera commands but it doesn't work. Please help me. Thanks.
I will try to find a solution
@@taoprompts thank you
have you tried flux the new ai model ?
I've tried flux schnell. Based on results I've seen it's a really nice model
Kling image to video now takes several hours to generate. It has been this way for last two days now.
I think they had to slow it down for free users, they probably are overwhelmed right now
@@taoprompts No. This happens to those with paid subscriptions as well.
Bro, Kling AI is now taking so long. I waited for 3 days but it still didn't generate, stuck at 99%.
They have been having some speed issues lately, especially for free users
Kling is getting extremely slow all of a sudden. For a 5-second, standard image-to-video generation, it can now sit at 99% for several hours before completing. Last night it took over 5 hours for one 5-second video. I'm using the free version, but people on Reddit who are using the paid version are experiencing the same thing. This is bad.
Same here bro
same here bro. i am looking for a solution
same here yup 8 hours for me
It seems to be working fine for me today
@@taoprompts What browser do you use? It takes so many hours to do image-to-video. I create my images in Leonardo AI.
Have you found a way to prevent Kling from changing the scene entirely? How often it does that is crazy.
That does seem to happen a lot. I haven't seen a great way to prevent it completely. I think the prompt diverges from the original image, or if the original image is very different than the training data, it can cause the scenes to swap.
Does professional mode make video resolution to 1080p?
No, I got 720p. Although the details will be sharper in professional mode
May I ask how the value of relevance is applied when there is no prompt?
I'm not sure what would happen in that case
The longer you extend the video, the more messed up it gets :(
Yeah, Ai video struggles to stay consistent over long clips
Interesting, so standard mode follows prompt instructions more.
It seems to be that way if you prompt for higher motion clips
Please someone help me prevent Kling from changing my face to Chinese (Oriental features). I'm Brazilian and whenever I upload my photos my face becomes Chinese.
try specifically prompting for the race/ethnicity
Have you tried vidu ai?
I haven't tested that out yet, thanks for suggesting
I tried it with image-to-image for anime but I don't think this was really trained on that. I even used the end frame feature and it was bad. Luma is still in the lead on anime in-betweens.
For anime clips it's hard to get a lot of motion, Kling tends to work better for photorealistic or CGI style videos
Do you have any video tutorial with prompts for bringing old photos to life?
I showed some examples for vintage style photos in this animation styles video ( th-cam.com/video/hllfdBh57nQ/w-d-xo.html ), in general similar prompts as I used here should also work.
@@taoprompts thankyou
Why does Kling stop generating the video at 99%? I have refreshed my page and regenerated the image-to-video, but it duplicates and gets stuck at 99%.
This is most likely due to server load. In recent days, probably for many people with a free subscription, you have to wait 10-plus hours. I upload all my pictures in the morning, all 6 of them, and by the evening they are ready.
Another thing that Kling struggles with is 2D images. I tried several different AI-generated 2D images, and the motion and the quality were the worst.
It works best with photorealistic type videos or 3d animations. It does have a hard time with 2d cartoons
I found that the sweet spot is around 75. Too high and the movement is very minimal and slow-motion, or all that happens is a zoom-in. Too low and the animation leans into uncanny valley territory.
Thanks for the tip!
@@taoprompts i also make the prompt short and simple and then I add "realistic and detailed" at the end.
Love your video. Do you have a trick to make a static or fixed shot? I tried many prompts but the camera always moves 😢
@@siwakornie5239 Did you leave creativity at 0?
@@matheuswebmktedesign I’ve never tried setting it to 0, only 0.1. I'll give it a try. Thanks!
@@siwakornie5239 To achieve more consistency with your prompt, always leave it at 0, especially in this case where you need no movement at all.
I haven't found a consistent way to accomplish that yet; hopefully they give us more camera control.
❤🎉
i like how kling sometimes smears the skin off her face for a sec
It does have some weird distortions sometimes when you try to add in a lot of motion
you look so handsome today!
Thank you!
@@taoprompts ahh! you noticed me! :D thank you!! your videos are so helpful and i watch them whenever you upload
@@vihariii4754 For sure! I like making these types of videos a lot, I'm happy to hear you like watching them 🙏🙏
@@taoprompts sending you lots of love and support! Bless you
I have been wasting credits trying to generate a consistent character. If I attach a reference image, he always has the same pose.
Try turning up the relevance slider all the way and prompt for the movement you want
Nice video, but you didn't actually show us what pushing the "creativity" slider all the way to "0" (creative) does. You only showed the "1" (relevance) ones.
It won't follow the prompt well
@@taoprompts I know. That's the part we want to see.
Anyone with a prompt for eyes blinking?
I just prompt for "the man blinks" on a close up shot of a person's face and turn the relevance slider all the way up.
It's stuck at 99% and not moving, can't get videos...
This is most likely due to server load. In recent days, probably for many people with a free subscription, you have to wait 10 hours. I upload all my pictures in the morning, all 6 of them, and by the evening they are ready.
I don't think Kling handles 2D animation very well. It's great at live action and 3D animation, but I've had 2D images that turned out fantastic on one try in Luma and Pixverse, while Kling would mess up or turn them into a 3D model. Pixverse is the most consistent at animating well in 2D; while Luma does look better, the animation details might not be as good.
That's true, Kling doesn't do 2D cartoon-style images very well. I haven't tried Pixverse myself, but it sounds promising.
Professional mode created a mini earthquake in place of the woman lying down?!
Sometimes when the Ai tries to create motion, it has trouble identifying exactly where in the video the motion should be, and spreads the motion out to other parts of the video. That's why you'll see the plants shaking.
There seems to be some sort of face-tracking tech now, which actually looks worse. All the faces suddenly seem more contrasted with everything else, and this is bad. Overall less coherence; more random stuff is happening. Before, it used to stick at 75% for like 30 seconds and then just suddenly complete, but the whole process only took 2 minutes.
It does look like they have changed the model and now it saturates much more. I'm curious to see if they plan to bring back the old model and give us an option to use whichever one we prefer
Kling AI takes too long to generate image-to-video. It's a waste of time waiting 4 days for it to generate a video.
What's going on with Kling AI? Why am I getting worse results? The quality has really dropped; it's not realistic anymore. The composition and lighting felt more natural before, but now the characters look awkward and out of place. The expressions and interactions don't seem believable, and the overall realism is much lower than before.
I think they pushed some new updates which changed how it works, maybe to make things faster
Too slow to generate and it has been on 99% since yesterday
Hope they add camera control for image-to-video. It was the first thing that came to my mind within seconds of visiting the website. It is so crazy they don't have it. Also resolution control for image-to-video, even though there is resolution control in image-to-image. The video generation is fast, but then I wonder why the cost is so high if it isn't taking a huge toll on them; image generation is completed within seconds. Credits are used up instantly, and any new feature would cost far more credits. It's too expensive to start with, and the 50% discount is a 100% scam: it hides the enormous costs later on. They should clearly state the unbelievably huge costs before anyone subscribes to this trash.
Hold on, how do we unsubscribe from this biggest fraud? It only says "purchase again". Does the subscription automatically expire? I can't find anyone talking about this important thing on the internet.
you could email support and see how to cancel your subscription
💯