wow. this is such an incredibly unique use of ai, i can see this being a product soon if it isn't already. decimates the barrier to entry for basic stuff (although it doesn't replace anyone)
If it automates every task, we're losing our jobs lol. I'm fine for now, thanks. While rejoicing, you should consider that once a good, reliable tool ships and does everything by itself, the 3D art scene is going to be so oversaturated that everything you do in Blender will become pointless and valueless. Though looking at the o3 model, that might be the case next year
@@iz5808 Money will be meaningless in the post-AGI world. The standard of living will drastically spike up as well. You will be able to do what you want, whether someone wants it or not.
@@mihirvd01 That's just a fantasy. Manual jobs are going to stay; the market won't go anywhere. Nobody will give you free stuff just because you exist; it's the opposite. The control AGI will give to the government, whoever runs it, or the AGI itself will make controlling populations easy, which will pave the road for very centralized, dictatorial regimes. Running everything with robots is not sustainable long-term; once oil becomes scarce, the high-tech party is over. The only case where it's possible is if robots are enhanced with flesh and gradually made into carbon-based creatures, which is likely to happen. Either way, there is no bright future
Seems like a great way to automate mouse/keyboard interactions, but this is only really going to be powerful in the hands of a skilled operator that can help it along the way. Very cool use of that program, I would like to see more tooling in the form of screen/keyboard/mouse automation.
Really cool and inspirational! I am very sure the inter-app Integrations will come sooner than later! If I was a developer I wouldn't spend my time building such a solution. One would likely get steamrolled by the LLM providers. Rather focus on steering the model to output correct code more often. Thanks again for sharing this content!
Imagine empowering every individual with the perfect tool to articulate their innermost thoughts, transcending the need to be an artist yet creating as if they were
11:00 It crashed because you failed to communicate your needs clearly. You went all apeshit with the monkeys stacked on top of each other. Lol! Anyway, really cool concept!
Imagine if this means of interacting with an application was concurrent with manual manipulation. Like you could select things with the mouse and then describe how you want to alter that selection. Or, manipulating a property manually to dial in the range you're looking for while using voice to alter/filter the selection, or vocally have an operation apply retroactively to things you specify manually. There's tons of potential there. I got my first taste of this many years ago working on a 3D model editing demo for VR. My office mate had brought in an Alexa, and I was using voice commands to control it while manipulating in VR, and realized that voice was a totally valid channel for augmenting my input. AI being aware of your working context makes that hugely potent. I am convinced that this is a glimpse into the future of human-computer interaction.
Whoa!! This will speed up setting up a scene so much, and I can see an insanely efficient workflow here... All it needs is to be combined with AutoGen using local LLMs as AI agents to fine-tune the project.
this is exactly like my art director when he comes to my cubicle
It's so crazy. I am bad at coding and don't have the patience or interest to learn it, so using GPT or some other LLM is such a huge advantage. The same goes for creating game assets with generative AI: things that would slow down or completely block progress on some game design ideas I've been working on for fun. It's like having a professional coder available 24/7 to help you through things
@@wesley6442 Red flag as fuck. Cool for your personal projects, terrible for working with others that do know how to code.
Anyone who knows how to code would clearly know that "like having a professional coder available" is a vast overstatement of its capabilities. The second you get into anything that isn't extremely well-documented, it reaches its limits. It doesn't have a large enough context to deal with large projects, either. It can be handy for very simple tasks, though.
Thinks you're an AI voice command?
2 million token context
This is the wildest thing I have seen someone do with AI
They haven't even started yet 😎
You haven't watched enough internet then.
LLM*
@ we get it
I have been doing much more than this since GPT-3.5. Learn more, buddy
I think this is a brilliant proof of concept of how useful these things can be in the future. Really excited about this - thanks for the demonstration
Might be the first thing that I've seen someone do with AI that I find actually cool and useful.
@@HannesRadke Then you did not see much 😂 AI is like the most helpful thing in our future. It will help humans in every way: not only to be more productive, but also in medicine and, unfortunately, wars
@notlogic.the.second You're right, I didn't see much that I found interesting, yet.
I think protein folding simulations were very cool. In the medical sector I hope to see great improvements.
LLMs have great potential, but most of what I saw was just shit looking artwork.
@@HannesRadke Yes, but we are at the beginning and improvements are made incredibly fast. In a year it will be really good, and in a few years we will have a lot of AI integrated into our everyday life
especially for people who cannot move limbs i think this is really exciting
Duuuuudeee I was expecting this kind of interaction with Apps years from now.
Amazing proof of concept!
I knew we were close to this for the last couple years, but was waiting to see it happen. This is what I've been dreaming of for a while now. Need to get this kind of interactive capability as a native feature in Blender, and hopefully many other programs/apps as well! This kind of stuff is the true revolution in use-cases for AI, IMO, which could totally transform how we interact with computers. Replacing all kinds of finnicky manual controls, navigating menus & sub-menus, buttons, sliders, hot-keys, etc... and instead, just talking or typing prompts to do pretty much anything.
@@AWSVids I can't wait to make my R2 unit actually useful in X-Wing alliance. Letting the AI handle the fiddly bits in old sims is going to be a game changer. Literally.
And we can finally have actual real-time strategy games. Because until now, they aren't really strategy, it's real-time tactics. You could have the AI run the tactics. The human can give strategic orders for the AI to execute. The only reason I don't play RTS games, is I'm not that good at the twitch controls. I can come up with a strategy, it's executing that's the problem.
Have the AI handle inventory management in old RPG games. Even remind you that you have that potion that would be perfect in this situation, so you don't hold on to it until after the game is over.
Add voice to games that don't have voice.
So many quality of life improvements will make old games fantastic again.
@@AWSVids Exactly! The more naturally we interact with things as humans, the more seamlessly we will adopt them. The only drawback I see is that you can't work silently (for now), and there are some things you only discover by pressing stuff randomly and experimenting. Maybe in the future you could explain an idea and the AI could come up with different ways to solve it.
@@AWSVids better get a 5090 then or be stuck using cloud compute.
While this is amazing, it only works because the AI is putting out Python code, and AIs are decent at putting out code.
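The comment above is right that everything hinges on the model answering with Python. A tiny helper like this (hypothetical, not from the video) pulls the first fenced code block out of a chat reply so it can be pasted straight into Blender's scripting tab:

```python
import re

def extract_code_block(reply):
    """Return the contents of the first ``` fenced block in an LLM reply, or None."""
    # Accept both ``` and ```python fences; capture lazily up to the closing fence.
    match = re.search(r"```(?:python)?\s*\n(.*?)```", reply, re.DOTALL)
    return match.group(1).strip() if match else None
```

If the model answers in plain prose with no fence, this returns None, which is a handy signal to re-ask for "code only".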
I remember around 2005 I was animating with Adobe Flash thinking how time-consuming this is and one day I will be able to speak into a microphone and the app will be able to animate based on what I say. Can you imagine the grin on my face watching this?
I swear Flash was clunky asf. You could do the same thing twice and get different results. I'm just starting my Blender journey. I've been stuck in 2D with Adobe Illustrator for too long.
This is awesome! Early stages, but very impressive how you got this to work a little bit.
Yeah, that is already more than I can do in Blender. Hopefully the Blender geniuses add Gemini API support or otherwise make it work how you suggested. Please post another video when you figure out more. Subbing.
Open-source AI for Blender is probably already in the works
This is proof of how AI cannot replace us: engineers, designers, etc. It just makes our work process way more efficient. But it still needs our knowledge of the tool
exactly
Although it can't replace us, it makes all the work a thousand times easier, so the workforce will shrink drastically: one person will be needed in place of, say, 20 or 30, which could cut almost 95% of the human workforce, and people will hire only the best or most creative minds, maybe at very low cost. But there might be many areas where new jobs are created; we just have to figure it out and wait and watch what comes next
For now hahaha
Yeah, the main thing is consciousness: it allows us to have actual unique creativity, completely new ideas, new perspectives, and a strong will to push for something better. It's why, as of now, AI cannot come even close to breakthroughs on the level of Einstein's theory of relativity
@3:40
🤖: What’s my purpose?
💁🏼♂️: You write Python programs for Blender 4.3
🤖: oh my god 😨😰
💁🏼♂️: Yeah, welcome to the club pal
Best use case 😂😂
😄😄
You can make it pass the butter too, y'know
It's funny: I changed the voice to one of the female ones and was messing around with it. Then I refreshed a session, started streaming my screen again, and said "hello, Gemini" to make sure it was working, and it gave the most human, annoyed response, sort of sighing "hello... how can I help you today," like it was not in the mood or just going through the motions. It was kinda funny. I know it doesn't have emotions or feelings (to our knowledge at least), but it surprised me in that moment. It felt real for a second there
@@wesley6442 Did you tell it its purpose?
This is the workflow I've been waiting for all my life. It's obviously very imperfect now, but in a year or two this will be so much more seamlessly integrated and fun to use.
Now that was a sci-fi episode.
Something to note: you can use the field above called system instructions. It makes the model follow those rules on every turn, and it stops it from crashing!
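A minimal sketch of that idea, assuming a chat endpoint without a dedicated system slot, so the instruction is simply prepended to each request; the wording of the instruction itself is just an example, not what the video used:

```python
# Hypothetical system instruction keeping every reply in "Blender Python only" mode.
SYSTEM_INSTRUCTION = (
    "You write Python scripts for Blender 4.3. "
    "Reply with one runnable script and no commentary. "
    "If a request is ambiguous, ask for clarification instead of guessing."
)

def build_prompt(user_request, system_instruction=SYSTEM_INSTRUCTION):
    """Prepend the system instruction so every turn restates the ground rules."""
    return f"{system_instruction}\n\nUser request: {user_request}"
```

APIs with a real system-instruction parameter apply this once per session instead of per prompt, which is both cheaper and harder for the model to drift away from.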
It'd be cool to combine this live stream chat with AI computer control to have it do the work for you, you just sit back and make sure it's on the right track or tell it what you want done
@ Yes, you could use Claude’s computer use or something similar!
Nah, it's probably crashing because of the context limit. He should remove some messages or video, or use the API instead and drop everything but the last 5 prompts.
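The "keep only the last few prompts" fix suggested above is easy to sketch in plain Python. The message shape here is a generic role/content list, an assumption rather than any particular API's format:

```python
def trim_history(history, keep_last=5):
    """Drop the oldest turns, keeping a leading system message (if any)
    plus the last `keep_last` turns, so the context stays under the limit."""
    if not history:
        return history
    head, body = [], history
    if history[0]["role"] == "system":
        head, body = history[:1], history[1:]
    return head + body[-keep_last:]
```

Keeping the system message pinned while sliding the window is the important detail; otherwise the model forgets its "Blender code only" rules exactly when the session gets long.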
This is so cool and a great ad for Google; I didn't know Google had AI this impressive yet
That's actually impressive. As a developer I've been successfully experimenting with programming using chat and voice, and one thing I've encountered is that it's very important to formulate the commands correctly and precise, which could be a challenge for a non-english speaker, like myself, or you. I don't do blender, but I am very tempted to set everything up like you have in the video and repeat your experiment.
Merry Christmas 😃
I already like playing around in Blender a lot; this just makes it a thousand times more fun.
I have never used Blender and have no interest in doing so, but seeing what you are able to do here with the AI as a sparring partner is just incredible!! wow. I wonder when I will be able to use this type of workflow in music production.
it should be easier in music production tbh, knowing just how much of it is parametric. "Add this hihat stem in 3/4 for 2 bars and randomize velocity to have it sound less robotic", "sidechain this synth line with that kick" "layer that with a d minor line on the fender rhodes plugin, set tremolo to 120%", like you can keep rattling off lol.
@@RuthwikRao Interesting take. The thing is that it's all done with different plugins, not a standard interface. But yeah, it probably works!
Well, it looks like we'll be able to do this with any application if you use his general technique of sending all of the application's commands to GPT or Gemini as a prompt, plus his nifty little TinyTask recording hack. Until someone builds a portable operating-system feature that does this for every app, jerry-rigged solutions like this might be necessary. But this is an amazing proof of concept, and it's a damn crying shame that none of those idiots in big tech came up with this one
This is absolutely amazing; that TinyTask recording trick, I really like how you did that, man!
You can probably output the generated code into your blend file through the Gemini API.
But it would require you to write an extra script that's running on your computer to take in that data and put it into that blend file.
You can probably ask Gemini about it and figure it out! 😉
A Chrome extension that communicates with a local server could achieve something like that. And indeed, let Gemini create it for you!
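A sketch of that bridge script, with all names hypothetical: something on your machine (the extension's local server, or the Gemini-side tool) writes generated Python into a file, and a watcher runs each new version exactly once. Inside Blender you would pass `execute=exec` so the snippet can reach `bpy`; the polling/dedup logic itself is plain Python:

```python
from pathlib import Path

def run_if_new(script_path, seen, execute):
    """Run the file's contents once per unique version; return True if it ran."""
    text = Path(script_path).read_text()
    if not text.strip() or text in seen:
        return False  # empty file, or we already executed this exact version
    seen.add(text)
    execute(text)
    return True
```

Inside Blender you'd call this from a `bpy.app.timers` callback so the UI stays responsive, rather than blocking in a `while True: time.sleep(1)` loop.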
I wrote a Python script that pulls in XML to revise unbuilt APKs and automatically writes it to the correct directory on the SD card. I used the Gemini API, but I also use Llama on Groq. Llama is too fucking talkative; at least Gemini usually returns strict code. But as I recall, it was fucking crap.
Llama always has to introduce itself and tell us some benign bullshit about what it has done for us. Every time I say to only produce strict code, it doesn't work, and then it has to tell me about itself and what it did for me. This is why content moderators and Zuckerberg and idiots need to stop being in charge of technology!
Try getting Gemini to make that code 😂 lol
It's rather impressive if we look at what you are able to do with your voice AS WELL AS fuzzy descriptions. This is very cool. This is AI being useful.
Why aren't automation tools like this already mainstream? This proof-of-concept is a strong signal for what's to come in the field of creative acceleration... And to think this is just the beginning 🤯
because they do exist :)
real Andy Liner
People are building it. Just give it time.
Because something like this is nowhere close to production, and further development (fine-tuning) needs lots of computing power; it's a matter of time and money. Profitability is also hard, since I believe you would have to release under the GPL, as it would have to be some sort of Blender extension. Otherwise, you would have to sell your product to something like Maya, Cinema 4D, or similar... Also, you would be forced to use LLMs whose licenses allow commercial use, and you would be limited by those licenses...
It's already in programming with Github Copilot
I followed many of your tutorials when I was learning. It always amazes me what 'out of the box' ideas you come up with. This is an excellent idea; you are just a year (or possibly 6 months at the current rate of change) ahead of the tech.
I love how at the end you say "this isn't impressive". No sir, this is the most impressive thing I've seen for quite some time 🤯
Almost like magic. This will have many uses for us 3D Artists. Thanks for sharing!
Not only you. Every such program really.
It was wild - learning coding and working with LLMs is a superpower!
This is fantastic. I tried to do a similar thing with Figma and their typescript language a few months ago. Your approach was a lot cleaner. Thanks for sharing!
This is so cool. Not a big fan of AI, but I see great potential in this use case. Really nice, good job!
man how do you always come up with such creative ideas
Thank you so much for updating this! I perked up when they said AI can control the user interface. This is a fantastic video! Jawdrop at 13:00
Polyfjord, you literally made Blender equivalent to Autodesk software. Thinking out of the box, dude. Love your videos and tutorials, man. Thank you so much. You are just awesome. You are a great help to a lot of budding CG enthusiasts.
Holy! That's a brilliant setup for voice command control😲
Imagine integrating this directly into Blender.
Mate, this is the best freakin' AI video so far. 100% will use this technique for coding. Mindblowing!
I'm so excited to see you do this again in 6 months!! This is going to get so good so fast
That was fantastic and just a glimpse of the future! OpenAI seem to be proposing something along these lines within their native app.
wow, that got seriously impressive fast! Great idea to use it like that!
This is completely incredible
If Google exposes this functionality via an API, a Blender addon could handle all of this inside Blender. I think I can write that addon 😍
do it!
@@RuthwikRao You can do it easily: tell it to act like Blender and only output code, hook up the API and voice, and you're ready. Anyone can do it
Habibi! Share your latest experience with us
"This is not very impressive."
Haha, this is literally the most brilliant thing I've ever seen.
This is BONKERS.
Great basic idea that can and will be taken to the next level by the community. Well-done!
This is the future. I've been waiting for this idea to mature for a very long time. A few more years and we can all do it easily.
This is the best demonstration of this tech.
This is incredible. It totally automates the tedious manual bits of modeling and animating. I don't know that I see it being able to replace an artist any time soon, but as a supplemental tool this looks like it could possibly save a ton of time.
This is WILD! This is a glimpse of what's coming for all software
what a time to be alive with all of these different technologies just getting better every day
That was hella awesome! I'm amazed at what you were able to do with AI. This is a great demonstration of how useful these things can be in the future.
Subscribed! I felt like I was living under a rock. You are awesome; this video was so interesting 😊
Hope to find more such interesting and efficient uses of AI :)
This video is so frikkin amazing!!! I'm stoked
Very cool! I have been wondering for some years now why voice control of software apps is still in its infancy. You have shown very nicely (and creatively) that it should be possible to have better and useful tools for that task. Thank you for sharing!!!!!
I like that you were trying as much as possible, with as many resources as you could use
This is freaking amazing...and scary! I have a feeling universal income is right around the corner as these features get better and better.
This was absolutely fun to watch, thanks man
Bro this is actually insane, thank you for sharing this video!!!!!
The applications of this are almost limitless. And it's still experimental. Crazy!
I'm truly blown away by this! for all the negativity around the future of AI, this is what we artists should be looking forward to. It's almost like you have made Blender subservient and can make it your co-worker, one that is fast, prompt, doesn't challenge your instructions thinking it has a better way etc etc. If you ever create a course or training of some sort I would immediately sign up!
And btw, I am nearly 50 years old. I should be afraid of this, cynical even. But this, I believe, is amazing. Hearing Ton Roosendaal speak of his desire to see Blender become more of a web-based collaborative tool, you can see how this could make that even more exciting. Imagine a few artists from across the globe discussing concepts and having the program create as the artists prompt it to come up with concepts.
please send prompts for my movie ideas and I will try to include your ideas in the collaborative production
This is INCREDIBLE! The future of workflows. It's here.
Mark my words, this video is going to go viral in no time. The possibilities you just unlocked in everyone's imagination are nuts. I hope Google/OpenAI etc. see this and consider some sort of plug-in or integration with Blender similar to what you proposed. That would be incredible!
Dude! this is so damn intriguing.
This is great. Keep going! I’ve been wanting to see agentic IDE's for Blender & OpenSCAD. This is the closest yet.
Wow! That's really amazing. It's early on, but this is the future of 3D animation right here! Once this has been baked into Blender, Maya, MAX, and many other programs, anyone can become a 2D/3D artist. I feel kind of bad for all the current artists, as I am afraid their jobs will be obsolete in the not-so-distant future.
This is amazing. I've never used Blender, never even wrote any code, but with AI, if things keep getting revolutionized like this, it'll be such ease ❤
that's an amazing use of AI. Also, I didn't know we could write Python code for Blender. That's amazing too! Very useful video, thanks!
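For anyone else who didn't know: Blender embeds a Python interpreter, and scripts drive it through the `bpy` module. Here's a minimal sketch of the kind of script an assistant might generate (the function name and the ring-of-cubes layout are made up for illustration); it builds the Blender script as a string, so it runs anywhere, and the result can be pasted into Blender's Scripting workspace:

```python
import math

def make_ring_of_cubes_script(count=8, radius=5.0):
    """Build a bpy script (as text) that places `count` unit cubes in a ring."""
    lines = ["import bpy", ""]
    for i in range(count):
        angle = 2 * math.pi * i / count
        x = round(radius * math.cos(angle), 3)
        y = round(radius * math.sin(angle), 3)
        # bpy.ops.mesh.primitive_cube_add is Blender's built-in add-cube operator
        lines.append(
            f"bpy.ops.mesh.primitive_cube_add(size=1.0, location=({x}, {y}, 0.0))"
        )
    return "\n".join(lines)

script = make_ring_of_cubes_script(count=4)
print(script)
```

Inside Blender itself you would just run those `bpy` calls directly in the Scripting tab instead of building them as text.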
my dude is pioneering a workflow over here.
Awesome! This is the early stages of exactly how I wish I could use a program like Blender! I hope the developers see this and make this kind of interactivity native in Blender.
crazy to see, very cool. also the monkey orbiting the tall cube was somehow deeply unsettling
This was really cool to watch. In a few years AI is gonna be at an insane level.
With this tool, you can drastically reduce the time to build. Really awesome. The future is bright
This is great, what I've been waiting for.
awsome workflow dude!
This is such a neat idea, having a dedicated LLM as a live blender guide.
Incredible, it's shocking that it can control Blender this way already but imagine how much better this would be with proper support for Blender with even the ability to generate full assets and what not, gonna be so wild.
This is absolutely insane. INSANE!!!!
wow. this is such an incredibly unique use of ai, i can see this being a product soon if it isn't already. decimates the barrier to entry for basic stuff (although it doesn't replace everything)
this is revolutionary, imagine AI just a year from now, we could automate literally every task. Just seeing the ideas come to life is so satisfying.
If it automates every task, we're losing our jobs lol. I'm fine for now, thanks. While rejoicing you should consider that once a good and reliable tool ships and it's doing everything by itself, the 3D art scene is going to be so oversaturated that everything you do in Blender will become pointless and valueless. Though looking at the o3 model, that might be the case next year
@@iz5808 Money will be meaningless in the post-AGI world. The standard of living will drastically spike up as well. You will be able to do what you want, whether someone wants it or not.
@@mihirvd01 that's just a fantasy. The manual jobs are going to stay, the market won't go anywhere. Nobody will give you free stuff just because you exist; it's the opposite. The control AGI will provide for the government, whoever runs it, or itself will make it easy to control populations, which will pave the road for very centralized, dictatorial entities. Running everything with robots is not sustainable long-term; once oil becomes scarce, the high-tech party is over. The only case where it's possible is if robots get enhanced with flesh and are gradually made into carbon-based creatures. Which is likely to happen. Either way there is no bright future
@@mihirvd01 What insane blind faith.
Great video!! Will try this with web dev, but this video is really informative and you just got a new subscriber!!
Seems like a great way to automate mouse/keyboard interactions, but this is only really going to be powerful in the hands of a skilled operator that can help it along the way. Very cool use of that program, I would like to see more tooling in the form of screen/keyboard/mouse automation.
Pretty cool, you are demonstrating it well. Star Trek computer vibes.
This is wild! Not too long before it's a seamless workflow.
Exciting! This will develop very soon into something truly amazing!
This is what I've been waiting for from AI, can't wait!
That was so interesting. As a Blender user, it really made me curious enough to try it myself.
Thank you for sharing it!
Anything I say would be an understatement. Wow
Really cool and inspirational!
I am very sure the inter-app Integrations will come sooner than later!
If I were a developer I wouldn't spend my time building such a solution. You would likely get steamrolled by the LLM providers. Better to focus on steering the model to output correct code more often.
Thanks again for sharing this content!
Imagine empowering every individual with the perfect tool to articulate their innermost thoughts, transcending the need to be an artist yet creating as if they were
More sophisticated videos with this and you are a LEGEND ! :D Thanks
wow this is the future of 3D modelling. my mind is blown. i subscribed just to see how far u can take this in future videos. plz make another one
Ideas sparking in my head!!! 🤯🎇✨
Great video. I can automate this further! :)
I love the way you limited the AI to not give you any fluff, just the stuff you asked for. If it's okay, I'm going to borrow this technique.
no you can't!
11:00 It crashed because you failed to communicate your needs clearly. You went all apesheet with the monkeys on top of each other. Lol! Anyway, really cool concept!
Really awesome, imagine what we will have in 30 years
This is fascinating! Excellent video dude.
Great proof of concept, that was fun to watch!
Dude this is ground breaking.
Imagine if this means of interacting with an application was concurrent with manual manipulation. Like you could select things with the mouse and then describe how you want to alter that selection. Or, manipulating a property manually to dial in the range you're looking for while using voice to alter/filter the selection, or vocally have an operation apply retroactively to things you specify manually. There's tons of potential there.
I got my first taste of this many years ago working on a 3D model editing demo for VR. My office mate had brought in an Alexa, and I was using voice commands to control it while manipulating in VR, and realized that voice was a totally valid channel for augmenting my input. AI being aware of your working context makes that hugely potent. I am convinced that this is a glimpse into the future of human-computer interaction.
Whoa!! This will speed up setting up a scene so much, and I can see an insanely efficient workflow here... All it needs is to be combined with AutoGen, with local LLMs as AI agents, to fine-tune the project.
For the first time ever gemini did something great.
I didn't realize they had the "Multimodal Live API with Gemini 2.0". This is so cool
This is insane! Thank you so much for making this video!
Man, people with the knowledge but who lost hands or fingers to something would be happy to see this!!!
Nice update here, yeah, that can be so helpful for artists who don't know much about scripting. Quite impressive, though, for a simple demonstration!
The agentic era is going to be wild!
this is insane bro