I was just looking for a way to do this easily. You rock man. Thanks for sharing these workflows, I know how much time, effort and trial and error devising these methods takes.
This is something I was doing with SD 1.5 and the Latent Labs LoRA, but it was really low res (no 3D model, prompts only). The 360 panorama looked great in VR (I made anime environments only, so it was easy to cheat the seam line), but projecting it onto a 3D model was what I was missing. This could turn out to be an efficient way to easily make 3D environments for VR games, especially static shooter games like The House of the Dead.
Thank you very much for this wonderful tutorial.
Excellent and inspirational. I am a beginner Blender user with plenty of 360 VR and some immersive VR world building experience. I wonder if I can walk through this and create a base model to iterate on. Just decided to challenge myself, your video was the inspiration. Thank you.
You could create a depth pass from the generated image in Comfy and use it as a displacement or bump map back in Blender.
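To make that concrete, here is a minimal Blender Python sketch of wiring a depth pass into displacement. It is only a sketch of the idea, not anything from the video: the object name "Environment" and the file name "depth_pass.png" are placeholders, and the displacement property is handled for both older and newer Blender builds.

```python
# Minimal sketch: plug a depth pass (exported from ComfyUI) into displacement.
# "Environment" and "depth_pass.png" are placeholder names.
import bpy

obj = bpy.data.objects["Environment"]
mat = obj.active_material
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

# Load the depth pass and treat it as data, not color
depth_tex = nodes.new("ShaderNodeTexImage")
depth_tex.image = bpy.data.images.load("//depth_pass.png")
depth_tex.image.colorspace_settings.name = "Non-Color"

# Drive a Displacement node with the depth values
disp = nodes.new("ShaderNodeDisplacement")
disp.inputs["Scale"].default_value = 0.1  # tune to taste
links.new(depth_tex.outputs["Color"], disp.inputs["Height"])
links.new(disp.outputs["Displacement"],
          nodes["Material Output"].inputs["Displacement"])

# Switch the material from bump-only to true displacement in Cycles
# (the property moved off mat.cycles in newer Blender versions)
if hasattr(mat, "cycles") and hasattr(mat.cycles, "displacement_method"):
    mat.cycles.displacement_method = 'BOTH'
else:
    mat.displacement_method = 'BOTH'
```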
Amazing work and value you are providing for free 🙏🏻
Nice one! The only thing missing would be something that detects, from the mesh combined with the texture, where the projection is stretching in certain regions, and then generates the missing texture for those areas so it doesn't fall apart when you change perspective. Then use some displacement maps to get more detail. Binga bonga, nice scene!
This looks insane, can't wait to start testing all these workflows out
This is pure insanity
Wow!!!! This is exactly what I been looking for!! Thank you so much.
Do you think it is also possible to mix the outlines on the Flux workflow?
This tutorial was incredible! Thank you, congratulations, and I wish you lots of success with your projects!
Each new video of yours raises great expectations that, of course, are never disappointed. I share your interest in linking 3D modeling with AI. You are always at the forefront in this regard. Thank you very much for the workflows and your excellent tutorials.
It's videos like these that keep me going. There is a lot to learn on how to do all these things. Your workflow was great. Something I can follow. Blender is getting a bit easier, but that isn't saying a whole lot. Thank you for putting the work into this tutorial. You just got a subscribe from me!
Great work again, Mick, you are a genius, my friend.
German brains at work 🙂
Thank you
Young Lurch
I "Rang"
And you definitely answered
Nice tutorial! Thanks again!
Wow, you're a genius!
Top notch sir! Thank you.
What an amazing workflow.
Great work man, thanks so much! I'll get you on Patreon!
Fantastic tutorial. Love the pacing as well
Great tutorial, thank you for sharing amazing video
Once again a super duper tutorial. thx a lot!
Thanks for all your wonderous workflows.
THIS IS AWESOME!!! Save so much time!!!
you're a genius!
Oh man. This is insane. Bravo. I'm going to try this. I find Blender painful but I need to push through to see this. I'm on a Mac, so wondering how I might see this inside my Quest 3?
That's really cool!
ok this is crazy af.
Great tutorial!! Thanks!!
damn, another subscription 😅 really promising
For your upcoming project, you might recreate the Tiger Scene from "Gladiator" set in the Colosseum, featuring an animated tiger restrained by a chain and a crowd simulation
really amazing work! thank you for sharing. subbed! :)
thats pretty awesome
I was really stunned by that waterfall tho! I really like how the idea turned out, but can you please give us all a bit of info on how to implement animated water?
16:16 I used the workflow shown here! It works in a very similar way to the SDXL workflow, for example, but it uses AnimateDiff to create a looped animation! Fog, fire, water, and clouds work extremely well!
I'm also hoping for a video explaining this animation part 😍
Nice work, thanks for sharing! I suggest Meshy 4 for 3D gen; it seems like the best of the options atm (it exports characters with animations too, as glb or fbx).
So great!!!
Great video, thanks for sharing. I've been trying to wrap my head around using this workflow to create true 32-bit HDRI files (EXR). So far I haven't seen any workflows for this. In theory you could use i2i + ControlNet to generate the panorama at different exposures and merge them? I'm curious if you've explored this.
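For what it's worth, here's a rough Python sketch of the merge step using OpenCV's Debevec merge. The file names and exposure values are illustrative, and since AI-generated "exposures" aren't radiometrically consistent, the result would only approximate a true HDRI:

```python
# Rough sketch: merge three AI-generated brightness variants into a 32-bit EXR.
# File names and exposure times are illustrative placeholders.
import cv2
import numpy as np

files = ["pano_dark.png", "pano_mid.png", "pano_bright.png"]
images = [cv2.imread(f) for f in files]

# Fake "exposure times" matching the EV offsets; only their ratios matter here
times = np.array([0.25, 1.0, 4.0], dtype=np.float32)

# Debevec merging produces a 32-bit float radiance map
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times)

# Writing .exr may require setting OPENCV_IO_ENABLE_OPENEXR=1 in the environment
cv2.imwrite("panorama_hdr.exr", hdr)
```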
This is crazy
Oh My!! That looks really difficult for me, but it seems really easy for you :) 🤕🤤
Fantastic workflow. Thanks for sharing this. I wonder if it would be possible to separate some of the visuals by using that depth map? It would then be possible to better simulate the parallax effect due to distance.
Yeah, AI is the future of VR. It's going to blow past everything else once we get fast enough GPUs and people bring SD-style image generation into a controllable environment so you can walk around inside it. Then add AI-controlled characters and voice synthesis and you have the Holodeck. It's going to be INSANE.
Superb tutorial, kudos. ❤️🔥
Leonardo lets you enable tiling, btw. But anyway, AI equirect projection is usually not exactly equirectangular, but it's better than nothing!
Yeah true, but it's more for textures and things like that, so unfortunately it doesn't really work with those images.
amazing!
Nice tutorial as usual ;)
I really like the fact that your hair getting shorter and shorter. Great video as always
Woow, this is really fresh technology for me.
👋 Looking forward to watching this video
My suspension of disbelief was destroyed when you made the knight two stories tall. Like... come on man... he's towering over all of those archways and openings.... 😅 -- Anyway, this is a super cool process. Thanks for putting these tutorials and workflows together.
He clearly just has a macrophilia fetish and wanted his knight to give the Dwarven dwellers a big steppie. Come on, it's not that hard to believe.
Gotta drop runway gen3 vid to vid into your flow!
Flux certainly does a great job, but is only suitable for users who don't care about the background at all
What background are you referring to?
@@FranzMetrixs I mean this unfortunate cooperation with Elon Musk. That makes Flux unusable for many users.
Unfortunately, unfiltered images generated with Flux appear on his platform X with a very dubious message!
@@FranzMetrixs Same thing I'm wondering.
Nah u 😂trolling
Leonardo textures
He pointed out they weren't from scratch.
Amazing!
waiting for the 3D environment non stretch texture trick
Working on that!
Hi @Mickmumpitz, are you considering a video on a "Sora"-like or Runway ML tutorial and workflow? Would love to try that in ComfyUI.
You are a passionate artist. And it is contagious 😊
Thank you for the tutorial; I'm looking forward to seeing how you solve the problem of moving around later on.
Wow, wow, wow, this is so cool :)
Damn, Mick. You're brilliant. So many cool things in one short vid. Some of the little Blender shader tips alone are worth the time of this video. And then you pile all the Comfy and Leonardo and other stuff in a tight, crisp presentation . . . excellent.
Wow, amazing, I would love to create the most amazing scenes, like dystopian scenes, but I don't see how. I see your tutorials are pretty advanced; could you teach me through your Patreon tier to do something beyond the common Flux images?
Is there any course dedicated to generative AI in depth? I wish I could actually understand what each setting means.
Amazing!!!!
Please also give some guidelines for Mac users. A lot of things don't work and require debugging for Mac users with M1/M2 chips.
Very nice! Do you think you could generate trim sheets with ComfyUI to texture the 3D environment and assets?
This is so good! Is it possible to import the meshes (buildings + sphere) and materials into Unreal? I imagine you have to do some sort of UV projection first? Thank you for sharing your work!
I thought the same. I think if you export (.fbx) the entire object from Blender with the texture and material applied it should work. I need to try it.
Have a look at Ian Hubert's Compify plugin to transform your environment texture from emission to principled BSDF shader
very cool!
Such awesome work bro ❤
Can you do this offline with something less involved than comfy ui? Easy diffusion or something else? Thanks.
Very cool - great work!
How can I export this to Unity? It seems that the equirectangular projection works only in Blender.
Brilliant, brilliant!
Hi, if I have less than 24 GB of VRAM (my GPU has 10 GB), is it still doable?
Wouldn't this method only texture the facing side of the objects, forcing the viewer to remain in the middle of the scene? For example, if you were to go to the other side of the pillars, wouldn't you see nothing, since nothing was projected there?
You basically see that in the video, and he also said it when looking through the scene in VR.
Did you just throw on an Oculus and zap into the Blender scene? Wow, I didn't know it supported that. I have a bunch of headsets lying around lol
Yeah, you just need to activate the "VR Scene Inspection" add-on, connect the headset (I used SteamVR for this), and click "Start VR Session" in Blender. I was also surprised that it's so easy and will use it more often now!
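For anyone who prefers scripting it, the same steps from the Blender Python console look roughly like this; the add-on module name "viewport_vr_preview" is my assumption for the bundled VR Scene Inspection add-on:

```python
# Rough sketch: enable the VR add-on and start a VR session from Python.
import bpy

# Same as ticking "VR Scene Inspection" in Preferences > Add-ons
bpy.ops.preferences.addon_enable(module="viewport_vr_preview")

# With SteamVR (or another OpenXR runtime) running and the headset connected,
# this is the equivalent of the "Start VR Session" button
bpy.ops.wm.xr_session_toggle()
```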
@@mickmumpitz I've been dying to try that with Unreal Engine, but there's all this conflicting information about how they've changed how it works, or that it's not working lol. Unreal Engine is already so confusing, I never even bothered.
@@mickmumpitz It's super powerful, as you can use the headset as a camera. For POV videos you can crawl under things and get all kinds of interesting angles that would be a pain with normal camera animation.
@@resemblanceai niceee. Are you able to build out the scene while in VR with your controllers, or is it just for viewing??
@@BabylonBaller TBH I am not sure. I have a feeling you probably can. I did the headset-as-camera part in Unity, with baking. I think you can bake the lights in Blender too. Example: th-cam.com/video/MSRrpgVrOoQ/w-d-xo.html
Is it possible to render passes like the depth and line passes in ComfyUI?
Can anyone recommend the best resources to learn about Stable Diffusion, LoRAs, models, ControlNets, etc.? Any TH-cam channel?
The image rendered with the outline shader only displays the object I selected and not all the objects present in the project. Does anyone know the reason for this?
Amazing... I'll stick with my NVIDIA stock... because I tried FLUX and it needs a really high-end GPU; at least my RTX 2080 is very slow on that model.
Can we use these models in Maya?
Are you still using your 2070 Super? I use a 3070 right now and am thinking of upgrading to a 4090, but if you're still using your old GPU I think I'm going to wait for the 5000 series to upgrade.
Would a 3060 Ti run the local AI to do this?
The exact same prompt didn't give me 360° pictures at all, unfortunately.
Any idea about this?
So how can we import this HDRI 3D environment into Unity?
It's really important, please help.
Is there a program where I can just prompt for these outputs? If not, why not make one? Why even program anymore? LLMs should be able to generate the right code for this.
Into CogVideoX?
Your workflow.json is no longer linked in the Google Docs document?
It's attached to the Patreon post!
Thanks! ❤😮
Thank YOU! 😊
wow
Probably by taking advantage of the procedural nodes in Blender and knowing a little bit about texturing and lighting you would have waaay more control and better quality. Or, I don't know, just use Unreal with Megascans, etc. lol.
A little bit overcomplicated for the result you get... a good case study of a workflow, but a weak one.
Now Skybox AI by Blockade Labs makes sense. That's an awesome approach.
So, how did the Oculus Quest part work? Just connecting the Quest? And why do you see a 3D effect? This makes no sense to me.
It looks like you left space for a follow-up link on your Patreon page for this a month or two ago but never posted it. Did you forget or change your mind?
So inspiring! Thank you!
Did you play around with Lightcraft Jetset already? Not only the Cinema version but also the free iPhone app. Would be great to learn about a Blender Jetset workflow. 👍
I do have a VR headset, but my laptop doesn't work with it… kinda sucks…
Bruh this needs a full video not 5 seconds 😭 16:28
Still hoping one day we get all of this just integrated into an AI program so all I have to do is make a 3D scene, type some prompts, and adjust some values, and bam, a whole finished artwork exactly the way I wanted.
let's hope we'll never get there :P
He must have a good GPU. @_@
Actually, why do you always use Blender for depth maps? I saw you doing that in the other video about set extensions or something... You would get much more detailed depth maps with Depth Anything V2, with zero effort. So... WHY?
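For reference, running Depth Anything V2 really is only a few lines with the Hugging Face depth-estimation pipeline. The model id below is an assumption (a small V2 checkpoint on the Hub), so swap in whichever checkpoint you actually use:

```python
# Rough sketch: estimate a depth map from a rendered image or panorama.
from transformers import pipeline
from PIL import Image

depth = pipeline("depth-estimation",
                 model="depth-anything/Depth-Anything-V2-Small-hf")

image = Image.open("render.png")   # placeholder: your render or panorama
result = depth(image)

# result["depth"] is a PIL image you can save and feed into a ControlNet
result["depth"].save("depth_map.png")
```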
Flux + Runway is baked in 3 minutes; it's not 3D, but this result isn't either.