This is an awesome concept and good for quick projects
Please continue sharing projection mapping tutorials!
This is exactly what I was looking for! Thank you so much!!
Thank you for this super video! Wanna try to do something like this.
This is an amazing tutorial man, thank you so much.
wooow! Really great tutorial
super underrated channel!!
Thank you, I appreciate your words! It's the little things like this that push my motivation to work on more videos 🙂
That is so cool!
This is awesome! Many thanks for making this video. Really appreciated and provides me with tons of creative input. Kudos!
Wow! Subscribed. 👍
clean and simple - thanks for sharing!
so relaxed and so cool. thanks.
Nicely done. Clean.
Great tutorial! Thanks!!!
please more videos!!! thank you very much!
Thank you! Next tutorial is already in progress 🙂
Amazing and accurate, thank you!
Thank you!
👍👍👍
Hey, I'm a long-term videomapper and it's cool to see how you implement AI into mapping with a very hands-on approach. In my opinion the 'creative fusion' of PromeAI is the key, because it respects the given proportions of the image, which you need for using it in mapping. So far I haven't figured out a way to get Stable Diffusion (or similar) to work in a similar way. Have you?
Thank you! I tried some workflows with Stable Diffusion and ControlNets, but couldn't get results as high quality as with PromeAI yet.
Right now I'm doing a lot of research though for a video-to-video workflow in SD and am getting closer to what I want. Might do a tutorial in the future if I figure it out properly! 👍
Thank you so much for sharing!
Thanks!
awesome \m/
Hey! Quick question: do I need two monitors for this? I somehow can't output the window and work in the editor at the same time :(
@@lenas6192 No need for two monitors. Maybe you clicked 'open as perform window' instead of 'open as second window'? Perform mode will only run the output to save resources, but if you just open it as a second window you can keep working in the editor.
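In case it helps, here is a minimal sketch (TouchDesigner Python) of opening the output as a plain window from a script, assuming you already dropped a Window COMP named 'window1' and pointed its Operator parameter at your output TOP; the names are placeholders for your own setup:

    # Open the output as a separate window; unlike perform mode,
    # the network editor stays fully usable alongside it.
    w = op('window1')         # hypothetical Window COMP
    w.par.winopen.pulse()     # pulse the Open parameter
    # w.par.winclose.pulse()  # close it again when done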
That's great! How can we connect this to a depth camera so the projection changes with human motion?
Thank you! You could for example use data from a depth camera to drive the switch, so the projected image changes depending on how close people are. Or composite the projection with a Noise TOP whose transform parameters are driven by the depth data, something like that. You can use external data on almost every operator by referencing the incoming value in a parameter of the network, so there are almost no limits :-)
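A minimal sketch of such parameter expressions (TouchDesigner Python), assuming a hypothetical CHOP 'depth_avg' that holds an averaged depth value normalized to 0-1 in a channel 'dist'; all names depend on your own depth camera pipeline:

    # On the Index parameter of a Switch TOP: pick one of e.g. 4 inputs
    # depending on how close someone stands to the camera.
    int(op('depth_avg')['dist'] * 4)

    # On the Translate X parameter of a Noise TOP: shift the noise
    # pattern as people move closer or further away.
    op('depth_avg')['dist'] * 2 - 1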
What projector are you using?
It's a NEC ME372W with an NP47LP lamp. I got it used on eBay for ~$300. It's not Full HD, but for hobby stuff or smaller parties it's really good, as it has a brightness of 3700 lumens.
I bet you could do this with less effort in ComfyUI. My approach would be, for example: load the Pikachu image into ComfyUI, where ControlNet creates the mask (which you can also have saved automatically), and based on that mask ComfyUI generates the images in the same pass. You can even have it turn them into a video right away, so in TouchDesigner you only have to load the video instead of the individual images.
That's an approach I definitely want to look into more, so that external tools like PromeAI might not be necessary at all anymore.
Would be nice to be able to create everything completely in TD.
External tools aren't a bad idea in some cases. For example, you can connect ComfyUI with Krita or Photoshop: when you paint a picture, ComfyUI generates an image from your painting in real time. Or you connect a webcam to ComfyUI, where the lighter and darker areas of the webcam image then influence the generated images, e.g. the poses of people and such. @@reflekkt_net
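For reference, a minimal sketch of the handoff described in this thread, driving ComfyUI from Python and then loading the result in TouchDesigner, assuming a local ComfyUI instance on its default port and a workflow exported via 'Save (API Format)' as workflow_api.json (file name and setup are placeholders):

    import json
    import urllib.request

    # Load the workflow graph exported from ComfyUI in API format.
    with open('workflow_api.json') as f:
        workflow = json.load(f)

    # Queue it on ComfyUI's /prompt endpoint; the rendered images or
    # video land in ComfyUI's output folder, which TouchDesigner can
    # then pick up with a Movie File In TOP.
    req = urllib.request.Request(
        'http://127.0.0.1:8188/prompt',
        data=json.dumps({'prompt': workflow}).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
    )
    urllib.request.urlopen(req)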