Grae n
Canada
Joined 21 Sep 2011
AI Art + Augmented Reality, I want to explore all the possibilities.
One UI for both Quest 3 and Mobile.
Let's make some UI for mobile AR and passthrough AR at the same time!
Code
(part of a larger project) github.com/graemeniedermayer/live-3d-gen-basic-game/blob/main/core/ui_components/ui_components.js
UI library used
github.com/pmndrs/uikit
Animation
github.com/tweenjs/tween.js
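To give a feel for the approach, here is a minimal sketch (not the ui_components.js linked above) of animating a three.js panel with tween.js, assuming the classic global-update tween.js API; the panel size, colours, and timings are made up for illustration:

    import * as THREE from 'three';
    import * as TWEEN from '@tweenjs/tween.js';

    // One code path drives both mobile AR and Quest 3 passthrough via WebXR.
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.01, 20);
    const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
    renderer.setSize(innerWidth, innerHeight);
    renderer.xr.enabled = true;
    document.body.appendChild(renderer.domElement);

    // A plain quad standing in for a UI panel (uikit adds real layout and widgets).
    const panel = new THREE.Mesh(
      new THREE.PlaneGeometry(0.4, 0.25),
      new THREE.MeshBasicMaterial({ color: 0x222244, transparent: true, opacity: 0 })
    );
    panel.position.set(0, 1.5, -1); // roughly eye height, one metre in front
    scene.add(panel);

    // tween.js interpolates plain properties over time: fade and slide the panel in.
    new TWEEN.Tween(panel.material).to({ opacity: 0.9 }, 500).start();
    new TWEEN.Tween(panel.position).to({ y: 1.6 }, 500)
      .easing(TWEEN.Easing.Quadratic.Out)
      .start();

    // setAnimationLoop also runs inside immersive XR sessions, unlike requestAnimationFrame.
    renderer.setAnimationLoop((time) => {
      TWEEN.update(time);
      renderer.render(scene, camera);
    });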
Views: 204
Videos
3 Simple Augmented Reality Projects (Quest webxr)
407 views · 5 months ago
Be careful with sunlight and the Quest 3. Today we're going to be doing three different Quest 3 projects. Timeline 0:00 Intro 0:28 Gaussian Splats 3:52 Throwing app 5:59 Porting SimonDev Project. 1 Gaussian Splats sources: github.com/mkkellogg/GaussianSplats3D (library used, slightly modified example) repo-sam.inria.fr/fungraph/3d-gaussian-splatting/ Splat video sources th-cam.com/video/brr8oO37lkQ/w-d-x...
Text to Augmented Reality
340 views · 7 months ago
Code: github.com/graemeniedermayer/3d-depth-gen/ Here we are building a prompt-to-3d app for the Quest 3. Very rudimentary, but exciting for the future! Heavily uses monodepth; big thanks to thygate and semjon00. Method for creating 3d models: th-cam.com/video/_6gD_pEis58/w-d-xo.html Academic Sources @misc{ke2023repurposing, title={Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation}...
The Quest 3 to See Magnetic Fields!
2.5K views · 8 months ago
We are using a Quest 3, websockets, and a phone to see magnetic fields! The idea is to use the magnetometer on the phone to send measurements into the Quest 3 over websockets. Image tracking is necessary to match the reference frames. Chapters 0:00 intro 1:09 reference frame transform 1:50 magnetic examples 3:37 code Code here: github.com/graemeniedermayer/ArExperiments/blob/main/javascript/websock...
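The phone side comes down to very little code. A rough sketch (not the exact file linked above; the server URL and message shape are placeholders, and the Magnetometer class still sits behind a browser flag in Chrome):

    // Read the phone's magnetometer with the Generic Sensor API and stream
    // samples to the headset over a WebSocket.
    const ws = new WebSocket('wss://your-server.local:8080'); // placeholder URL

    ws.addEventListener('open', () => {
      const sensor = new Magnetometer({ frequency: 10 }); // 10 samples per second
      sensor.addEventListener('reading', () => {
        // Field strength in microtesla, in the phone's coordinate frame; the
        // headset still has to transform this into its own reference frame.
        ws.send(JSON.stringify({ x: sensor.x, y: sensor.y, z: sensor.z, t: Date.now() }));
      });
      sensor.addEventListener('error', (e) => console.error(e.error));
      sensor.start();
    });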
Marigold depth is unique
2.5K views · 9 months ago
Marigold is a fine-tuning of Stable Diffusion for monodepth. This algorithm is amazingly flexible for synthetic data. It can be used to create high-quality 3d models. Marigold github.com/prs-eth/Marigold My noise edits to Marigold github.com/graemeniedermayer/Marigold-Inpainting/tree/main Depthmap Marigold github.com/thygate/stable-diffusion-webui-depthmap-script/tree/marigold Academic Cit...
Gaussian Splats + Augmented Reality (webxr)
1.5K views · 11 months ago
Here we are extending JavaScript Gaussian splat renderers to webxr. It works rather well on faster phones, and with a few more fps optimisations it should look very realistic. My fork github.com/graemeniedermayer/ar-example-gsplat.js Gaussian splat JavaScript libraries github.com/dylanebert/gsplat.js (library used) github.com/antimatter15/splat github.com/mkkellogg/GaussianSplats3D (maybe the ...
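For reference, the non-XR starting point with gsplat.js looks roughly like this (based on my reading of the gsplat.js README, so check the repo for the current API; the .splat URL is a placeholder):

    import * as SPLAT from 'gsplat';

    const scene = new SPLAT.Scene();
    const camera = new SPLAT.Camera();
    const renderer = new SPLAT.WebGLRenderer();
    const controls = new SPLAT.OrbitControls(camera, renderer.canvas);

    async function main() {
      // Any .splat export should load the same way.
      await SPLAT.Loader.LoadAsync('./scene.splat', scene, () => {});
      const frame = () => {
        controls.update();
        renderer.render(scene, camera);
        requestAnimationFrame(frame);
      };
      requestAnimationFrame(frame);
    }

    main();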
Dreamgaussian (3d gaussian splatting + dream fusion)
20K views · 1 year ago
Repo being used github.com/dreamgaussian/dreamgaussian doi.org/10.48550/arXiv.2309.16653 Super awesome repo. Big thanks to the authors! This repo/paper combines a large number of ideas including 3d gaussian splatting, 3d generative ai, single image reconstruction, and plenty of other numerical wizardry. Neural/differentiable rendering is taking off much faster than I expected! Very useful thre...
AI Tangent Space Normal Maps
1.5K views · 1 year ago
Here we are creating tangent space normal maps using AI gen and ZoeDepth. The goal is to reduce a mono depthmap to a lower resolution while maintaining the 3d visual quality. Tangent normal maps also have the advantage of working with deformable meshes. Code is here (I'm thinking about finding a better place for it): github.com/graemeniedermayer/stableScripts/blob/main/tangent_normalmaps.py E...
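The linked script is Python, but the core idea is small enough to sketch in JavaScript (illustrative only: depth is a Float32Array of width*height values in [0, 1], and strength is a tuning knob):

    // Turn a depthmap into a tangent-space normal map with central differences.
    function depthToNormalMap(depth, width, height, strength = 2.0) {
      const normals = new Uint8ClampedArray(width * height * 4);
      const at = (x, y) =>
        depth[Math.min(height - 1, Math.max(0, y)) * width +
              Math.min(width - 1, Math.max(0, x))];
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          // Central differences approximate the surface gradient.
          const dx = (at(x + 1, y) - at(x - 1, y)) * strength;
          const dy = (at(x, y + 1) - at(x, y - 1)) * strength;
          // Normal = normalize(-dx, -dy, 1), packed from [-1, 1] into RGB.
          const len = Math.hypot(dx, dy, 1);
          const i = (y * width + x) * 4;
          normals[i]     = ((-dx / len) * 0.5 + 0.5) * 255; // R: tangent x
          normals[i + 1] = ((-dy / len) * 0.5 + 0.5) * 255; // G: tangent y
          normals[i + 2] = ((1 / len) * 0.5 + 0.5) * 255;   // B: mostly "up"
          normals[i + 3] = 255;                             // A: opaque
        }
      }
      return new ImageData(normals, width, height); // drawable to a canvas
    }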
Stylized AI Normal Maps
894 views · 1 year ago
Today we are using stable diffusion to edit normal maps! Timeline Intro 0:00 Manual Stylizing 0:30 Gen Normals 1:05 Materials 2:17 3d Normals 2:57 Shadows 3:22 Original Video th-cam.com/video/s8N00rjil_4/w-d-xo.html (youtube version) www.tiktok.com/@codygindy/video/7264345365950795050?lang=en Impasto Lora civitai.com/models/82582/thick-impasto-painting
Image to 3d, the Han Solo Method
1.1K views · 1 year ago
We're exploring a useful workflow for turning images into 3d models. This uses the boolean modifier and I'm really happy with how close it is to full automation. Timeline 0:00 Intro 0:33 Img tips 2:00 Depth tips 4:35 Blender 6:45 Mixamo Depthmaps github.com/thygate/stable-diffusion-webui-depthmap-script Add detail lora civitai.com/models/58390 Other img-to-3d videos th-cam.com/video/tBLk4roDTCQ...
Drag3d has arrived.
1.6K views · 1 year ago
Drag3d github.com/ashawkey/Drag3D Acknowledgements github.com/XingangPan/DragGAN github.com/nv-tlabs/GET3D Super exciting open source repo! You can drag 3d models around and they will be constrained to the original object type.
Optical Illusions for Depth AI
324 views · 1 year ago
There's a lot of ambiguity that arises from mono depth. Are we talking about physical depth or mental depth? Here we'll explore the edge cases of mono depth such as mirrors, refraction, projections, and rainbows. This was put together using github.com/thygate/stable-diffusion-webui-depthmap-script Projectionist / Visuals / Musician _fiatlvx?hl=en 1000joules?hl=en
Zoe depth is impressive.
13K views · 1 year ago
ZoeDepth is a relatively new monocular depth algorithm. It combines really nicely with other algorithms. Here we are using it with SadTalker to make fun animated meshes! Resources Academic Papers ZoeDepth: arxiv.org/abs/2302.12288 github.com/isl-org/ZoeDepth SadTalker: arxiv.org/abs/2211.12194 github.com/Winfredy/SadTalker TH-cam Mickmumpitz www.youtube.com/@mickmumpitz Prompt muse channel www.yout...
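SadTalker itself is Python, but the mesh half of the combination is easy to sketch with three.js displacement mapping (the file names are placeholders for a source image and its ZoeDepth depthmap):

    import * as THREE from 'three';

    const loader = new THREE.TextureLoader();
    const color = loader.load('portrait.png'); // placeholder source image
    const depth = loader.load('depth.png');    // placeholder ZoeDepth output

    // High segment counts give the displacement map vertices to push around.
    const geometry = new THREE.PlaneGeometry(1, 1, 256, 256);
    const material = new THREE.MeshStandardMaterial({
      map: color,
      displacementMap: depth,  // white displaces furthest; invert if your depth convention is flipped
      displacementScale: 0.15, // tune per scene, depth units are relative
    });
    const mesh = new THREE.Mesh(geometry, material);
    // For an animated face, update the two textures from video frames each tick.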
ControlNet to 3d with Nerfs.
2.3K views · 1 year ago
Here we are outlining an improved way to get AI images to 3d. It's getting so close to high quality now! Also Dreambooth3d is exciting! timestamps 0:00 intro 1:29 Google's Dreambooth3d 2:45 My pipeline 3:38 My results 5:18 Emergent 3d? Sources academic -dreambooth3d arxiv.org/abs/2303.13508 opensource -Mikubill's controlnet extension github.com/Mikubill/sd-webui-controlnet -controlnet repo - gi...
Inpainting in Augmented Reality
599 views · 1 year ago
This is an example of AI inpainting within augmented reality (webxr). It's a messy space, but also fascinating. If you have any suggestions, let me know! Code (very messy): github.com/graemeniedermayer/augmented_reality_SD/blob/main/frontend/js/autoInpainting.js
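The heart of it is a single HTTP call to the Automatic1111 API. A rough sketch (the real autoInpainting.js does more; this assumes the webui was started with --api, and the parameter values are illustrative):

    // image and mask are base64-encoded PNGs, e.g. from
    // canvas.toDataURL('image/png').split(',')[1].
    async function inpaint(imageB64, maskB64, prompt) {
      const res = await fetch('http://127.0.0.1:7860/sdapi/v1/img2img', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          prompt,
          init_images: [imageB64],
          mask: maskB64,
          denoising_strength: 0.75, // how strongly the masked area is repainted
          inpainting_fill: 1,       // 1 = seed the masked region from the original pixels
        }),
      });
      const data = await res.json();
      return data.images[0]; // base64 PNG of the inpainted result
    }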
Setting up an AI Art and Augmented Reality app
643 views · 1 year ago
Remaking 3D Maze with all the buzzwords! (AI art, Augmented Reality, AI Depth, and more)
251 views · 1 year ago
AI Art Scripting (Clothing Masks, Background Removal, and Automatic1111 api)
2.3K views · 1 year ago
Walking into AI Art (using monocular depth sensing)
636 views · 1 year ago
3d to 3d ai art with stable diffusion and NeRFs.
2.6K views · 1 year ago
Combining the Depth API and Stable Diffusion
730 views · 2 years ago
Combining Stable Diffusion and Particle Systems
994 views · 2 years ago
AI Word Art and Fonts with Stable Diffusion
4.9K views · 2 years ago
Re-texturing Myself with Stable Diffusion, Photogrammetry, and a little PIFuHD
1.1K views · 2 years ago
Re-texturing Photogrammetry with Stable Diffusion
5K views · 2 years ago
I use forge and the depth tab is missing, and I have no idea how to get it back...
So I did make a pull request for forge-ui github.com/thygate/stable-diffusion-webui-depthmap-script/pull/467 but you'll need to know git to switch. It should be merged into the main branch in the next few weeks.
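For anyone unsure about the git part, checking out a GitHub pull request locally looks like this (pr-467 is just an arbitrary local branch name):

    git clone https://github.com/thygate/stable-diffusion-webui-depthmap-script
    cd stable-diffusion-webui-depthmap-script
    git fetch origin pull/467/head:pr-467
    git checkout pr-467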
ai horsecrap.
Is it ok if I take the code for my physics project?
Use anything you find useful! I'll try to get something onto GitHub later this week.
Thanks for the feedback <3 Followed you for a long time, and I love your experiments, so it's super awesome to see you using pmndrs/uikit <3
Exciting progress! Love to hear about the project you want to make with this
Does the installation on GitHub still work? I got everything set up, but nothing happens when I hit Enter AR. I launched the proxy pass server and it says "Application startup complete." Automatic1111 was launched with the API as well.
I thought it was in a functional state, but the webxr experiments do have a habit of breaking. I'll check it in the next few days. If you need more help, an email or a GitHub issue is a better way to do back-and-forths.
@@grae_n Thanks bro for your super fast response. I will run another test in a different network environment. Please take note of the issue on GitHub.
@@grae_n Created a new issue on GitHub 😅
Can I save the image and depth map as PNG? I tried something, but the image is always empty. Any help?
how you doing :)
Very Impressive 👍🏻👍🏻 Love the energy from the video! 🤩🤩 Looking forward to more of ur content <3
Could you recommend some threejs tutorials for learning WebXR?
WebXR-specific channels are surprisingly hard to find. I know @WawaSensei and @NikLever have some content. There's a lot more content for Unity XR, like @AliveStudios_ . There are also some inactive channels that are interesting, like @AdaRoseCannon (does VR/AR at Apple now) and @JeromeEtienne (developed arjs). Some more general threejs channels are @simondev and @akella_ . I've found focusing on more advanced threejs projects can help a lot.
Hi, your work is great. How can I do this light estimation in a WebAR project?
I respect your positive Energy you really make my day better and you inspire me to be more productive and build my own projects. You're a real inspiration to me. You are awesome. ❤
love the content!
And that's the problem with all this AI stuff: it's always a multi-step process. You guys gotta fix this, Stable Diffusion's bells and whistles but with ease of use, like Comic AI.
So how should I use this script to generate normal maps?
Good stuff
Mate, impressive work! Is there a way to get in contact?
Thanks for Video. I love it!
0:36
cute enthusiastic voice :3
0:11
Difficult to get even boilerplate code working, right? Tell me all about it... Wait, that's a JavaScript file? We're not by chance going to be using WebXR's planar detection?
I think you could try to create voxels storing the direction and strength of the magnetic field. Then you could do a visualization by moving particles through that field following the direction - effectively a particle simulation using the voxels as flow field. But not sure whether that is better than drawing arrows. ^^'
The main thing I'm struggling with in the voxel method is how to approach empty voxels. If you don't have enough measurements it might be pretty sparse. Maybe Gauss's law for magnetism would be useful, but the answer is currently eluding me.
@@grae_n I assume you could fill the empty voxels by flooding with interpolations of the filled neighbours. Then filter the whole grid in the end. Obviously if the data needs to be accurate, the only option is more samples. ^^
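A minimal sketch of the flow-field idea from this thread (grid size and layout are illustrative, not from the video): store one field vector per voxel, then step particles along whatever voxel they are in, so their trails trace out field lines.

    const N = 16;                                  // voxels per axis
    const field = new Float32Array(N * N * N * 3); // one (x, y, z) vector per voxel

    function sampleField(p) {
      // Nearest-voxel lookup; p components are assumed to lie in [0, N).
      const xi = Math.min(N - 1, Math.max(0, Math.floor(p.x)));
      const yi = Math.min(N - 1, Math.max(0, Math.floor(p.y)));
      const zi = Math.min(N - 1, Math.max(0, Math.floor(p.z)));
      const i = ((zi * N + yi) * N + xi) * 3;
      return { x: field[i], y: field[i + 1], z: field[i + 2] };
    }

    function stepParticle(p, dt) {
      // Euler integration: advect the particle along the local field direction.
      const v = sampleField(p);
      p.x += v.x * dt;
      p.y += v.y * dt;
      p.z += v.z * dt;
    }

Trilinear interpolation between neighbouring voxels would smooth the trails, and the interpolation-flooding suggested above is one way to handle empty voxels.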
this is awesome!
what a great idea. new sub!
this is so cool good work bro
this is fuckin awesome dude, subbed
Good stuff
This is an awesome idea, I'd like to see more high tech augmented reality visualizations. Has a lot of potential for intuitively understanding these things. Definitely with some engineering you could make this a lot more usable.
Another option would be to 3D print a mount with known measurements that you can clip onto the controller. Then you can track the position using the controller + the known offsets. Or maybe something crazier: use hand tracking somehow so you can hold it, but that would probably be wonky. Awesome work!
These are some great ideas. I think having a virtual initialisation box might be an option too. The hand tracking is definitely high enough quality to use for something like this.
Super interesting. I wonder if the Quest 3 and cell phone combination could be used to detect cameras (as in hidden cameras). BTW, be careful with sunlight on the Quest 3. It can damage your lenses.
lmao!!! you're so smart.
@@itwasntmebro2669 thanks bro!
There's soo many possibilities. Thanks for the sunlight warning!
Hey you got a quest!! excited to see all the projects you make with it!
ooooooh. now. i get it (frozen in awe. dont mind me. flame: fast lightweight mesh estimation. im feeling that might be similar?)
voxelization of a grid through a differential equation solver. interesting. is that a differentiable manifold? i keep feeling i need to aim at Lie groups and somehow utilize a Delaunay triangulation mesh? to maybe more efficiently establish a mesh? rhetorical questions. thinking out loud
you truly are amazing and definitely one of a kind!! everything you've put together on your channel blows my mind. ESPECIALLY this video. i'm having to let it sink in for a few. but in the meantime, i got a question for you. monocular depth and pose estimation, without any sensor fusion or arcore. without any ai or deep learning (zoe depth is pretty slick) just javascript and some good old opencv, maybe threejs to project estimated depth maps onto a mesh with shifting zdepth values. do you have any advice or direction you can point me in? i hate to be the random user on YT asking for your source or github link but, i WILL accept code examples you might insist i consider. i was thinking maybe some point tracking of planar-like objects and then trying to calculate just those points' rate of distance change over time, giving me a motion parallax expression i could use to estimate their relational depth. i think i get the concepts now, but the math still eludes me to be honest. (personally i think there are just as easy if not easier geometric expressions within this virtual frustum that, if i happen to nail them correctly, would let me skip a majority of the computations.) help me grae n, you're my only hope (of course, without any, say, alti and pitch, i cannot assume any sort of ground truths of distance, but that's fine for now) (i do make sense with the terms i use, yes? been on my own for 2 years with no one to communicate what i learn with. i might actually make no sense and not know it)
Came for the flower, stayed for the algorithm.
I get the error: Uncaught TypeError: Failed to resolve module specifier "three". Relative references must start with either "/", "./", or "../".
I think I uploaded the wrong index.html file. It was missing the importmap. I've updated it, so you should be able to swap out the index.html file. I'll double-check all the code in the next few days.
@@grae_n thanks, I'm learning with you. I'm waiting for your fix.
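For context, the importmap the fix above refers to is a small block in index.html that tells the browser where the bare specifier "three" lives (the version and CDN path below are placeholders):

    <script type="importmap">
    {
      "imports": {
        "three": "https://unpkg.com/three@0.160.0/build/three.module.js",
        "three/addons/": "https://unpkg.com/three@0.160.0/examples/jsm/"
      }
    }
    </script>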
Hey dude, can you please make another video on this, showing how to make videos like this?
I tried to install the game through VS Code, but I couldn't.
Hey bro! Impressive work! Is there any way I can consult with you on how to use that depth map for my project?
Bro, you laugh so much while making videos. Why is that? I found it so funny. Great video btw.
will not work for me
Hi, I tried it in a Hugging Face Space and generated a splat file. Thing is, I've never seen that format before, and I'm searching for a Blender plugin that would let me import it, with no luck. In this video you mention "Marching Cubes", but I Googled it and can't find anything about that either haha, only that it is an algorithm, but nowhere to download or run it online. I feel dumb
There is a conversion method in the GitHub repo. I believe it converts it to a PLY file, but it reduces the quality.
How did you edit noise for Marigold?
It's a little bit of a mess right now, but I've added an example here github.com/graemeniedermayer/Marigold-Inpainting/tree/main
@@grae_n thanks so much. I'll go through that code. I love your channel and enthusiasm. Watched every video you post.
What I don't understand is why, by now, we don't have the ability to have it guess at least some of the backs of the objects, or to separate the subject from the background, inpaint the background automatically, and take a guess at the back of the subject.
There are some attempts at this; MVDream and Zero123 are both very interesting. But some of their examples seem cherry-picked.
How about 6 angles/images from 6 synced Azure Kinects, including depth (RGBD) at 30 fps?
Can you use SadTalker and ZoeDepth in DaVinci?
You should be able to. I'm not too familiar with combining depth and DaVinci.
great ☺️but less coffee 😮
is there a way to get this done on ComfyUI?
You sound a lot like Chris Griffin.