I can't even imagine how versatile a tool this could be in planning and designing emergency flooding measures, low-tech irrigation planning for poorer regions, etc. Whatever pops into your head while you're thinking, you can almost instantaneously try out without needing knowledge of some complex program. Bravo, dude/guy/lady/people/whoever. Bravo.
Back in the late 1960s, we had a teacher who was "Overhead Projector Crazy".
She had filters and all kinds of stuff. She showed off one day, and somehow she had the illusion of movement being projected. I don't know, maybe she had a motor causing light to be refracted, or something.
This, then, reminds me of that long-ago demonstration.
Love your work with low cost Kinects!!!
This is using a single Kinect looking straight down. The Kinect delivers a surprisingly good depth map when looking at a surface with rather gentle slopes, like the sand. But there's some heavy processing involved in getting the surface to be as still as it is, and getting all the "holes" filled in. The 1-second delay between actions and responses is the baseline of the averaging filter I'm using.
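The reply above mentions an averaging filter with a ~1-second baseline that also fills in depth "holes." The real filter is more sophisticated, but a minimal sketch of the idea, in Python with hypothetical names and parameters, could look like this: keep a ring buffer of the last second of depth frames and average each pixel over only its valid samples, so dropouts are both smoothed and filled.

```python
import numpy as np

FPS = 30        # Kinect frame rate
WINDOW = FPS    # ~1 second of frames; this is the visible delay

class DepthFilter:
    """Per-pixel sliding average over ~1 s of depth frames.

    Invalid Kinect samples (reported as 0) are excluded from the
    average, which also fills small "holes" as long as a pixel is
    valid in at least one frame of the window.
    """
    def __init__(self, height, width):
        self.frames = np.zeros((WINDOW, height, width), dtype=np.float32)
        self.idx = 0

    def update(self, depth_frame):
        # Overwrite the oldest frame in the ring buffer.
        self.frames[self.idx] = depth_frame
        self.idx = (self.idx + 1) % WINDOW
        valid = self.frames > 0                       # mask out dropouts
        counts = valid.sum(axis=0)
        sums = np.where(valid, self.frames, 0.0).sum(axis=0)
        # Average over valid samples only; 0 where a pixel was never seen.
        return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
```

This is only the averaging part; the actual system additionally rejects fast-changing pixels (hands, tools) before they enter the average.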
Saint-Venant is a reduced, as in depth-integrated, version of Navier-Stokes. The only simplification is that vertical water motion is assumed to be negligible. I also saw the full 3D solver you mention, but in all honesty, this one was simpler to implement. I only had a few days to do it and couldn't be picky.
This is based on the Saint-Venant shallow water equations, which are basically depth-averaged Navier-Stokes equations for free boundary surfaces. You can change viscosity, but to really simulate the mantle, where vertical convection flow is a big component of movement, you'd use the full Navier-Stokes equations, which fortunately also can be simulated in real time.
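For reference, the depth-averaged system the reply refers to is usually written (in its inviscid textbook form, with viscosity and bed-friction source terms omitted) as:

```latex
\begin{align}
\frac{\partial h}{\partial t}
  + \frac{\partial (hu)}{\partial x}
  + \frac{\partial (hv)}{\partial y} &= 0 \\
\frac{\partial (hu)}{\partial t}
  + \frac{\partial}{\partial x}\!\left(hu^2 + \tfrac{1}{2}gh^2\right)
  + \frac{\partial (huv)}{\partial y} &= -gh\,\frac{\partial b}{\partial x} \\
\frac{\partial (hv)}{\partial t}
  + \frac{\partial (huv)}{\partial x}
  + \frac{\partial}{\partial y}\!\left(hv^2 + \tfrac{1}{2}gh^2\right)
  &= -gh\,\frac{\partial b}{\partial y}
\end{align}
```

Here $h$ is the water depth, $(u, v)$ the depth-averaged horizontal velocity, $g$ gravity, and $b$ the bed elevation, which in the sandbox is the scanned sand surface.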
The topographic coloring and the contour lines are a piece of cake. The water, not so much. That took some heavy lifting with GLSL, and it's bringing a GeForce 580 to its knees.
The calculations run at 30 frames per second, but the 1 second delay is necessary to filter the depth images coming from the Kinect into a form that's usable for contour line rendering and water simulation, and to be able to detect and ignore hands, tools, and other body parts.
It doesn't actually run slow, per se. Rendering is at 30 frames per second, look at the water moving. However, there's a one-second delay in the topography update to filter out tools and hands.
Imagine how artistic you could be with this.
Please have another look. This is a realistic water simulation, but at a scale of 1:100, which is why the water looks 100 times more viscous than in an actual sandbox.
The simulation is based on the standard equations of shallow water flow used everywhere in hydrology; more details are on the web page.
There are more details in the other video's description, and on the project page. It's running on a Linux PC with a Core i7 CPU and a Geforce 580. There are two sources of lag: one is an always-on intentional 1-second delay in surface updates, to filter out hands and tools. The other is temporary slowdown in the water simulation computation when parts of the water mass are moving too fast. I chose the resolution of the water grid a bit too high for this rig.
Oh, sorry. The camera is a Kinect, and the projector is a BenQ MX511 or a similar model, but any projector would do. You could run the flow simulation entirely on a normal screen (that's how I developed it), but that's not much fun, and it's harder to create terrain.
Great question. The Kinect only sees the sand from above, and the water simulation is based on a heightmap, meaning for each position (x, y), there can only be one elevation value z. This means the Kinect couldn't see a tunnel (I assume that's what you mean by hole), and the water simulation couldn't represent it. For that you'd have to use a different method of scanning -- maybe several Kinects looking from several directions -- and a fully 3D water simulation.
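The heightmap limitation described above is easy to see in code: a grid stores exactly one elevation per cell, so a tunnel, which needs both a ceiling and a floor elevation at the same (x, y), simply cannot be represented. A tiny illustration (array shape and values are arbitrary):

```python
import numpy as np

# A heightmap stores exactly one elevation per grid cell, so it can
# represent any open terrain, but not a tunnel or overhang, which
# would need two surface elevations at the same (x, y) position.
heightmap = np.zeros((4, 4), dtype=np.float32)

heightmap[1, 2] = 5.0   # try to store a tunnel ceiling here...
heightmap[1, 2] = 2.0   # ...and the tunnel floor overwrites it

# Only one z value survives; the "tunnel" has collapsed to a surface.
assert heightmap[1, 2] == 2.0
```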
You mean the shader that runs the water flow simulation, or the shader that renders the virtual water surface? Well, I made them both myself, but the former is based on the paper I cite in the other video's description, and the latter is completely ad-hoc.
Kudos man, this is AWESOME! It's amazing how far technology has come.
Well considering that it is being projected onto sand, it looks bloody good.
It's not slow per se, the system renders at 30 frames per second, but there's a 1 second delay to filter out tools, hands, and other body parts. That delay has nothing to do with processing power, but is required by the Kinect's inherent noise.
However, the system is at its limit simulating the water flow. Getting a more powerful graphics card would take some load off that.
The sand is indeed white. Sandtastik white play sand.
Based on elevation. There's a valid range for topography, and a valid range for "rain clouds."
I hadn't considered that, but yes, if you build a mountain whose top reaches into the cloud level, then it would rain down on it constantly.
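The elevation-band logic described in this reply can be sketched in a few lines. The thresholds below are hypothetical; the real values depend on the physical setup and the Kinect's calibration.

```python
# Hypothetical elevation bands, in cm above the sandbox floor.
TERRAIN_MAX = 30.0   # at or below this: treated as sand surface
CLOUD_MIN = 40.0     # at or above this: treated as a "rain cloud"

def classify(elevation):
    """Map one elevation sample to a role in the simulation."""
    if elevation >= CLOUD_MIN:
        return "cloud"      # hands held high: rain falls below
    elif elevation <= TERRAIN_MAX:
        return "terrain"    # part of the sand surface
    else:
        return "ignored"    # in-between: hands/tools near the sand
```

A mountain built tall enough that its peak crosses `CLOUD_MIN` would be classified as a cloud and rain on itself constantly, exactly as the reply speculates.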
There is noise in the virtual sand surface, caused by the Kinect. That noise agitates the water surface. If I temporarily turn off surface updates, the water quiets down as you'd expect. I'm not showing that in this video, though.
No matlab. The driver code is C++, and the heavy lifting for the water flow simulation happens on the graphics card, programmed as a set of OpenGL shading language shaders.
The wave equations are computationally tough, indeed. They bring a Geforce 580 to its knees, which according to some measurements delivers up to 850 GFlops.
And here I sit and watch and keep thinking "stop it, stop pouring water, it's gonna flow over the top!". Impressive.
It doesn't really look much different from wherever you look at it. There is a tiny bit of distortion in the surface waves if you don't look straight down (that's the optimal viewpoint), but most people don't notice it unless it's pointed out to them.
I asked on the other video if erosion could be shown. It looks as though the different shades of green and brown may be showing depth and height of the land. It would be really cool to have a voice over describing some of this information. It's cool. What ideas do you have for applications?
There's more information on the method and the paper it's based on on my web site. The way the water is rendered is decidedly non-photorealistic (I personally like it this way), but the viscosity should match real water if you accept that the sandbox is at a 1:100 scale.
There's a bit towards the end in the other video, where I have a small lake and am holding my hand over it. During that time the Kinect can't update the surface, and the water comes to rest. The moment I take my hand away, it starts bubbling again.
Visually, I like the busy water. If you want a physically accurate simulation, you'll have to disable surface updates once the sand surface is done.
You could make realy nice strategy games with that :D
When you were filling in the water, how did it look so high up? Wouldn't the water just look as if it were projected flat on the bottom, with a little on the walls?
... what I mean by that is that the reservoir in the video is about 60cm across in the real sandbox, so it's 60 meters across as far as the water simulation is concerned.
Not particularly. It's what the Kinect was designed to do, and I've had code to do that since it came out.
I tried that, but the problem is that if you move your hands or tools into the projector's light path, they change color due to the projection, so the detection mechanism fails. Even matte black surfaces reflect too much light to be reliably identified as black.
This is pure genius.. Props to you sir
It will soon come to a science museum near you (well, if you live in the Bay Area, around Lake Tahoe, or around Lake Champlain); otherwise, you'd have to do what we did and build this yourself. The software driving it will be available in a short while.
That's a really nifty project, but how come it runs so slow? The sampling rate of the Kinect is very high, but the surface updates seem sluggish.
I could have used CUDA, but I implemented the simulation as a sequence of GLSL shaders instead. Each shader corresponds to what would be a CUDA kernel.
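The multi-pass structure mentioned here, where each shader pass reads the whole previous grid and writes a fresh one (render-to-texture "ping-pong"), can be sketched on the CPU. The kernel below is a simple neighbor-averaging stand-in, not the actual Saint-Venant update from the cited paper; function names are hypothetical.

```python
import numpy as np

def smooth_pass(src):
    """Stand-in for one shader pass: 4-neighbor average of interior
    cells. Like a GLSL pass, it reads only from the previous buffer
    and writes a fresh one, never updating in place."""
    dst = src.copy()
    dst[1:-1, 1:-1] = 0.25 * (src[:-2, 1:-1] + src[2:, 1:-1] +
                              src[1:-1, :-2] + src[1:-1, 2:])
    return dst

def run_passes(grid, steps):
    # Ping-pong: each pass consumes one buffer and produces another,
    # exactly like rendering into an off-screen texture and then
    # binding it as input for the next pass.
    for _ in range(steps):
        grid = smooth_pass(grid)
    return grid
```

Reading from one buffer and writing to another is what makes each pass safe to run in parallel across all cells, on the GPU or here with NumPy's vectorized slicing.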
Looking at this, I'm wondering if you could create a topographical map using the sandbox and save its state in a digitized form. Using the digitized topographical information, maybe then you could utilize that information in a program that simulates the effects of water on a geographical area. This could be used to predict the movements and effects of water on geographical areas over large amounts of time, or could be used to determine the effectiveness of man-made structures such as dams.
Wait... oh. It's life scale; that makes sense now. I thought it was supposed to represent a larger scale, but if you think of it literally as putting water in a sandbox (that doesn't turn to mud), this works perfectly. Thanks for clearing that up.
@Patt
Erosion can't be shown in this setup because there is no way for the software to manipulate the sand (unless -- nanobots!). But if the topography is viewed virtually, meaning on a computer screen, it could be modified according to water flow.
This is really... really mind blowing...
Looking forward to the release. This is like a lightsaber!
Hi, amazing project. Instead of using a delay to filter out hands and tools, why not set it to ignore a certain colour, e.g. orange, then wear orange gloves and use orange trowels?
What made you decide against Navier-Stokes, or some reduced version of it? I imagine this setup might be well suited for something like Chentanez & Müller's tall cell grid solver.
That's kind of disorienting, watching you mold the sand and the projection catching up a half second later. But all in all, this is one of the most amazing things I've ever seen, and I can see it implemented in real life for landscape architecture or something of that nature.
could this be used to model fluids of different viscosities, like... the mantle?
The main problem I see with these simulations is that you can't simulate the effect that water has on the landscape without separating the digital environment from the IRL environment :(
It's badass nonetheless. Good show.
what would happen if you poked a hole in the dam? Would the physics engine be able to comprehend the hole and let water through it?
You are an insanely cool individual.
Does the water only look 3D from the perspective of the camera, or does it look right from any side you are standing around the table?
It's gorgeous!
Are you using the new Kinect or the Xbox one?
Not right now; I still need to package it for release.
Really cool! Could you make a video where you move the camera around? It'd be nice to see what it looks like from a changing perspective.
I'd love to play a game similar to From Dust using this! Projected villagers!
I have a question... when you use the shovel, I noticed stuff moving. Is that real sand or a matted surface?
You should upgrade the machine's and camera's processors because it's kind of slow, but I see huge progress here. Oh, is that white sand you're using?
So that's a real sand box, but there's a dynamic topomap projected on top of it?
How did you get the Kinect to sense topographical data that dense? I thought its grid was rather wide. Are you using two Kinects and pulsing the grid patterns like you did in previous videos?
I love it when you put your hands down to create water! It's like "let there be WATER!!!" lol, like playing GOD.
The Waterbending Guy! Teach me Master!!!!
How do you get the water to look so realistic? The pool looks like an actual pool of water.
What the hell dude, this is great work!
Sir, That is really cool
Will this be released to the public? As in, could I use your program to create this same sim at my house?
The projector is a BenQ, forgot the exact model number. It's about $800 retail.
Was it hard to get data in from Kinect about the height of the sand in different places?
And on the third second, Okreylos said, let there be water!! and there was.
How does the water appear to be above the sand if it's just a projection??
Hi there!
I'm currently building my own AR Sandbox with a colleague of mine, and we cannot for the life of us get the rainmaker and water simulator to work.
I was curious, could you post your startup command?
The topographical map shows up perfectly when we enter this command: "./bin/SARndbox -uhm -fpv", but sadly the rainmaker/water doesn't seem to be working...
All the Best,
Aedan
OMG, that's so nice! I see some real uses in the future, like game mapping and in school.
Very cool. I wanna play! I wish it modeled the erosive effect of the floodwater.
Really nice! I think you should have one tool for only shoveling soil, though. You kind of make a mess with the water since that is accidentally shoveled too.
Are you using matlab? I'm no whiz, but that would be easy to code for. The water, well, I'm thinking the wave equation would be really, really computationally tough. Solving the wave equation, particularly when it comes to seismic tomography, uses some of the most powerful computers in the world! Really, really impressive!
I need to package it first, which will take some time.
Ah yes, of course. How about a simple geometric shape painted on the back of a glove, like a triangle or star? Like the way AR cards work with the 3DS, or is that getting too complex to program?
well, to me, it actually looks like a real liquid. I think it has to do with how you managed to make it actually look 3D in the pool.
I would expect it to wrap around the hole and I would see the depth, but I can't see any depth in the pool at all.
Seriously though, this is AMAZING! Amazing. I need to get better at coding.
It may be calculated on the GPU, with the CPU's help. I bet your computer could handle it. What are your specs?
Where can I get the instructions and software for this? The link isn't working.
this is really cool!
Oh, and add more cameras to the sides so the projector could see the holes in the buildings or something.
This is going viral! I hope you have AdSense set up.
Not sure what you're talking about - this is a Geforce 580.
Now build a living augmented reality city next to it and I'm sold!
Well, not really. The rig it's running on has a Core i7 CPU, and a Geforce 580. Pretty much standard gamer equipment.
This is dam impressive.
Not a gamer at all, just a hardware engineer giving you my vision of the future.
How much does the projector (I think that's what it is) cost? I would have a lot of fun with it. XD
Is this simulated by a PC/Mac or the Xbox?
Could you please tell me what software you use?
I'm glad I subscribed to this.
Yes.
Could I suggest you try to get this to process some kind of 'wave'? So you can have realistic water reactions, like tsunamis, and not gelatinous blobbing.
What about giving the water a mirror effect?
I would like to try this with a Playsurface (see Kickstarter). Is the code available?
Is the software publicly available somewhere? The download section on the Project home page does not work :(
@okreylos hell yeah!
I would like to see this done using iron sand and magnetic fields. I reckon you could have a virtualesque simulation created via manipulation of magnetics.
Get the new 690 when it comes out in SLI, it would be much easier with that much raw power.
This is an Xbox one. I haven't tried the PC Kinects yet.
I'd say it's a PC with a Kinect attached to it, running Unity, with a projector over a sandbox. If you push the sand up into the shape of a mountain, the Kinect will register that height difference... holding your hands out above the box makes it rain, in a sense... very cool! Now let's do better.
Not something so simple. The underlying framework is original work, and it's called Vrui. Sure, you can explain it relatively simply, but consider all the factors involved: the software can't just figure out where the Kinect and projector are relative to each other by magic; plus, they're not in the same location, and there's a lot of math in figuring out how to draw the image in virtual space so that it looks correct when projected. And when it's projected onto a non-flat surface, you have to factor that in as well. Suddenly it's not so simple to do better.
LeviBackgammon Agreed, but I think the comment was meant to have the engineers not rest on their laurels but keep creating! I thought it was the Vrui toolkit, possibly running on Ubuntu. It's an amazing piece of work, and its applications are far-reaching in many different fields. I can actually see a lot of kids wanting to go into STEM fields because of stuff like this. And lastly, I have to give a shout-out to all the math nerds! Kids, this is what you create when you study math in school!
You used sand to get the layered feeling to it; why not use a gas inside of a glass container to make it look like a hologram? I don't know much about this, but I think if any sci-fi-like hologram is possible, it would be through experimenting with how light projects on different gases, like the effect cigarette smoke has on security laser beams.
Very cool!