You explain things in such a good way
20 fps for 1000 particles in Processing is actually not bad! Kudos to you. From here on, the only optimization left would be to change languages, I guess. This in C++ or Rust should be over 100 fps for sure (considering I coded a worse implementation in Rust and it handles 4000 at 60 fps).
In C++ I got around 110 frames per second.
You're still limited. The real solution is CPU-cache-friendly structures: subdivide the grid into chunks, assign each chunk to a worker thread, and consolidate all the worker-thread results at the end of the frame. Congrats, you can now do 1M+ body simulations at 60 fps.
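A rough sketch of that structure, assuming a plain brute-force force kernel; the names here (accel_for_chunk, step) are made up for illustration. It chunks by particle index rather than by spatial grid cell for brevity, and real million-body runs would want contiguous arrays and native threads (C++/Rust) rather than Python processes:

```python
# Minimal sketch of "chunk the work across workers": each worker computes the
# accelerations for its chunk of bodies against all bodies, and the results are
# consolidated once per frame. Illustrative Python only.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

G = 1.0  # gravitational constant in simulation units (assumed)

def accel_for_chunk(args):
    """Accelerations for one chunk of bodies [start, stop) against all bodies."""
    start, stop, pos, mass = args
    out = np.zeros((stop - start, 2))
    for k, i in enumerate(range(start, stop)):
        d = pos - pos[i]                            # vectors to every other body
        r2 = np.einsum("ij,ij->i", d, d) + 1e-9     # softened squared distances
        r2[i] = np.inf                              # no self-force
        out[k] = G * np.sum((mass / r2**1.5)[:, None] * d, axis=0)
    return start, out

def step(pos, vel, mass, dt, workers=4):
    n = len(pos)
    edges = np.linspace(0, n, workers + 1, dtype=int)
    jobs = [(edges[w], edges[w + 1], pos, mass) for w in range(workers)]
    acc = np.zeros_like(pos)
    with ProcessPoolExecutor(max_workers=workers) as ex:
        for start, a in ex.map(accel_for_chunk, jobs):   # consolidate per frame
            acc[start:start + len(a)] = a
    vel += acc * dt
    pos += vel * dt
    return pos, vel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1.0, 1.0, (1000, 2))
    vel = np.zeros_like(pos)
    mass = np.ones(1000)
    pos, vel = step(pos, vel, mass, dt=0.01)
```

The key shape is the same either way you chunk: independent per-chunk work during the frame, then one consolidation pass at the end.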
@@JacobNax do you have any resources on this?
In Unity, using low-poly objects, the Unity job system, and the Burst compiler: about 4,000 objects before noticeable jitter. I've got a fairly powerful GPU, but the CPU isn't anything special. It could be improved, since the force for each pair is calculated twice: it's difficult to update the force on the other object in the job system, because there's no way of knowing whether it's being updated from another thread without seriously impacting performance.
PS: the code is in C#, and it's all brute force. I know little else.
@@idiot528 I used OpenGL to render a simulation of 10,000 particles; it ran at 90 fps.
Thanks. This is fascinating, as I'm building exactly this kind of simulation. I'm using Metal and the GPU (no collisions yet). Given the GPU, I can do the brute-force O(N^2) on 10,000 bodies at 60 fps, and I'm amazed it's this good, but I'd like to do 100K or 1M bodies if possible. I tried rendering to a texture and then generating a mip-map, with the idea that you sample mass in regular-sized grid cells (just using the centre of the cell) at a resolution chosen by how far away the cell is. I haven't got this working well yet. Thanks for sharing Barnes-Hut. Doing stuff like this purely on the GPU is very difficult, though, so I'll probably keep playing around, possibly doing all the close gravity calculations exactly and randomly sub-sampling those further away. I'll share some videos some time. Good video. Keep it up!
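For what it's worth, here's a tiny CPU-side sketch of that mip-map idea (not the commenter's Metal code; the grid size, units, and the assumption that positions lie in [0, size) are all illustrative): deposit mass into a base grid, then build coarser levels by summing 2x2 blocks.

```python
# Sketch of a "mass mip-map": a pyramid of grids, each level halving the
# resolution by summing 2x2 blocks. Far-away mass can then be read from a
# coarse level instead of summing individual bodies. base_res is assumed to
# be a power of two.
import numpy as np

def build_mass_pyramid(pos, mass, size=1.0, base_res=256):
    """Level 0 is a base_res x base_res grid of summed mass; each level is 2x coarser."""
    cells = np.clip((pos / size * base_res).astype(int), 0, base_res - 1)
    grid = np.zeros((base_res, base_res))
    np.add.at(grid, (cells[:, 0], cells[:, 1]), mass)   # deposit mass
    pyramid = [grid]
    while pyramid[-1].shape[0] > 1:
        g = pyramid[-1]
        coarser = g.reshape(g.shape[0] // 2, 2, g.shape[1] // 2, 2).sum(axis=(1, 3))
        pyramid.append(coarser)
    return pyramid
```

Sampling a coarser level for distant cells is conceptually the same trick as Barnes-Hut, just with a fixed hierarchy instead of an adaptive tree.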
You can probably do something better using something like the FFT.
Basically, each particle has a gravitational potential. You can dissect that potential with a Fourier transform (which you actually only have to do once properly, because the 'frequencies' are all the same for all particles, only the phases are different).
You can get the total gravitational potential by adding up individual potentials. Then you evaluate that total gravitational potential function for each particle's position to figure out the forces on them. (Well, plus a bit of math to go from potential to its derivative, the force. But you can do that symbolically.)
As a refinement, you can stick to low frequency components only, and use a hashmap (or a tree) to look up the precise impact of nearby particles.
See also en.wikipedia.org/wiki/Multipole_expansion
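This is essentially the classic particle-mesh method. Here's a rough numpy sketch of one way to realize the idea (periodic box, nearest-grid-point deposit, arbitrary constants; a careful solver would use a better deposit/interpolation scheme):

```python
# Particle-mesh (PM) gravity sketch with FFT: deposit mass onto a grid, solve
# the Poisson equation in Fourier space, differentiate to get the force field,
# then read the acceleration back at each particle's cell. Illustration only.
import numpy as np

G = 1.0          # gravitational constant in simulation units
BOX = 1.0        # periodic box side length; positions assumed in [0, BOX)
RES = 128        # grid resolution

def pm_accelerations(pos, mass):
    # 1) deposit mass onto the grid (nearest-grid-point assignment)
    cells = (pos / BOX * RES).astype(int) % RES
    rho = np.zeros((RES, RES))
    np.add.at(rho, (cells[:, 0], cells[:, 1]), mass)

    # 2) solve laplacian(phi) = 4*pi*G*rho in Fourier space: phi_k = -4*pi*G*rho_k / k^2
    k = 2.0 * np.pi * np.fft.fftfreq(RES, d=BOX / RES)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                      # avoid divide-by-zero; zero mode set below
    phi_k = -4.0 * np.pi * G * np.fft.fft2(rho) / k2
    phi_k[0, 0] = 0.0                   # the mean of the potential is arbitrary
    phi = np.real(np.fft.ifft2(phi_k))

    # 3) force field = -grad(phi), via finite differences on the grid
    h = BOX / RES
    gx, gy = np.gradient(phi, h)

    # 4) read the acceleration back at each particle's cell (real codes interpolate,
    #    e.g. cloud-in-cell)
    return np.stack([-gx[cells[:, 0], cells[:, 1]],
                     -gy[cells[:, 0], cells[:, 1]]], axis=1)
```

Same refinement as above applies: use the mesh for the smooth long-range part and a local lookup (hash or tree) for nearby particles.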
this idea is coherent as a Lagrange point
It would be interesting to simulate relativistic gravity instead, using something like a vector field to represent the combined curvature from all the bodies' gravitational fields; it could be pretty efficient on a GPU using an image.
True, I'd love to learn about relativistic gravity. Do you have any suggestions for resources?
@@WilliamYFeng If you want the mathematical background of general relativity, then any introductory-level textbook would be a good place to start (assuming you have some mathematical background in tensors and such).
EinsteinPy is a Python package which implements some GR principles quite well, and it is open source too.
@@WilliamYFeng You can calculate the gravitational time dilation factor as sqrt(1f - 2f*G*mass/(distance*c*c)), or sqrt(1 - 2GM/(r c^2)) for the less readable physics version (the 1/c^2 disappears if you work in units where c = 1).
You can then just multiply the acceleration by that number, and you have something like general-relativistic gravity, so that's cool.
(But don't take my word for it; I could be incorrect, this is just the most accurate-seeming thing I could find.)
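For what it's worth, here is that factor written out with the c^2 made explicit; this is just the Schwarzschild time-dilation factor, not a full GR treatment, and how to fold it into a Newtonian sim is the commenter's suggestion, not an established recipe:

```python
# Schwarzschild time-dilation factor sqrt(1 - 2GM/(r c^2)): the rate at which a
# clock at radius r around a point mass ticks relative to a clock far away.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0  # speed of light, m/s

def schwarzschild_time_dilation(mass: float, r: float) -> float:
    return math.sqrt(1.0 - 2.0 * G * mass / (r * C * C))

# Example: clock rate at Earth's surface relative to far away (~1 - 7e-10)
print(schwarzschild_time_dilation(5.972e24, 6.371e6))
```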
I think a 2D spatial hash would be a lot faster than a quadtree for this kind of thing, especially as it's 2D. Bonus: you can effectively multithread most of the hash generation and the query/update, believe it or not. This is a big win once you get past a certain number of particles. Anyway, nice sim.
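A minimal sketch of what such a 2D spatial hash looks like (the cell size and method names are made up for illustration). It's great for short-range queries like collisions; the long-range part of gravity is the piece the quadtree's clustering handles.

```python
# Minimal 2D spatial hash: bucket point indices by integer cell coordinates in
# a dict, so a neighborhood query only inspects the 3x3 block of cells around
# a point.
from collections import defaultdict

class SpatialHash2D:
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, idx, x, y):
        self.cells[self._key(x, y)].append(idx)

    def query_neighborhood(self, x, y):
        cx, cy = self._key(x, y)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                yield from self.cells.get((cx + dx, cy + dy), ())

# usage: rebuild each frame (or update moved particles), then query per particle
grid = SpatialHash2D(cell_size=0.05)
grid.insert(0, 0.10, 0.20)
grid.insert(1, 0.12, 0.21)
print(list(grid.query_neighborhood(0.11, 0.20)))   # -> [0, 1]
```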
Nicely done. More sims?
Hi, just a question. Since n-body problems (for n > 2) are known to be chaotic, isn't the Barnes-Hut algorithm unsuitable for long time scales? The error accumulation may be significant, right?
Getting a result better than that is beyond improving the order of complexity; you'll need to get into parallelizing the code.
That is nice, but I would just make a constant grid, calculate the mass in each cell, and then use those cell masses for distant points. At least it would be much easier to implement :)
This won't work. You need to know where each particle is to make a grid that only divides space as necessary; otherwise you'd have to divide space evenly across the whole grid so that each cell has room for only one particle, which would be horribly wasteful. If you're not convinced, try implementing this and you'll immediately see the issue.
Yes!
How were the timings for tree building and traversal?
Tree building is O(N log N), and so is traversal.
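For anyone curious what those two passes look like, here is a compact sketch (not the video's Processing code): inserting N bodies to roughly log N depth builds the tree and per-node centers of mass, and the force traversal opens a node only when it fails the s/d < theta test. It assumes bodies have distinct positions and is illustrative rather than tuned.

```python
# Compact 2D Barnes-Hut sketch: quadtree insertion with running center-of-mass
# aggregation, plus a force traversal using the standard opening criterion.
import math

class Node:
    def __init__(self, cx, cy, half):          # square region: center + half-size
        self.cx, self.cy, self.half = cx, cy, half
        self.mass = 0.0
        self.comx = 0.0                         # center of mass of everything inside
        self.comy = 0.0
        self.body = None                        # (x, y, m) if a leaf holding one body
        self.kids = None                        # four children once subdivided

    def insert(self, x, y, m):
        if self.mass == 0.0:                    # empty leaf: just store the body
            self.body = (x, y, m)
        else:
            if self.kids is None:               # occupied leaf: split and push down
                self._subdivide()
                bx, by, bm = self.body
                self.body = None
                self._child_for(bx, by).insert(bx, by, bm)
            self._child_for(x, y).insert(x, y, m)
        total = self.mass + m                   # update aggregates on the way down
        self.comx = (self.comx * self.mass + x * m) / total
        self.comy = (self.comy * self.mass + y * m) / total
        self.mass = total

    def _subdivide(self):
        h = self.half / 2.0
        self.kids = [Node(self.cx + dx * h, self.cy + dy * h, h)
                     for dx in (-1, 1) for dy in (-1, 1)]

    def _child_for(self, x, y):
        return self.kids[(2 if x >= self.cx else 0) + (1 if y >= self.cy else 0)]

    def accel_on(self, x, y, theta=0.5, G=1.0, eps=1e-6):
        """Acceleration at (x, y) from everything inside this node."""
        if self.mass == 0.0:
            return 0.0, 0.0
        dx, dy = self.comx - x, self.comy - y
        d = math.sqrt(dx * dx + dy * dy) + eps
        if self.kids is None or (2.0 * self.half) / d < theta:
            a = G * self.mass / (d * d * d)     # far enough: one point mass
            return a * dx, a * dy
        ax = ay = 0.0                           # too close: open the node
        for kid in self.kids:
            kx, ky = kid.accel_on(x, y, theta, G, eps)
            ax += kx
            ay += ky
        return ax, ay
```

Usage (assuming a unit box): create `root = Node(0.5, 0.5, 0.5)`, insert every body, then call `root.accel_on(x, y)` once per body.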
helpful video!
03:09 It's not exactly true, because you can remove symmetric pairs like "1 - 2" and "2 - 1", so you get O(n^2 / 2).
Big O ignores constants like the division by two, but yeah, that's an optimization.
In fact, big O can be pretty dumb sometimes: if your algorithm had a fixed number of particles and kept all of them somewhere very far away until you chose to spawn them, but still ran the quadratic algorithm to calculate the gravity, then it would be O(1), because the running time would be constant (as in, constantly the worst, lol).
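Concretely, the halving just means visiting each unordered pair once and applying Newton's third law, still about n^2/2 evaluations, hence still O(n^2). A small sketch:

```python
# Each unordered pair (i, j) with j > i is evaluated once; the equal-and-opposite
# force is applied to both bodies (Newton's third law). Still O(n^2) overall.
import numpy as np

def accelerations_pairwise(pos, mass, G=1.0, eps=1e-9):
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):               # each pair visited once
            d = pos[j] - pos[i]
            r3 = (d @ d + eps) ** 1.5
            acc[i] += G * mass[j] * d / r3      # pull of j on i
            acc[j] -= G * mass[i] * d / r3      # pull of i on j (opposite direction)
    return acc
```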
Hey William, what program do you use to run the code?
I used Processing: processing.org
This is a very coherent explanation. However, at 10:40 you say that clustering bodies "approximates" the effects of their individual gravitational pulls. Isn't it literally the exact same effect? That is, combining their total mass and calculating the pull toward their center of gravity ought to be identical to calculating all the individual gravitational vectors and summing them. No?
That's certainly a very intuitive thought, but it turns out it doesn't hold in practice. The reason you can't treat a cloud of mass as a single point mass concentrated at its center is the inverse-square law: bodies that are close to you exert a much greater force than bodies of the same mass far away.
For instance, consider the gravitational forces on the Moon from the Sun and the Earth. The combined center of mass of the Sun and Earth sits essentially at the Sun, about 1 AU from the Moon. Yet the Earth's pull on the Moon is comparable in strength to the Sun's, even though the Earth is hundreds of thousands of times less massive, simply because the Moon is so much closer to us; it's that nearby pull that keeps the Moon bound to the Earth. Collapsing the Sun and Earth into one point mass at their shared center of mass would therefore get the force on the Moon badly wrong.
Thanks for asking this question! I imagine it's a thought many people have; it's certainly something I was confused about for a long time.
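A quick back-of-the-envelope check of that example, using rough textbook values for GM and the average distances, shows the two pulls really are the same order of magnitude despite the enormous mass ratio:

```python
# Rough comparison of gravitational accelerations on the Moon (SI units,
# approximate average distances). GM values avoid multiplying G and M separately.
GM_SUN   = 1.327e20      # m^3 / s^2
GM_EARTH = 3.986e14      # m^3 / s^2
R_SUN_MOON   = 1.496e11  # ~1 AU, m
R_EARTH_MOON = 3.844e8   # m

a_sun   = GM_SUN / R_SUN_MOON**2      # ~5.9e-3 m/s^2
a_earth = GM_EARTH / R_EARTH_MOON**2  # ~2.7e-3 m/s^2
print(a_sun, a_earth, a_sun / a_earth)  # the Sun's pull is only ~2x the Earth's,
                                        # despite being ~330,000x more massive
```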
@@WilliamYFeng That makes sense! Thanks for the explanation.
@@WilliamYFeng While you are generally right, it is interesting that you could also say the Moon actually orbits the Sun, on an orbit very close to Earth's, rather than orbiting the Earth (the Sun's pull on the Moon is actually a bit more than twice the Earth's).
Cool. Q: Does this account for gravity traveling at the speed of light?
without even looking at the code...no
In real life it does, which is part of the reason we get gravitational waves, which also travel at the speed of light.
nice - thanks!
I feel really stupid for still not understanding how this works