That's so cool!
I really like how dynamic that all looks, how the bunny deforms in weird ways but still keeps its characteristics, like the ears on the inflated bunny balloon xD
Yeah, the inflated bunny looks quite funny. Reminds me of one of those exercise balls haha. Glad you liked it
I really like the review of the derivatives of the vector functions. Really well done. Wish there were more readable intros to the topic like this one.
Thank you, glad you like that part. Viewer retention drops a bit in that section. Do you mean readable, like the way the information is conveyed?
@@blackedoutk oh wow, ok, that is interesting. I think it's the part with the most merit. Maybe viewers not familiar with the topic don't see how it's motivated. But I really like that part.
Dude this video series is crazy! Finally someone who explains stuff in detail and doesn't make you feel like everything is so obvious and you are just too stupid to understand (for instance, including imgui can be a challenge the first time). Thank you for being so thorough!
Thank you, glad it's understandable :D
Wow dude this is so much better than college. You give an explanation of all your steps, you make the code public and it's fun.
Thank you, glad you're having fun with it. College might be a bit dry at times yeah
No animal was hurt during these recordings 😀
😅😅
But were any of the programmers hurt?..
@@Nikage23 🥲
Dude this is soooo helpful I finished the previous video and it worked really well. I'm gonna have fun trying to implement this one! thanks for the guide!
Nice, glad you're having some fun implementing this stuff :D You are welcome to join my Discord server and share your progress, I'm definitely interested. The link is in the description of some of the videos, for example episode six.
Laughed so hard at "i swear i like animals"
I like the video, but I wish you'd spend more time explaining the code in more detail
Haha 😄
It was a bit fast, I agree. Though most of the time I just turned the formulas into code. Are there any specific parts I should cover in more detail or just in general?
@@blackedoutk can u give me your email, I'm working in a similar field
@@Xphy You can look up my email in the commit records of my github repositories 😁
Hi! Is there a good place to get a good Stanford bunny obj model? I noticed that some files aren't really suitable for simulation due to how they were modeled... Also I noticed that at 18:16 you're using substepping? Is it recommended to do this, rather than implementing the XPBD paper's pseudocode (where it does multiple iterations instead of substepping)?
Hi, the source of the bunny model is a good question, I just spent half an hour trying to find out lol. Turns out somewhere on the stanford.edu website is an obj model of it (not the ply ones from the tar.gz file), which you can find by typing "stanford bunny obj" into Google. Alternatively, I now put a link to it in the video description. The model has some issues like holes in the bottom, but I fixed those manually in Blender. Thanks for mentioning this, it's a good addition for the video description.
As for the substepping, yes, I implemented both substepping and iterations to make the algorithm more general so I can experiment with its parameters. The authors of the XPBD papers recommend using only a single iteration and doing multiple substeps instead. The video you linked in your first comment on my first episode talks about this in the chapter Iterations vs. Sub-steps.
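To make the substeps-vs-iterations distinction concrete, here is a minimal 1D sketch of such a step loop. All names (`simulate_step`, `solve_constraints`, the parameters) are my own assumptions for illustration, not the video's actual code:

```python
def simulate_step(x, v, solve_constraints, dt, num_substeps=10, num_iterations=1):
    """One frame of a PBD/XPBD-style step for a single 1D particle.
    Many small substeps with one solver iteration each is the setup the
    small-steps paper recommends; both knobs are exposed here to experiment."""
    h = dt / num_substeps                 # substep size
    for _ in range(num_substeps):
        x_prev = x
        v = v - 9.81 * h                  # integrate external forces (gravity)
        x = x + v * h                     # predict position
        for _ in range(num_iterations):
            x = solve_constraints(x, h)   # project constraints on the prediction
        v = (x - x_prev) / h              # recover velocity from positions
    return x, v
```

A floor can then be modeled as the trivially stiff constraint `lambda x, h: max(x, 0.0)`: the particle falls under gravity and comes to rest on the floor, whichever split of substeps and iterations you choose.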
Good work! The second video is a lot better than the first one in terms of presentation. Did you use only the surface of the bunny? I guess if you put points inside and tetrahedralize it, it could behave better?
Thanks, appreciate your comment. Yes, I only used surface models here. Your suggestion is what I have planned for the next episode.
Hii, first thank you so much for these videos and explanations !
I'm a bit confused though about the use of Lambda and Delta Lambda. I've read some papers of Matthias Müller et al., and watched some of his tutorials on Ten Minute Physics, and he doesn't use a Delta Lambda most of the time.
So here at 15:03, from my previous understanding we have Lambda = -C / (sum(w * |grad C|^2) + alpha / dt^2).
Do I still keep this to calculate Delta Lambda? Or am I mixing things up, and in this case, what is lambda at the first iteration?
It would help a lot to have an answer, thanks !
Hi, yes this can be a bit confusing. The reason why Matthias sometimes doesn't use delta lambda is that he just uses a single solver iteration. In this first iteration, the lambda in the delta lambda equation can be ignored since it is zero-initialized, and because there is only one iteration there is no need to store lambda or delta lambda. They observed this at some point and published it in "Small Steps in Physics Simulation", with a demo th-cam.com/video/at6S8RkXQhw/w-d-xo.html showing it. After the first iteration, lambda and delta lambda are the same, which I assume is why he turns the delta lambda equation into a lambda equation and drops any delta lambda.
I didn't do it that way because I wanted to keep it more general and try stuff like this out myself.
In the previous episode there was a related question by @SphealIcecream, who asked why the equations at 14:04 (of my first episode) look different than the ones here th-cam.com/video/jrociOAYqxA/w-d-xo.html (in that video Matthias also explains why he prefers more substeps over solver iterations). Because my answer to his question contains a bit more detail, I will append parts of it here, in case he deletes his comment:
"there are a few reasons why they look different. First, the algorithm in the Ten Minute Physics video does only a single solver iteration. This means that any λ in the Δλj equation here can be ignored, since before the solver runs, each λ is initialized to zero. As to why, this is explained in the section "Iterations vs. Sub-steps", and specifically at 7:05 he mentions that λ is not needed anymore. The consequence is that the part "- alpha j tilde λij" of equation 18 can be ignored.
Secondly, because we are only dealing with point masses, the inverse mass matrix is still a (block) diagonal matrix. So the term "∇Cj M^-1 ∇Cj^T" is really just the dot product of ∇Cj with itself, where each summand is weighted by the corresponding inverse mass. The absolute value lines | | are used because each of his ∇Ci are still vectors.
Also, note that the index j in my video does not correspond to the indices of C in the denominator of his video, because he lists these equations for a single constraint, whereas here the equations take the whole system of constraints into account, for which ∇C is actually a matrix (the Jacobian matrix). So the λ in his video equals the Δλj here, and ∇C1 and ∇C2 in his video are entries in the vector ∇Cj, meaning ∇Cj1 and ∇Cj2.
Next, the alpha tilde j is not present in his denominator because he set it to zero, meaning the springs are infinitely stiff.
Equation 18 is a really general way to describe the algorithm. You wouldn't implement it this way if your constraint were really simple, like a linear spring. So if you simplify equation 18 with everything you know about linear spring constraints, set alpha tilde to zero and ignore λ (not Δλj), you end up with the definition of λ in the Ten Minute Physics video. For linear springs the gradient vectors of the constraints of the relevant point masses are of length one, so their dot product or squared length is 1.
Lastly, for Δx it's very much the same reasons. In contrast to Δλj however, Δx is a vector that contains the delta positions of all point masses, so it is even more general than the Δλj equation. If you were to multiply everything out and only look at a single constraint Cj and a single point mass x1, you would end up with the equations in his video.
I hope this helps, it's a bit difficult conveying this through text. If you didn't understand some part of it don't hesitate to ask"
You too are welcome to ask if there is something that is not understandable.
Note: I deleted my original answer here, which confused delta lambda and lambda in some parts, and replaced it with this corrected one.
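For anyone reading along, a small code sketch of the Δλ update for a single distance constraint may make the above concrete. This is my own illustration (function and variable names are assumptions, not the video's code), specialized to unit-length gradients as discussed:

```python
import math

def xpbd_distance_solve(p1, p2, w1, w2, rest_len, alpha_tilde, lam):
    """One XPBD iteration for a distance constraint C = |p1 - p2| - rest_len.
    delta_lambda = (-C - alpha_tilde * lam) / (w1 + w2 + alpha_tilde),
    where w1 + w2 is the inverse-mass-weighted gradient dot product
    (the gradients have unit length for this constraint).
    Returns updated positions and the accumulated lambda."""
    d = [a - b for a, b in zip(p1, p2)]
    length = math.sqrt(sum(c * c for c in d))
    n = [c / length for c in d]                   # gradient w.r.t. p1 (unit vector)
    C = length - rest_len
    dlam = (-C - alpha_tilde * lam) / (w1 + w2 + alpha_tilde)
    p1 = [a + w1 * dlam * c for a, c in zip(p1, n)]   # delta_x = w * dlam * grad C
    p2 = [a - w2 * dlam * c for a, c in zip(p2, n)]
    return p1, p2, lam + dlam
```

Calling this once with λ = 0 shows why the single-iteration variant can drop λ entirely: the accumulated λ after the first iteration is exactly the first Δλ.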
Looking forward to your future videos, I'd love a more thorough explanation of the math behind these processes.
Glad to hear that. Do you mean an even more thorough explanation?
How are collisions detected/resolved in PBD? I'd imagine you use SAT (or something that works for concave objects) for detection, and resolve colliding objects by adding constraints until they are no longer colliding. Is there a more efficient way? Very interesting video btw!
Thanks, happy to hear that. From what I've seen, it's essentially what you have in mind. Though PBD doesn't really say how to detect collisions, rather only how to resolve them. For a single point for example, you would define a stiff constraint that projects the point to the surface along the normal direction.
I am not sure how the resolve step could be done more efficiently. However, there are probably lots of ways to create efficient collision detection, like using bounding volume hierarchies and SAT only at the end for triangle-triangle tests.
Are you thinking of something specific to be more efficient?
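As a tiny illustration of the "project the point to the surface along the normal" idea for the simplest possible case, a point against a plane (a hypothetical sketch, not code from the video; names are my own):

```python
def resolve_plane_collision(p, normal, offset):
    """Treat the plane n·x = offset as a stiff inequality constraint
    C(p) = n·p - offset >= 0 and, if it is violated, project the point
    back onto the surface along the (unit) normal."""
    C = sum(a * b for a, b in zip(normal, p)) - offset
    if C >= 0.0:
        return p                                   # no penetration, constraint inactive
    return [a - C * b for a, b in zip(p, normal)]  # push back along the normal
```

The same pattern generalizes to triangle meshes once the detection step has produced a contact point and normal.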
@@blackedoutk Yeah, I'm sure the standard broad-phase collision detection algorithms apply. I was asking if there's any way to do, say, continuous collision detection efficiently using constraints, but it seems not. Cases where many objects are close enough to potentially collide would likely be very slow, as the constraint matrix would be large.
There might be ways to do what you are asking, but I don't know, sorry. In a non-CCD scenario I would suggest only adding constraints for actual collisions.
Finally made it through this episode😵💫👍
Nice
I'd love to see this with the Utah teapot full of tea.
Now I want to see it too
Perhaps the rotation issue from high pressures can be resolved by storing a swapchain of buffers for the mesh. Consider an implementation of the Game of Life cellular automaton. If we went through "for x in rows, for y in columns" pixel by pixel and updated each cell onto the same image buffer, we would see artifacts. The solution there is to write the transformation of an old buffer onto a new buffer. This does use more memory, but makes it order-independent. Applying this to the physics engine: instead of applying the constraints directly to the vertices, we build a new mesh in another buffer. This will eliminate the artifacts introduced by the behaviour's dependence on vertex order. We can then process this through a swapchain of two or three buffers, depending on your needs. Apologies if I've misunderstood and you are using multiple mesh buffers already.
That's a nice idea, I am not currently doing this. So you mean I shouldn't apply the positional delta to the current mesh directly but rather to a copy of the mesh?
@@blackedoutk I guess my idea is just that writing the calculations directly to the mesh makes the state of the mesh change during the calculation, and thus dependent on the order you go through the mesh. You read the current mesh, which is an old state, and then apply that to the other mesh buffer, which is a new state. When you're done building this new mesh, swap the pointers. This does have some overhead for memory bandwidth, but makes it such that the behaviour doesn't depend on the order in which you compute the vertices.
Sorry that was bad grammar I used there. I will rephrase in caveman. Rabbit shape. Rabbit shape want to stay rabbit shape. But mouse cursor throw rabbit shape. This okay. We have computer think how make shape stay rabbit shape. But when computer think how to change rabbit shape, it changes rabbit shape. Uh oh, the rabbit shape is winding around in a circle, rabbit starting to rotate. We tell computer think of two rabbit shape. When figure out how to keep rabbit shape rabbit shape, think of old rabbit as the rabbit is, then use old rabbit to know how new rabbit should be. Separate rabbit shape buffer mean rabbit stronger, more stable. Rabbit no longer victim of its topology. Computer might take slightly longer to figure out rabbit shape, but this okay. New computer mind have more L3 cache resulting in less cache misses.
@@blackedoutk but yeah I think you've got the idea, applying the changes to the original mesh whilst you're using the original to figure out how to change it is how you're getting artifacts. Applying the results of the deltas to the second buffer would build the mesh in that second buffer. That buffer would just be vertices I think (if you have tris stored by vertex indices). Ideally by swapping two buffers.
@@PhoebeBennett-vw9xb Lol. I see why this might solve the problem. But at the same time, without having thought about this too long, it sounds like a Jacobi iteration rather than a Gauss-Seidel one. And Gauss-Seidel was praised for being able to propagate changes directly after each constraint solve. So maybe it does also have some disadvantages.
Now I wonder what would happen if I still solved the constraints directly but in a forward-backward order. I will try it out sometime, thanks for your suggestions!
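The difference between the two iteration schemes can be sketched in a few lines. This is a hypothetical illustration (constraints modeled as plain functions that return corrected positions), not anyone's actual solver code:

```python
def solve_gauss_seidel(x, constraints):
    """In place: each constraint immediately sees the corrections of the
    previous ones, so the result depends on the constraint order."""
    for solve in constraints:
        x = solve(x)
    return x

def solve_jacobi(x, constraints):
    """Double-buffered: every constraint reads the same old state; the deltas
    are accumulated and applied afterwards (order-independent, like the
    suggested second mesh buffer). Often needs under-relaxation to converge."""
    delta = [0.0] * len(x)
    for solve in constraints:
        x_new = solve(x)          # each solve reads the untouched old state
        delta = [d + (a - b) for d, a, b in zip(delta, x_new, x)]
    return [a + d for a, d in zip(x, delta)]
```

With two constraints that interact, the Gauss-Seidel result already contains the propagated correction of the first constraint, while the Jacobi result does not; that propagation is exactly what the double-buffered variant trades away for order independence.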
Do you have a link to a Github repo where we could play with this?
I haven't open sourced the code, sorry :( But there is some public code linked in the videos on the Ten Minute Physics channel. It should even run in the browser very easily.
I'm curious about the rotation problem when P is too large, could it be resolved without jittering by instead resolving the constraints in an order given by a space filling curve mapping (Hilbert or Morton z order)?
Interesting, I didn't know such things existed. What makes you think they could solve the issue? Also, how would you create these orders for a triangle mesh?
@@blackedoutk well, they are typically used for their property of maintaining locality across scales and transformations. So basically, points that are close together will be uniformly iterated in all directions, relative to the previous point... At least this is my intuition. You basically treat the coordinates of each vertex point as an index, normalize them, and then pass them to the specific space filling curve algo to get the mappings, and then when you iterate you take these mappings in order. Also ask chatgpt, I did something similar to this for another project
@@_XoR_ Ah, I see. The "iterated in all directions" part could indeed help. Though it's just a surface mesh, contrary to the 3D curve. When I used a constant random constraint order, the model still rotated, just around another axis. So I assume it really has to solve in all directions uniformly.
Another thing I didn't mention in the video: the guy who suggested randomization also suggested something like graph coloring, where all constraints that do not share any points are grouped and solved together. Now that I think about it, that sounds quite promising as well.
I wrote your suggestions down to try it out sometime, thanks
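For reference, the graph-coloring idea mentioned above can be sketched like this (hypothetical names, not code from the video): greedily give each distance constraint the smallest color not used by any constraint that shares a particle with it. All constraints of one color are then independent, so within a color the solve order doesn't matter (and they could even run in parallel).

```python
def color_constraints(constraints):
    # Greedy coloring: each constraint gets the smallest color not
    # already used by an earlier constraint sharing a particle.
    colors = []
    for i, j in constraints:
        used = {c for (oi, oj), c in zip(constraints, colors)
                if {i, j} & {oi, oj}}
        color = 0
        while color in used:
            color += 1
        colors.append(color)
    return colors
```

For a 4-cycle of constraints (0-1, 1-2, 2-3, 3-0) this yields two alternating colors, splitting the cycle into two independent batches.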
@@_XoR_ I think a space-filling curve makes sense if you assume the mesh could be represented by a regular grid when "unwrapped."
Unfortunately, my guess would be that since most meshes cannot be represented this way (without some mapping to a totally different mesh with more suitable topology, or something equally messy), it probably wouldn't equalize.
@@delphicdescant I mean, you can also think of it spatially in 3D, since I was thinking of a 3D space-filling curve initially. The basic idea is to divide the volume of the mesh (or, more specifically, the smallest cuboid that encases the mesh) into smaller cubes; the density of cubes would correspond to the space-filling curve resolution. Even if a vertex doesn't perfectly align with a generated curve coordinate, I imagine you could take the nearest neighbour (closest point) for the mapping. You would just need a way to choose the curve resolution relative to the number of vertices to get "a good enough tolerance."
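The quantize-and-sort idea described above can be sketched roughly like this, using a standard 3D Morton (Z-order) bit-interleave; all names are made up and this is just an illustration, not anyone's actual implementation:

```python
def part1by2(n):
    # Spread the 10 low bits of n so two zero bits sit between
    # each original bit (standard 3D Morton interleave masks).
    n &= 0x3FF
    n = (n ^ (n << 16)) & 0xFF0000FF
    n = (n ^ (n << 8)) & 0x0300F00F
    n = (n ^ (n << 4)) & 0x030C30C3
    n = (n ^ (n << 2)) & 0x09249249
    return n

def morton3(x, y, z):
    # Interleave the bits of three grid coordinates into one code.
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

def morton_order(vertices, grid=1024):
    # Normalize each vertex into a grid^3 lattice and sort vertex
    # indices by Morton code, giving a spatially coherent order.
    mins = [min(v[k] for v in vertices) for k in range(3)]
    maxs = [max(v[k] for v in vertices) for k in range(3)]
    def cell(v, k):
        span = maxs[k] - mins[k]
        if span == 0:
            return 0
        return min(grid - 1, int((v[k] - mins[k]) / span * grid))
    return sorted(range(len(vertices)),
                  key=lambda i: morton3(*(cell(vertices[i], k)
                                          for k in range(3))))
```

Constraints could then be ordered by, say, the smallest Morton code of their endpoints. Whether that actually fixes the rotation bias is an open question, as the thread says, it only reorders the sweep, it doesn't make it symmetric.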
I want to learn to do this. Where can you learn this? And how much time do you need? What resources did you use? :-)
Have you programmed before? That is probably the most time-consuming thing to learn. There are a lot of free resources (for example YouTube series), so it's very accessible, but it requires a lot of practice to get a feel for what to do when. I would recommend sticking to the basics. One can get lost very easily in all of the C++ features or crazy build systems that exist, imo.
As for resources, I mostly watch YouTube videos because I don't like reading that much. Some things I remember are the Handmade Hero video series, where Casey Muratori (basically an expert) writes a game from scratch. It's just really long; I watched it back when I had enormous amounts of free time (and still didn't watch it all, just the first 200 videos or something). For OpenGL I remember the video series by ThinMatrix, Jamie King, and maybe TheCherno.
For the math it's a bit different. I spent a lot less time on that. I remember two years ago I almost felt hopeless because I didn't understand anything of higher-level calculus or physics. It doesn't require as much time as programming, I would say, since high school gives a pretty good introduction to most of these topics. But it still feels a lot harder, because it's unclear what exactly to learn. So most of this stuff I learned in university, where you are provided with (seemingly pointless) exercises.
I am not sure what to recommend here and I am still at the beginning of learning. Maybe reading papers and stopping at new math concepts to learn could work.
I hope that doesn't sound discouraging. The most important thing is that you have fun and stay dedicated through all of it.
@@blackedoutk Hey, yeah, thx for this elaborate answer. I already know how to program; I've even programmed a raytracer and am currently doing a rigid body course. For me the physics and calculus aspects are the challenge. I'm considering doing fluid dynamics, but the soft body concepts also seem very interesting. Your renderer seems very nice, this must have taken a lot of time to write.
@@deltapi8859 Oh, that's good. What rigid body course are you doing? Deformable bodies can be easier or harder, depending on the physical accuracy you want to achieve. The mass spring system I use here is on the easier side, I would say, but it can still give nice results. More physically accurate deformable bodies seem to be even harder than fluid dynamics, from what I have seen.
My renderer took a bit of time, but it's actually just a Phong lighting model minus the specular part, plus shadows.
are you working with a surface?
What do you mean exactly? The mesh is a closed surface, with no inner points.
Alpha = weirdAlpha = inverseStiffness / deltaTime / deltaTime
just so you know
Yes that may possibly be a bit confusing, thanks for mentioning it.
In the first episode there were:
the stiffness D
the inverse stiffness α = 1/D
the "weird α", or more formally α tilde (α with ~ on top) = α/Δt/Δt = α/Δt²
In this episode the function parameter Alpha refers to α tilde. Additionally, any of these might have a subscript j, indicating they belong to a specific constraint Cj
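In code, that chain of definitions is just the following (hypothetical names, not the video's actual variables):

```python
def alpha_tilde(stiffness, dt):
    # D -> alpha -> alpha tilde, as listed above.
    alpha = 1.0 / stiffness   # inverse stiffness: alpha = 1/D
    return alpha / (dt * dt)  # "weird alpha": alpha tilde = alpha / dt^2
```

So for a stiffness D = 100 and a timestep of 0.1 s, the Alpha parameter passed to the solver would be (1/100)/0.01 = 1.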
@@blackedoutk actually this message was more for general public, maybe it would help somebody.
Yes, I found the D in the first episode; it was easier to make sense of, compared to weirdA/Alpha.
I think I've not seen "inverse stiffness α = 1/D" 0_0
@@desine_ Well, if D is the stiffness, then 1/D must be the inverse stiffness, right?