A lot of the time when I watch videos like these I get very easily lost. This time I followed you the entire time. Great job
Same, this was really easy to follow
Nice
@@alongal407 the formula threw me off for a sec after reading this lol, but yeah
@@alongal407 1st like
I really like the illusion that rhombuses can look like an isometric projection of a collection of cubes
Edit: Congratulations on 100k subs!!! You deserve millions though, I can't wait for that
Yeah. It can also look like you are looking at the cubes from the bottom, and tons of other weird configurations.
I feel like it’s cheating a tad bit: when talking about the toy there are “three different colors,” but when switching to the simulation there are three different orientations, and the color is based on the orientation. This is kinda explained in the video, so maybe I’m just the weird one, but it confused me for a bit.
There must be a mapping between the 2D tile state space and a 3D cube state space o.o
It's the math of the universe conspiring against you
aops logo
Mathologer taught me about the arctic circle, but thank you for introducing me to other arctic curves and for discussing the generation strategy.
McDonald's is way better.
One thing I noticed is that “removing a cube” is the same as rotating that “vertex” by 180°! This is only allowable when all three colors are intersecting at the “vertex”!
Interesting. That actually showed me a lot ahaha
Oooohhhhhh. Nice lol
90th like
@@popodood 1st like
DUDE!
The craziest thing happened with this video!
I had an exam today. Last night I decided, "screw it, I'll stop studying now and I'll relax a bit - let's see what's on YouTube." I clicked on your video, and you talked about the Monte Carlo method.
I realized that it was a (niche) thing that the professor explained in class but I hadn't reviewed: I opened my notes and reviewed it.
Today at the exam, first question: "explain the Monte Carlo method, what it is and when it is used"
my jaw dropped
soooo thank you for helping me pass my exam! I wouldn't have answered that question if it wasn't for your video!
That's awesome! Thank you so much for telling this story!
@@physicsforthebirds One could probably calculate the odds of this happening, but I wouldn't bother
pretty low anyway😅
7:04
“a lot of the time, being aesthetic, or interesting, or beautiful, is the only thing you need to make learning about something worth it”
holy crap, this is exactly why i like this channel without understanding most of the stuff in it.
One of the best new science channels on YT. Love it!
I may not have understood everything, but... I love your voice. It's so calm, so understanding, so... I just can't describe it.
It reminds me a lot of a video from the French math youtuber Mickaël Launay called "La puissance organisatrice du hasard - Micmaths" ("The organizing power of randomness").
It's about how surprisingly organized randomness can sometimes look.
There is a specific part discussing the Aztec diamond, which reminds me a lot of how the liquid region of the rhombuses is a circle; in that video the size was significant enough that the circle was blatantly obvious.
It's really interesting if you don't mind French subtitles or auto-translated English ones.
congrats on 100k. Well deserved :)
Yesss yesss yessss more fun random math and physics to add to my repertoire of "Toast, How do you know this stuff????"
I love the conclusion to this video, appreciating something nice is enough reason to do anything
One of my favorite courses I have taken was a grad course in Monte Carlo, so I'm always excited to see it brought up in interesting applications like this one!
Those colors hurt my eyes 😄
"Look at these solutions"
I can't!
I have no clue what I was supposed to learn from this, but the visuals were fun.
"Losange" is what a rhombus is called in French. Also, this is something we should send to CGP Grey.
Also, this reminds me of those IQ tests where you're supposed to determine the number of cubes with a 2D view.
You are one of my favorite YouTube channels right now
So ... This is kinda funny.
YouTube's algorithm just randomly proposed this video out of nowhere. And while I haven't worked with this stuff in a *long* time, my thesis started on plane partitions - the thingies that @vye9431 pointed to - and whether they could be mapped to quantum mechanics / condensed matter problems.
Mainly, we used the plane partition (and rhombus tiling) equivalence to dimers on a hexagonal lattice. The latter problem has an equivalent system in quantum physics / condensed matter - double carbon bonds on a graphene sheet - and we wanted to see if there was a way to exploit this equivalence to study the quantum problem.
In the end, the "quantum" version of these tilings has some interesting properties - like how you can recover something similar to the Arctic circle, but with some "ripple" effects. But the hexagonal shape of the full tiling adds some *really* strong boundary conditions, so AFAIK you can't really use them to model a graphene sheet. And, well, maybe (surely) I'm not good enough at math to build the whole equivalence 😅.
We ended up pivoting to doing numerical simulations on more "boring", periodic graphene sheets / tilings. But those have some mathematically interesting phase transition properties!
Anyways, that was a fun walk down memory lane 🙂
I have a few questions about the sampling method: if each of your moves is some local change of the state, it's not clear why this should generate a uniform distribution, especially when starting from such a special place in the state space. Surely at early times this random walk will only fill some corner of the whole set of possible states. So, is it that you have to take many samples over a long time to generate a representative sample? Even just starting at a random-enough initial state would make it more believable to me. Otherwise, I would think you need a truly random permutation of all possible states to get a uniform distribution, rather than only transitions which differ in one position. This is probably well understood in MCMC but wasn't clear to me from the video.
The most interesting result from Markov chains for us is that if a Markov chain is finite and any state can be reached by some number of transitions, then that chain has an "invariant distribution" that remains the same under transitions. The invariant distribution is the limit of applying transitions many times to any starting distribution. I wish I could give some intuition on why that is, but I just can't (that's why it's not in the video!) so I'll leave it up to a proof in a statistics textbook. Once we have this fact, we just need to prove that the invariant distribution of the Markov chain that I described in the video is the uniform distribution, and that proof is on page 231 of the PDF of Gorin's book in the description. It's not too hard, so you should give it a look!
If anybody has a more intuitive way of describing invariant distributions in Markov Chains (or maybe you've seen a video), please share it!
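In the meantime, here's a tiny toy example of that fact (a made-up 3-state chain in Python, not the tiling chain from the video): apply the same transition matrix over and over to two different starting distributions and both land on the same invariant distribution.

```python
import numpy as np

# Hypothetical 3-state chain; row i holds the transition probabilities out of state i.
P = np.array([[0.5, 0.25, 0.25],
              [0.2, 0.6,  0.2 ],
              [0.3, 0.3,  0.4 ]])

for start in ([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]):
    dist = np.array(start)
    for _ in range(500):      # apply the transition many times
        dist = dist @ P
    print(dist)               # both starting points give the same invariant distribution
```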
@@physicsforthebirds
The "intuition" that I have for mcmc is the following: let's say you use some scheme where you jump from state to state to randomly sample from a probability distribution. Characterizing the scheme consists in giving the probability given the current state s_i to go to any ohter state s_j. Let's call this probability p(s_i -> s_j) = p_ij.
If you succesfully managed to sample your target probability distribution P, then in the long run you should sample any state k with a certain probability P_k (which, because you did a good job, is equal to the target probability that you're trying to simulate). For this "empirical" (ie experimentally obtained) distribution to be stationnary, you want that for any states s_i and s_j, P_i*p_ij = P_j*p_ji.
This corresponds to the fact that for this distribution to be stationnary, you should have "as many particles" (in the case of many independant experiments running in parallel) going from state s_i to s_j as the other way around (rigorously speaking, this is a sufficient but not necessary condition, also called "detailed balance condition", the true necessary condition, aka "global balance condition" is that the incoming flux of "particles"/probability entering s_i is equal to the outgoing flux, ie sum_k(P_k*p_ki) = sum_k(P_i*pik) for all i. The detailed balance condition is sufficient but not necessary but it is also often much easier to implement because it only involves balancing the fluxes between any pair of two states instead of having to sum over any accessible states).
In the case of a uniform target distribution (all P_k's are equal), this "detailed balance condition" simplifies as p_ij = p_ji, which is to say as long as, when you choose a move with some probability, you have the exact same probability to return from the new state to the initial state, this satisfies the above condition and the experimental distribution converges towards the target one (but of course you should still wait long enough that you have completely forgotten your initial state, which might take a long time).
The last thing that you have to be careful about is the case when some moves are not allowed, as described in the video. Starting from the state with no cubes at all (s_0), you only have one accessible state, the one with exactly one cube in the corner (s_1). Since this is the only available state from s_0, you might be tempted to move every time from s_0 to s_1 (with probability 1). But if you do the same reasoning with s_1, you now have 4 accessible states (either return to s_0, or add one cube in 3 possible locations), which means that if you only pick from these accessible states at random, the probability to return to s_0 is only 1/4. This breaks the detailed balance condition, as you would rather want to go back to s_0 with probability 1 (since you moved from s_0 to s_1 with probability 1).
The solution here is to "play dumb" and try to add or remove a cube from anywhere in the hexagon without using the fact that most of these moves are impossible. This is very wasteful starting from s_0, as most moves will be rejected immediately, in which case you should sample s_0 again! This might make the simulation less efficient as you could repeat positions many times, but that is the easiest way to ensure that you sample from the correct (uniform) distribution. Of course for a "typical" state, which is very far from s_0 in general, many more moves are allowed so this is less of a problem.
I think you can get an intuition for the MCMC algorithm by using a simpler game like snakes and ladders (with a board that loops back to the start). To find the stationary distribution, you could play a billion games and watch where each one ends, or you could play one long game and watch where you're likely to end up in that game. The latter is a lot more efficient because you don't have to re-simulate the moves leading up to the final position.
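A quick toy version of that, if anyone wants to see it with numbers (a looping board with no snakes or ladders for brevity, sizes made up): many independent games checked after a fixed number of moves and one long game's visit frequencies give essentially the same distribution.

```python
import random
from collections import Counter

N = 10                                 # board squares 0..9, looping back around

def roll(pos):
    return (pos + random.randint(1, 6)) % N

# Many independent games, recording where each one sits after 200 moves
many = Counter()
for _ in range(20_000):
    pos = 0
    for _ in range(200):
        pos = roll(pos)
    many[pos] += 1

# One long game, recording how often each square gets visited
one, pos = Counter(), 0
for _ in range(200_000):
    pos = roll(pos)
    one[pos] += 1

print([round(many[s] / 20_000, 3) for s in range(N)])
print([round(one[s] / 200_000, 3) for s in range(N)])
```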
@@physicsforthebirds I don't remember whose video it was, but I remember watching a video some weeks ago that used this Markov chain distribution thing to calculate where you're most likely to find the ghosts in the Pac-Man maze when they're in their frightened state.
edit: found it, it's by Retro Game Mechanics Explained
@@shadamethyst1258 I'll be stealing that
I may not understand things 100%, but seeing your animations, the bird and hearing your voice is calming
I could smell and feel the foam instantly. What a throwback
the most underrated youtuber right now, i always expect your videos or channel to have millions of views or subscribers
whu....what....dude..... the "Arctic curve" vocabulary is so perfect, it just makes so much sense. You are so very good at educating. I hope you teach at a university or something.
Happy at 3:42
Watching your videos, I really wish I had kept it together long enough in uni to get my math minor.
Classic MCMC can only approximate the stationary distribution, and it can be very difficult to predict how many steps we have to run to ensure convergence to a certain degree. But in some problems, including this lozenge tiling one, there is an elegant technique called coupling from the past that allows relatively efficient *exact* sampling from the stationary distribution of a Markov chain!
Basically the idea is you run the Markov chain from past infinity. This is possible when the state space is finite using the following trick: you randomly sample single-step transition functions of the MC, going backwards in time (sample the transition t=-1 -> 0, then t=-2 -> -1, etc.), and you compute the composition of these step transitions. When you see that the transition function from t=-N -> 0 has become a constant function (which will happen almost surely), you know all the transition steps before t=-N will not matter, since precomposing a constant function with any function gives the same constant function. So your MC has "converged from past infinity", and the sampling will be exact.
Of course, the space demand for storing a complete step transition function over the entire state space will be intractable if the state space is huge. But here comes another trick: in some problems, like the lozenge tiling one, the state space is actually partially ordered (a tiling is above another one if its corresponding cube filling contains the latter's), and we can sample the step transitions in a way that preserves the ordering (pick a vertex and flip a coin; if it's heads then fill in a cube at that vertex if possible, if tails then remove a cube if possible; all these step decisions are stored). The partial order has a top element (all cubes filled in) and a bottom element (no cube filled in), and we only need to track their destinations after all the sampled single-step transitions. When they rendezvous, we know the order-preserving single-step transitions must have squeezed all other states into that state as well, so the whole chain has converged. This allows relatively efficient sampling for a certain class of MCs with this order structure.
For details, check out the original paper, "Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics".
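For anyone curious, a compressed Python sketch of the monotone version, using the same cube-stack / column-heights state as the rejection sketch earlier in the comments. The box sizes are made up, and this is only meant to show the idea, not the paper's exact pseudocode.

```python
import random

A = B = C = 4                                # hexagon side lengths, kept tiny here

def apply(h, x, y, add):
    # One coin-flip update at site (x, y), respecting the stacking constraints.
    up    = h[x - 1][y] if x > 0 else C
    left  = h[x][y - 1] if y > 0 else C
    down  = h[x + 1][y] if x < A - 1 else 0
    right = h[x][y + 1] if y < B - 1 else 0
    if add and h[x][y] < min(up, left):
        h[x][y] += 1
    elif not add and h[x][y] > max(down, right):
        h[x][y] -= 1

moves = []                                   # moves[k] is the update used at time -(k+1)
while True:
    # Reach further into the past, reusing every previously sampled update.
    moves += [(random.randrange(A), random.randrange(B), random.random() < 0.5)
              for _ in range(max(1, len(moves)))]
    top    = [[C] * B for _ in range(A)]     # every cube filled in
    bottom = [[0] * B for _ in range(A)]     # no cubes at all
    for x, y, add in reversed(moves):        # run from the far past up to time 0
        apply(top, x, y, add)
        apply(bottom, x, y, add)
    if top == bottom:                        # monotonicity: everything else coalesced too
        print(top)                           # an exact uniform sample
        break
```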
Thanks for the lovely and informative video❤❤❤
I know the maths displayed at 0:34 is probably quite easy, but it'd be amazing if you explained it a bit😅
This notation isn't always taught in school, so sorry for not explaining it! Capital Pi at 0:34 means "the product of all of the terms on the right from a=1 to a=A", so it works the same way as capital Sigma does for sums. That means you do the fraction with a=1, b=1, and c=1, then you multiply it by the fraction again with a=1, b=1, c=2, and you keep on going until you've gone through every combination of a, b, and c. That's a lot of terms to multiply, so it gets big really fast.
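If it helps, here's the same product written as a few lines of Python (just my own quick check, using exact fractions): for side lengths 2, 2, 2 it gives 20, and for 3, 3, 3 it gives 980, the count mentioned elsewhere in the comments.

```python
from fractions import Fraction

def count_tilings(A, B, C):
    # Multiply (a+b+c-1)/(a+b+c-2) over every combination of a, b, c.
    total = Fraction(1)
    for a in range(1, A + 1):
        for b in range(1, B + 1):
            for c in range(1, C + 1):
                total *= Fraction(a + b + c - 1, a + b + c - 2)
    return total

print(count_tilings(2, 2, 2))   # 20
print(count_tilings(3, 3, 3))   # 980
```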
@@physicsforthebirds thank you sooooo much🙏🌹🌹🌹
This is a great video. Have you ever spotted that the unbounded rhombus tiling maps onto the triangular Ising antiferromagnet? This is why the middle of the model is 'liquid' (it maps onto the spin liquids).
I'm about to start a PhD on occupation processes, which are in bijective correspondence with lozenge tilings. Isn't it strange when you just learn about something and then you start to notice it everywhere? Great video!
Good science communicator, I've been binging them all now!
Thanks for the knowledge, it will be great to pass it down to grandkids in the future.
Hello again heisenberg!
For those who don't know, Heisenberg is the fresh account of "NMRIH is a great source mod", which was banned for botting/violating YouTube TOS.
Same mannerisms, over 800 subs to only popular/viewed channels, popped up right when the previous account was banned about four months ago. This account is a bot that spams and like-baits channels' comment sections for subs.
Really cool video, commenting for the algorithm. More people should see this 😊
Really liked the video. It kinda feels like it could have been longer, though; there's so much that has been touched on but not really expanded upon. Also, I didn't quite understand the connections between the tilings and physics. I mean, I'm really interested in learning more, so that part you did get across brilliantly. Overall, love your channel. Please keep making videos!
6:55 This is true even in other fields. Engineers sometimes have Legos and other toy sets as quick stand-ins for things. Not often, but it's done all the same. It gives a person another perspective on a problem they had been considering.
This is an excellent topic for a video and there is definitely a million view youtube video within this concept. Good work.
Markov chain. A nice call back to a stat mech coding homework.
I use MCMC for work and simulations! Cool to see it come up here
It seems kinda intuitive that the corners become frozen, but I think you left out an actual explanation of why that happens.
Man, I love your videos, they are really fascinating and educational at the same time. Keep it up!!
Greetings from Argentina 😊
First I've seen from you, fantastic work and remarkably gripping, thank you
Hello
I coded myself a little toy program that has a 16 * 16 * 16 unit tiling hexagon.
I can make it look entirely concave or convex, or randomly scrambled,
and I can manually flip tile triples where they all meet with a wide corner and combine into a small hexagon, which looks like adding or removing a fully visible cube in the stack.
5:35
My randomizing and manual modifications are exactly what is described from here on.
I coded my thing after watching another video that didn't mention this method, but by accident I figured out just this method myself.
My random-scramble algorithm takes about 2-5 seconds to finish;
I think a 20 * 20 * 20 would take about twice as long to scramble.
When you say lozenges I can't help but think of cough drops.
The first thought that leapt into my mind to get a uniformly random tiling would be to 1: work up an algorithm that uniquely maps a tiling number to a tiling output (e.g. in the side-length-3 case, map 0..979 -> a tiling output such that each number gives a different output, no other restriction on ordering required), 2: select a uniformly random number in the range of possible numbers, which I already know how to do with time and space complexity of roughly O(log(N)), 3: use the algorithm to transform the random number into a random layout, and 4: sip margaritas on the beach at sunset. Optionally start with point 4 and do the rest on a laptop while fending off the gulls, but if I make my rhombuses out of tortillas and make extra I can toss those to keep 'em busy. Wait, what were we talking about again? 😎
New Physics for the Birds video released!!!!!
After watching, I would like to add: the cubes can be inverted in my mind to where it's not an empty cube, but the outside of one being chipped away. I hope others can see this, because I thought it was odd.
you make incredible videos man. i seriously appreciate it
I love your videos so so much, thank you for taking the time to make them!!
I didn't realise this was a physics video until you mentioned 'frozen' and I got statphys flashbacks, lol.
YEAH NEW VIDEO YEAH!!!!!!!!!!
awesome video! keep making this stuff, it really makes my day
This is amazing!! Thank you for sharing
this is amazingly interesting, thank you for sharing this!
Is there an easy explanation for why the total number of rhombus tilings is the product over all combinations of a, b, c of (a+b+c-1)/(a+b+c-2)?
congrats on 100k! :D
Great content as always!
This is so beautiful!!
so cool
If I recall, this sort of behaviour is very much analogous to random matrices, possibly even related.
For example, take a particular kind of random matrix known as the GUE (Gaussian Unitary Ensemble).
The matrix can have many different eigenvalue distributions. However, the typical behaviour of the eigenvalue distribution is in the form of a semicircle distribution. It's very spooky.
congratulations on 100k
Great video!
The thumbnail ❤🥰
Love these videos, keep making 'em!
I've always intuitively known that there'd be the same number of each rhombus, because I imagine them as the number of "ceiling units" and the number of "wall units". Painting any one side will always only require that number of lozenges.
The visual here reminds me a lot of Minecraft terrain when it's generating.
Funny how you said at 5:35 that you add cubes to an empty cube, while to me it looked like removing cubes from a full cube the whole time.
Which tool do you use for your math/drawing sections?
Is this just a popular topic now suddenly, or were you in the same public lecture as me recently... the coincidence is coincidencing... Anyway, a good video to recap, since the last time I heard about this was quite late at night :D
Now I wonder how you get that formula to count possible arrangements. I should try it out later...
Could you do an optimization for the big ones where you start with a low-res one, then subdivide and run the Markov chain, then subdivide and repeat until you have high detail, but it's still random at both low and high frequencies?
Thank you mister bird
-(Yeah same. And)- I learned the real definition of isometric projection now! That's actually so cool, I might consider it more for game design or something like that. What do you guys think?
(Was my reply to one of the top comments btw.)
when you say that a particular solution "took a while" to generate, do you mean that you had to run MCMC for a long enough period of time to diffuse away from some low-entropy solution and reach a representative sample of the uniformly distributed state space?
tetriiiiiis yeeee YEEET
This video was on my watch later since release.
It's all well and good when you see it as a room with boxes pushed into the corner, but sometimes it looks like a cube with smaller cubes carved out, and I get dizzy.
Would you please give an intuition of why doing the random steps you mentioned is (approximately) equivalent to generating a uniformly sampled arrangement?
Now I'm just waiting for Qbert to hop into that cube shape and start bouncing around.
Now tile the hexagon using lozenges of 3 different colors in such a way that no same colors touch.
You see an empty cube in the initial state.
I see an upside down "filled" cube.
We are not the same.
Something about this feels off to me. If you are making a random tiling of the hexagon, you should also be able to "rotate" the colors of the rhombi, which could break the cube illusion. Or, with a sufficiently large hexagon, allow points where the minor angles meet, which also breaks the illusion of cubes.
Is there some way of knowing you've done enough iterations to be suitably random? Because if I was running this on my own without knowing about the frozen region, I would just assume I cut it off before changes could propagate out to the edges
That's a very good question, and there is. If you keep track of the chain that you generate and then you apply that sequence of steps starting at both the "empty cube" tiling and the "full cube" tiling, and if that sequence leads to the exact same sample once it's been applied starting at both of those tilings, then you are sampling from the uniform distribution exactly. You can read about it in the last chapter of Gorin's book in the description. For the sake of runtime though, I just plugged in a big number of steps and ran it (because I already know what a random sample should look like. Not a good way of doing science, but a good way of making pictures!)
Waiting for the Matt Parker comment. Hey Matt!
MORE PHYSICS FOR THE BIRBS
I love nerdy youtube
Can you elaborate on the combinatorial calculation at the beginning, please?
Proving that equation for the number of valid tilings is not at all an easy problem, so I left it out of the video. If you want to see the proof, go to page 11 of the "great textbook on lozenge tilings" PDF I linked in the description. It takes the whole second chapter to derive it!
@@physicsforthebirds Can't believe you replied, thank you 👑 Sounds like a valid question from my combinatorics final a couple years back.
I'm curious to see how this concept could be linked to entropy
“this is a baby toy”
_proceeds to explain math functions and probability distribution_
The human mind is so weird sometimes, I can’t believe that the rhombuses look like cubes so easily.
That's a lot of cancelling in fractions
4:31 can you explain why this shows the same behaviour?
As a furry I love hearing the teachings of a math bird 🐦❤
I watched this video before!
I want a room made like that
YES!!!!
I love toys for babies
Where would you even find lozenge tilings in physics?
Although you might not find literal lozenges in very many places, random tilings are useful toy models for other systems in statistical mechanics. For example, simple models of water molecules in ice or electron spins in magnets share many of the same "freezing" and "melting" features as lozenges.
@@physicsforthebirds Thanks!
Why, even in the limit, would your random add/remove procedure approach uniform across all tilings?
Does Markov chain Monte Carlo not encourage the behaviour of the dead spots, since all new cubes start from the center and not the edges?
Also, the reason behind the dead spaces is not very clear to me. If every space has an equal chance of being swapped, why do we see dead spaces around the circle?
Cool👍
this could be a way to procedurally generate terrain
I was confused at the part with the 3 rhombuses being cubes, because I thought the cube was a removed part of a bigger cube, and it looked so weird when another cube appeared.