Great video! I'm currently studying a book on stochastic dynamics and came across the Monte Carlo method. Your video helped me deepen my intuition and learn about other applications of the method. Thanks!
Glad it was helpful!
Excellent intro! Waiting for the next episode.
Really great video! Would you be interested in connecting this to random matrix models & stochastic processes?
Thanks for the feedback. We are planning to gradually cover these concepts!
Thank You ❤❤❤❤❤
My pleasure!
Perfect 👍
Thanks! I appreciate it.
At 11:43, how is the probability value of p(mu_current) calculated? Is it assumed to be 1?
You mean P(mu_current | data) in the Bayes equation at the top right?
@@CompuFlair If I understood correctly, you estimate P(mu_current | data) using Bayes' rule. The RHS has two terms, P(data | mu_current) and P(mu_current). The first term is the likelihood, which can be computed from a Gaussian distribution using mu_current in the equation for f(x). That leaves us with the computation of P(mu_current). How is this term computed?
We use the Gaussian function at the bottom left (which is our prior belief for the distribution of mu).
@@CompuFlair Thank you :)
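For anyone following along, here is a minimal sketch (not the code from the video) of how the right-hand side of Bayes' rule can be evaluated up to a constant inside a Metropolis step: a Gaussian likelihood times a Gaussian prior on mu. The data, the noise scale sigma, and the prior parameters prior_mu and prior_sigma below are illustrative placeholders, not values taken from the video.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=100)   # fake observations (illustrative)
sigma = 2.0                                        # assumed known noise scale
prior_mu, prior_sigma = 0.0, 10.0                  # broad Gaussian prior on mu (assumption)

def log_likelihood(mu):
    # P(data | mu): product of Gaussians, computed in log space for stability
    return np.sum(-0.5 * ((data - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

def log_prior(mu):
    # P(mu): the Gaussian "prior belief" mentioned above
    return -0.5 * ((mu - prior_mu) / prior_sigma) ** 2 - np.log(prior_sigma * np.sqrt(2 * np.pi))

def log_posterior(mu):
    # Bayes' rule up to the normalizing constant P(data)
    return log_likelihood(mu) + log_prior(mu)

# One Metropolis step: accept mu_proposed with probability
# min(1, P(mu_proposed | data) / P(mu_current | data))
mu_current = 0.0
mu_proposed = mu_current + rng.normal(scale=0.5)
if np.log(rng.uniform()) < log_posterior(mu_proposed) - log_posterior(mu_current):
    mu_current = mu_proposed
```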
If you are running a simulation, how do you know if the point is inside the circle? Don’t you need to know pi a priori to determine that?
The clip I showed in the video for computing the value of pi is an actual simulation with a computer. When we generate a new point, we know its coordinates, so we can check whether it is inside or outside the circle by comparing x^2 + y^2 with r^2; pi itself is never needed for that check.
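A minimal sketch of that check, assuming the standard unit-circle-in-a-square setup: the inside/outside test only compares x^2 + y^2 against r^2, so pi never appears in the test itself.

```python
import random

n_points = 1_000_000
inside = 0
for _ in range(n_points):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)  # random point in the square
    if x * x + y * y <= 1.0:                              # inside the unit circle?
        inside += 1

pi_estimate = 4 * inside / n_points   # area ratio: circle / square = pi / 4
print(pi_estimate)
```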
I wonder why it was deleted and then reuploaded.
Thanks for asking. We decided to remove the background music as it annoyed some people.
As a conspiracy theorist, my answer is that they removed the bits that give you the key to build an atomic bomb yourself and win every poker game ever.
Ah, you figured it out :)
Why on Earth do people keep calling computationally deterministic generation of combinatorics "randomness"?
That is a great point! While the method assumes truly random events, our computers generate pseudo-random numbers. Generating truly random points with classical computers is quite challenging. Maybe quantum computers (due to their intrinsic randomness) will solve this issue. Until then, we might need to stick with pseudo-random numbers.
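To make the determinism concrete, here is a toy linear congruential generator; the constants are the classic Numerical Recipes values, used here purely for illustration. The output looks irregular, but re-seeding reproduces exactly the same sequence.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # Classic linear congruential generator: state fully determines the sequence
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m   # map to [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(5)])
# Re-seeding with 42 reproduces exactly the same five numbers -- deterministic.
```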
@@CompuFlair The problem originates from coordinate-system neusis with the real number line metric, which postulates an empirically impossible, instantly infinite measurement resolution.
In terms of probability theory, when picking a random point from the real line, with probability 1 that point is non-computable, i.e. it can't be given any unique mathematical name that could serve as an input for a computation.
The supposed "randomness" is thus a self-caused issue of an anti-scientific theory of mathematics, which is also a metaphysical impossibility.
In empirical reality, computing more resolution is a temporal process, not a timeless instant. The attempt to exclude the applied mathematical theory from the "measurement apparatus" has led to a most unfortunate, deep confusion.
I share your confidence that natural quantum metric can solve the issue, and in fact, my foundational hobby has made good progress on that front. :)
Computation theory provides strong computational proofs at least for deterministic unpredictability: the undecidability of the Halting problem and the computational irreducibility that Wolfram has been talking about.
Maybe we have been confusing logical, computational and physical unpredictability with "randomness" all the time.
For most of these methods you can prove that they work with real randomness. Some algorithms produce numbers that behave so chaotically that simply using them instead of real random numbers empirically produces results as good as the theoretical results for true random numbers.
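As a rough empirical illustration of that claim (not a proof), one can compare a pi estimate driven by Python's default pseudo-random generator with one driven by random.SystemRandom, which draws on the operating system's entropy source; the sample size below is arbitrary.

```python
import random

def estimate_pi(rng, n=200_000):
    # Standard Monte Carlo pi estimate, using whichever random source is passed in
    inside = sum(1 for _ in range(n)
                 if rng.uniform(-1, 1) ** 2 + rng.uniform(-1, 1) ** 2 <= 1.0)
    return 4 * inside / n

print("pseudo-random:", estimate_pi(random.Random(123)))     # Mersenne Twister PRNG
print("OS entropy:   ", estimate_pi(random.SystemRandom()))  # OS randomness source
```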
@@keypey8256 The real question is, does ontological randomness really exist, and if so, how exactly?
Stephen Wolfram's discussion of computational irreducibility has helped to clarify and sharpen the question. The main lesson is that deterministic computation by very simple algorithms can produce output that is not necessarily predictable by any other algorithm and is thus essentially unique.
Deterministic non-predictability and indeterministic randomness are ontologically different categories. Wolfram has studied "small-scale" simple programs; on the holistic scale of computing we meet the undecidability of the Halting problem, which states that there can't exist an Oracle that could predict, for all possible programs, whether they terminate or not.
The Halting problem result is also, in its way, a deterministic logical computation implying holistic unpredictability. As such, it does not give support to non-deterministic randomness, but only to unpredictability.
As a metaphysical postulate, ontological genuine randomness qualifies as a magical wand ex nihilo argument. Computational irreducibility and the Halting problem present empirical and logical proofs for ontological epistemic limitations of computation, leaving open the ontological question whether we live in a wholly or partially computable Matrix.
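For a concrete instance of the "simple program, unpredictable-looking output" point, here is a small sketch of Wolfram's Rule 30 cellular automaton; the width and number of steps are arbitrary choices for illustration.

```python
def rule30_step(cells):
    # Rule 30: new cell = left XOR (center OR right), with periodic boundaries
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

width = 129
row = [0] * width
row[width // 2] = 1          # start from a single black cell in the middle
center_column = []
for _ in range(64):
    center_column.append(row[width // 2])
    row = rule30_step(row)

# The center column is fully deterministic, yet looks statistically irregular
print("".join(str(b) for b in center_column))
```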
First
Hope you enjoyed it as well ;)
ai voice