that reminds me, I created what I call parker cosine (cosp): cosp_n(x) = 2 ∫ from −n/2 to ∞ of x^(2T) cos(πT)/(2T)! dT. At n=1, it is just ever so slightly off!
Another simpler way: - Take random pairs of numbers in [0, n] and interpret them as x and y coordinates within an n*n square - For each pair, calculate the distance to the origin. - Count the number of pairs for which that distance is less than or equal to n Effectively, you've just determined whether or not that point is inside of a quarter-circle centered at the origin, with a radius n. So, the probability that a pair will be in that group is: p = ((area of circle) / 4) / (area of square) = ((πn^2) / 4) / n^2 = π / 4
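As a sketch of this quarter-circle method in Python (the function name and the loose 0.05 sanity bound are my own choices, not from the video):

```python
import random

def estimate_pi(samples: int, seed: int = 0) -> float:
    """Monte Carlo estimate of pi via the quarter-circle method."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()  # random point in the unit square
        if x * x + y * y <= 1.0:           # inside the quarter circle of radius 1
            inside += 1
    # P(inside) = pi/4, so scale the observed fraction back up
    return 4 * inside / samples

print(estimate_pi(100_000))
```

With 100,000 samples the standard error is around 0.005, so the estimate reliably lands within a few hundredths of pi.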
Actually, the random function gives you a number in the interval [0;1). So the random number is between zero and one and can take on the value 0, but not the value 1. When you multiply by 120, you get a number in [0;120). *The correct way to proceed* at this point is to *round down and add one*. This gives a number in the range [0;119] plus 1, which is in the interval [1;120]. If on the other hand, you round up, you mess with the probabilities, because in case the random number is 0, you get roundup(0·120)=roundup(0)=0, which is outside of your desired range. Also, drawing 120 is slightly less likely because the event of the "direct hit" (where no rounding is required) is missing (which is kind of a sloppy explanation, but I hope y'all get what I mean).
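A quick Python sketch of the difference (random() here plays the role of Excel's RAND(); the helper names are mine):

```python
import math

def floor_plus_one(u: float) -> int:
    # round down, then add one: maps [0, 1) onto {1, ..., 120} evenly
    return int(u * 120) + 1

def round_up(u: float) -> int:
    # rounding up instead: u = 0 falls outside the desired range
    return math.ceil(u * 120)

print(floor_plus_one(0.0))  # lands in range even for the smallest RAND() output
print(round_up(0.0))        # out of range
```

The floor-and-add-one version gives every face from 1 to 120 an equal slice of [0, 1); the round-up version maps an input of exactly 0 to 0.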
Mathematician: But the measure of any finite set is zero! So the probability of hitting 0 is zero Engineer: According to the IEEE 754 specification, there are exactly 2^52 double-precision floating point values in the range [1, 2). This range is used for the initial rand() computation, since they all fall between two consecutive powers of 2, and therefore all have the same “exponent” under the standard (unlike numbers from 0 to 1, which must worry about cross-over points such as 1/2, 1/4, 1/8, etc… on their way down to zero, with the smallest possible exponent being -1022) This initial random number is then adjusted by subtracting 1, which gives a uniformly pseudorandom number in the range [0, 1). Thus the probability of returning exactly zero is 1 in 2^52 (certainly far smaller than the experimental variance in our calculation of pi, but it’s still worth writing correct code)
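The [1, 2) trick can be imitated in Python through bit manipulation (this is a sketch of the idea, not any particular library's actual implementation):

```python
import random
import struct

def rand_unit(rng: random.Random) -> float:
    """Build a double in [1, 2) from 52 random mantissa bits, then subtract 1."""
    mantissa = rng.getrandbits(52)
    # exponent field 0x3FF encodes 2^0, so the value is 1.mantissa, i.e. in [1, 2)
    bits = (0x3FF << 52) | mantissa
    return struct.unpack('<d', struct.pack('<Q', bits))[0] - 1.0

rng = random.Random(0)
samples = [rand_unit(rng) for _ in range(10_000)]
print(min(samples), max(samples))
```

Every one of the 2^52 mantissa patterns is equally likely, and exactly one of them (all zeros) yields 0.0 after the subtraction.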
The probability of 2 random numbers being coprime is only 6/π² if you consider _ALL_ numbers, like, from 1 to infinity. If you only consider numbers within a specific range (such as from 1-120 or 42-63360 or whatever), the probability will be slightly higher than 6/π², and you'll get an experimental approximation that likely undershoots pi. Not sure if that was mentioned or not. Edit: yes it was 8:41
If you happened to get really lucky and land exactly 304 coprime pairs, it would give an approximation for pi of about 3.1414043..., which is astronomically close :).
As soon as you said 6/π², I thought of the Basel problem, but I couldn't prove it before you got to that point in the explanation. But once again, we see that familiar numbers do indeed indicate a connection.
Right now I'm going to write some code so that the range of random numbers generated is much larger. Thanks Matt, this will be a good test since I'm only a beginner at coding.
2 years and almost 2,400 comments later I'm sure someone has already suggested this, but you can use =RANDBETWEEN(1,120) for a much easier way to get a random integer between 1 and 120. You can also simplify your coprime formula to =(GCD(A1,B1)=1)*1 -Excelphile
He was so tired of rolling his dice at 12:30 that he said 63 was broken into factors of 3 and 31. I love that this mathematical mad man did this video.
Weird, I almost didn't notice his accent. Something tells me he's on the other side of the world. OR he was just being nice and allowed people to prepare a day in advance in order to shine on actual Pi day ;)
Hey Matt, I know I'm months behind on seeing your awesome video, but I have a question; Could one potentially generate more results with fewer rolls by, say, rolling 3 'random' numbers (a, b, and c) and comparing 'ab', 'ac', and 'bc' for cofactors and coprimes? Or, would that introduce some kind of error that defeats the purpose of such a method?
Amazingly, this does not change the expected number of coprime pairs. This is a consequence of the fact that expected values simply add up regardless of dependency. It does, however, change the standard deviation.
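A quick simulation backing this up (a sketch; the trial count, seed, and tolerance are arbitrary choices of mine):

```python
import math
import random

def coprime_count_three_rolls(rng: random.Random, n: int = 120) -> int:
    # one trial: roll a, b, c and test all three pairings for coprimality
    a, b, c = (rng.randint(1, n) for _ in range(3))
    return sum(math.gcd(x, y) == 1 for x, y in [(a, b), (a, c), (b, c)])

rng = random.Random(42)
trials = 50_000
mean = sum(coprime_count_three_rolls(rng) for _ in range(trials)) / trials
# by linearity of expectation this should be 3 * 8771/14400, about 1.827,
# exactly as if the three pairs had been rolled independently
print(mean)
```

The means match; only the spread around the mean changes, since the three pairs share dice and are therefore correlated.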
When I got here there were 1459 comments. Did you know if you take the sum of the cubes of the digits of 1459, and then take the sum of the cubes of the digits of the result, you'll get back to 1459? Try it!
I wrote a Python script for this which did 100k iterations with random numbers in the range 0 to 1M, and got 3.1416 as the result. It was amazing to see how the result converged towards 3.14 as the number of iterations increased ☺
You can also put points in a grid in a square containing a circle and take the ratio of points where x^2+y^2 is smaller/bigger than 1. Funnily enough, this actually corresponds to the integral giving the volume of a tube (a unit circle 1 unit above the ground, and the volume underneath it).
So, I just had to pull out a cpp file and do it myself. Using numbers ranging from 1 to 2^16 and 1000000 samples, I was able to get PI with 3 digit precision. Kinda impressive considering you can't even notice the time it takes to execute.
If somebody wants to try it in Python (3.5+, where math.gcd is available):

# pi from gcd(m, n)
import math as m
import random as rand

max_n = 10 ** 6  # maximum random number (max itself would shadow a builtin)
n = 10 ** 3  # number of tries
count = 0
for i in range(n):
    if m.gcd(rand.randint(1, max_n), rand.randint(1, max_n)) == 1:
        count = count + 1
pi = (6 / (count / n)) ** (1 / 2)
error = (pi - m.pi) / m.pi * 100
print('my PI = ' + str(pi))
print('error = ' + str(error) + '%')
I don't know about python 3, but this doesn't work in python 2. You have to import gcd from fractions, and (1/2) would evaluate to 0 because that's integer division. Same for count/n.
You could use floor(rand()/rand()) to get a random positive integer that goes up as high as the precision of rand allows. I'm not certain if the probability is uniform, though.
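It isn't uniform: floor(U/V) is 0 exactly when U < V, which happens half the time, so the single value 0 soaks up probability 1/2 all by itself. A quick empirical check (a sketch, seeded for repeatability):

```python
import math
import random

rng = random.Random(1)
samples = [math.floor(rng.random() / rng.random()) for _ in range(100_000)]
zero_fraction = samples.count(0) / len(samples)
# P(floor(U/V) = 0) = P(U < V) = 1/2, so this is nothing like uniform
print(zero_fraction)
```

The remaining probability mass thins out roughly like 1/k^2 over the larger values, so small integers dominate heavily.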
Thank you Matt. This is going to make the terrific activity when my Year 7 class studies Probability at the end of the year - it links together the work we will have done on prime numbers, and on pi - and we'll have an Excel exploration at the same time.
This is, I have to say, a mind-blowing proof. I'm actually from an Electronics and Communication background, and Fourier series and transforms are the basis of our job 😒. I know that using Fourier analysis you can get pi^2/6 = 1/1^2 + 1/2^2 + ..., and we use that result many times in electronics. When you first showed the probability, it reminded me of that familiar result, so I stopped your video and tried to solve it that way. I failed, and then went through your proof. It seemed mind-blowing to me.
The moment that 6/pi^2 formula came up, I was laughing my ass off, because it occurred to me why you were rolling giant dice, and how you would approach the problem. Pi is used in a lot of really weird stuff, lol.
1992jamo The function RANDBETWEEN is just a simpler way to do what Matt did in the video. I don't mean to say that a computer can generate a truly random number.
Happy PI-Day. This video was awesome! I actually replicated it with slightly bigger max. rnd.numbers (100Mio.) and some more trials (5k) - estimation error is around 0.00-something. Loving it! Thx :)
The dice have only faces 8 and 3. What I want to know is what purpose they serve; why would anyone need a set of 500 d2's or d3's that only show 3 or 8?
My guess is in rolling pairs. 2d6 typically averages exactly 7. You can get this by adding the lowest and highest possible rolls from each die. 3 and 8 make that 11, however. While that would generate some very powerful D&D characters, I don't know what the exact purpose of these is.
Marginal return on effort seems to be limited here. By that I mean, increasing the max number only gets you a few fractions of a percent closer to the real value. Which compares with the infinite sum from last year, where adding a lot of terms adds only a minor and diminishing correction each time. For next year's Pi day, divide 32 by 10, then come to America and visit the Indiana legislature and try and convince them that it should be changed to 3.2 so you'll be correct. Again.
Hey Matt, keyboard shortcut for ya. To lock a cell in a formula, in lieu of typing the "$" , you can use F4. Multi press of F4 will cycle through the locking options (both row and column, column, row).
I'm currently running a program that generates pi the way described in the video, and N > 10⁹ pairs in, I'm already at 3.1415. And it's not a lucky stopping point; it has stayed close to this for a long time. This is amazing. Hail maths, and hail you, oh Matt.
From a practical perspective, I’m not sure how you would actually generate a random, unbounded integer (via a computer, or otherwise). However, if you’re interested in what “probability” means for an infinite, countable set (like the integers), then see the following: en.m.wikipedia.org/wiki/Natural_density (Which is studied as a part of en.m.wikipedia.org/wiki/Probabilistic_number_theory ) As an example, the probability of randomly choosing an even number turns out to be exactly 50% with this definition, as we might hope (However, if your infinite set is “uncountable,” such as the continuous number line, then you would want to learn about Measure Theory and Probability Theory)
Note: Every finite set will have zero probability, but various infinite sets will have non-zero probability Basically, you just have to make sure that your number generator results in probabilities that match the Natural Density for every subset of the Natural Numbers (You could easily include negative integers by randomizing the sign, but Natural Density itself only cares about the Natural Numbers, which start from zero)
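A finite-cutoff sketch of natural density in Python (take the fraction of qualifying numbers up to N and let N grow; the function name is mine):

```python
def density_up_to(predicate, n: int) -> float:
    """Fraction of {1, ..., n} satisfying the predicate."""
    return sum(1 for k in range(1, n + 1) if predicate(k)) / n

# the density of even numbers tends to 1/2 as the cutoff grows
for n in (10, 1_000, 1_000_000):
    print(n, density_up_to(lambda k: k % 2 == 0, n))
```

Natural density is exactly this limit as n goes to infinity, which is why "a random integer is even with probability 1/2" comes out as hoped.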
I had this quite fun assignment in a maths class at my uni where we were asked to approximate pi any way we wished, as long as we could back up our method. I decided to start from the circumference of a circle being pi*d, or in this case pi*2r, and then used the circumference of polygons of "radius" 1 with increasingly many edges.

If you make it easy on yourself with a triangle, for example, you can divide it into 3 pieces by drawing a line from the centre to each corner. Each of these sections has an angle of 360/3 degrees. For one edge, name the two "radii" of 1 a and b and slap the cosine rule against it: sqrt(1^2 + 1^2 - 2cos(120°)) ≈ 1.73. Triple this for the number of edges and you've got your first approximation: pi*2r = 2pi ≈ 5.196 -> pi ≈ 2.598. Not that great, but for a triangle, not all that bad either.

I found that for any polygon with N edges you can write its circumference as a function of N: the 360/3 becomes 360/N, and at the end you multiply by N instead of 3. This leaves us with this beauty of an equation: N*sqrt(2 - 2cos(360/N)). That let me quickly drag and drop a column in Excel showing the polygons approximating pi for each N. To get a taste of the result, plug in N = 100: 100*sqrt(2 - 2cos(3.6°)) ≈ 6.28215 -> pi ≈ 6.28215/2 = 3.141075. A hundred terms in, we suddenly have pi accurate to the third decimal. That's obviously slow progress, but with such a simple algorithm I could easily bump up N and reach quite decent accuracy; for example, at N = 1000 we get 3.141587, which rounds to two more correct decimals. As we allow N -> infinity, we of course approach the exact value of pi.
Nothing too groundbreaking there, just thought it was a fun assignment and I was pretty stoked about my method actually working out :p
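The N-gon formula from that assignment drops straight into Python (a sketch; the function name is mine, and I use radians since math.cos expects them):

```python
import math

def polygon_pi(n: int) -> float:
    """Half the perimeter of a regular n-gon inscribed in the unit circle."""
    # each edge is a chord of angle 2*pi/n: length sqrt(2 - 2*cos(2*pi/n));
    # perimeter / 2 approximates pi since circumference = 2*pi*r with r = 1
    return n * math.sqrt(2 - 2 * math.cos(2 * math.pi / n)) / 2

for n in (3, 100, 1000):
    print(n, polygon_pi(n))
```

Since sqrt(2 - 2cos θ) = 2 sin(θ/2), this is the classic n·sin(pi/n) inscribed-polygon bound, converging to pi from below at a rate of about pi^3/(6n^2).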
You could probably work out the functions you'd need in a couple hours at most. Then just know that you can drag to duplicate code, other than using a different row.
13:02 At my job I often have to sort things by tossing them into different bins for several hours, and I know that feeling; I think it's common for that kind of task. You lose track and instinctively put something in the wrong one despite knowing it doesn't go there, only to realize afterward.
Fun way to get a very close estimate with excel: Do a target-value analysis of the final value for pi and make the variable some cell that has no influence on the result. Excel will keep searching until it gets within 0.001 of the target value (PI)
What if you alternated rolling dice so each roll was used in 2 calculations. i.e gcd(a,b) , gcd(b,c) , then gcd(c,d) etc. Would that change the probability?
jonlottgaming, I do not know, but SorenEragon had a similar question and according to SmileMPV the probability is the same, but there is a different standard deviation.
this is a Python script that does the same thing:

import random
import math
from fractions import gcd  # on Python 3.5+ use math.gcd instead

intsize = 1000000
a = []
b = []
length = 10000
portion = 0
for i in range(length):
    a.append(random.randint(1, intsize))  # start at 1: randint(0, ...) can return 0
    b.append(random.randint(1, intsize))
    c = gcd(a[i], b[i])
    if c == 1:
        d = 1
        portion = portion + 1
    else:
        d = 0
    print(a[i], b[i], d)

pi = math.sqrt(6 / (portion / length))
print("")
print(length, portion)
print("Pi is", pi)
# Here's how I did it in Ruby. There might be a more compact way do create the Enumerator but I've never worked with streams of randoms before so I just wrote it by hand. A = 39393939 # max random value N = 39393939 # iteration count coprimes = Enumerator.new { |y| loop { y
Very cool video! Here's a suggestion for next Pi Day: Run a Monte Carlo simulation of pairs (x, y) of random real numbers from -1 to 1 (or 0 to 1 if you prefer) and count how many are inside the unit circle, i.e., with x^2 + y^2 < 1. The fraction should be π/4, so multiply the final result by 4 to obtain an estimate of π.
The RAND() function in Excel is limited to a precision of 1 x 10^-9, meaning the pool of distinct random values is capped once you ask for numbers between 1 and n > 1 x 10^9. You will see the number of coprimes found decrease dramatically if you step above 1 x 10^12, significantly distorting your results if you use RAND(). The largest number compatible with the GCD() function is 2^53, so if you want to push Excel to the limits you will need to use the RANDBETWEEN() function for your trials, i.e. RANDBETWEEN(1,2^53).
Enjoyed this. Thanks. Terminology quibble, which may just be a US vs UK thing: you kept referring to gcd as "greatest common denominator", whereas in my experience it should be called the "greatest common divisor". In fact I have more often seen gcf, for "greatest common factor" used in this context.
Jonathan Beeson How are there not more comments on this verbal gaffe? There's no such thing as a greatest common denominator; the LEAST common denominator is what you use to add fractions like 1/6+3/10 (least common denominator is 30; 5/30+9/30=14/30=7/15). Using a greater common denominator than 30, like 60 or 300 or one of the other infinitely numerous multiples of 30, is just a silly waste of time. I'm starting to think Matt Parker doesn't actually know anything about math. I bet he learned the Pythagorean Theorem from the Scarecrow in The Wizard of Oz.
Matt was obviously thinking "gcd", as in greatest common divisor, but he muddled it with lcd, least common denominator, so "divisor" came out "denominator". After he said it once he just kept repeating the same mistake.
+Bill Schlafly No one in the class tried to correct the teacher's error? It would have been fairly easy to point out that the text book said "divisor" or "factor", not "denominator".
Here in New Zealand we were taught it as greatest common denominator, but I admit that it doesn't really make sense. What we were taught instead of least common denominator is least common multiple. It's just a regional thing, Matt clearly knows what it is mathematically, it's just that the words are different.
Wrote the code in C#. I found that I could calculate Pi to six decimal places (the precision of a floating point number) with an upper limit on random numbers at 2,147,483,647 (max value of a 32-bit signed integer) and a trial size of 78,283. Given it took Matt 1 hour, 58 minutes, and 52 seconds to roll 1,000 times, to get Pi to the same accuracy as my trusty computer did (with the same rolls and no errors), Matt would have been rolling for just over 6 days and 11 hours, and with a particularly monstrous 2,147,483,647-sided die.
I'm actually more impressed by the ability to look at two random numbers and know if they are coprime within a few seconds.
Ha. The Euclidean Algorithm is O(log n) for gcds. But since he's limiting n
Yeah I think he knows the primes pretty much, but the coprime part is harder
You only have to test each number for divisibility by four prime numbers, and three of those are trivial tests (evens end in 0,2,4,6,8: multiples of 5 end in 0 or 5: multiples of 3 have digits that sum to a multiple of 3) so it becomes easier than it sounds, and you'd also start testing each time with the smallest prime, 2, and work up, so that only rarely would you rule out 2, 3 & 5 as co-factors, at which point if you've also ruled out one or both being primes then the only option left is that they'd share a prime factor of 7.
Having gone to the 'all the rolls' video, then picked a time point about Matt's emotional state from the comments underneath it, I was immediately shown a pair of rolls that proved my last comment incorrect, as it showed a pair whose only common prime factor was 23. I now think that you would need to test for divisibility by all primes up to 37, but for each prime factor you would rule out a large-ish percentage of the remaining possible combinations, so you'd only got to the largest primes a vanishingly small percentage of the time (Assuming you'd memorised and eliminated any actual primes first). I make it 37 because you'd need them to be coprime but not actually primes, so they must have at least one other prime factor each. To make it to the largest factor then the other prime factors that they don't share must be as small as possible, so keep to one prime factor each, and make them the smallest two primes available, so that one is a multiple of 2 and the other is a multiple of 3, but both are multiples of some largest prime. The maximum limit is thus 1/3 of 120, or 40, with the largest prime under that limit being 37. So yeah it was potentially more impressive than I first thought.
In my original, and somewhat incorrect, comment I had assumed that any combination where one or both were primes to be essentially trivial cases, so had calcuated the largest prime test only for a situation where two composite numbers had been rolled. A pair of primes are automatically co-prime, but I hadn't fully considered the situation where one was prime and one was composite.
Ok so add an extra rule that says that for all instances with one prime being 41, 43, 47, 53 or 59 double check that the other number isn't twice the value of the prime. If your one prime is 29, 31 or 37 you need to check both twice and thrice the number. If it's 19 or 23 you check twice, thrice and five times. If it's 11, 13 or 17 you need to check twice, thrice, five times and seven times.
But for all combinations where neither number is itself prime the largest prime test (I think) is 37. And for those where one is prime, it's easier to multiply the prime up (by the multiples mentioned above) and see if that matches the composite one, rather than simply trying sucessive prime tests on the composite one.
Me: Can you calculate pi?
Matt: 3.04
Me: I think it's a bit higher.
Matt: 3.05?
payton richards 9 left
It’s a Parker pi!
this comment was too funny, it absolutely made my day 😂😂
Right after your last sentence, TH-cam answers "Less" :)
3.14 ≈ 4(1 − 1/3 + …
a suggestion for next year:
1. at a constant rate, dig a tunnel through the center of the earth
2. at the same rate, walk along a path formed by a great circle around the earth
3. divide the walking time by the digging time
you can expect some error but it would make for great video
mthielssalvo
Please delete your comment, he might literally just do it!!
mthielssalvo
Your method is flawed!! Considering the fact that the Earth is flat surrounded by the great circle of iceberg wall, how would digging a tunnel through the center of the earth help??
😂
Your Pal Nurav Well in that case, walk around along that ice ridge and then walk from the ridge to the center of the circle. But of course, the Earth is neither flat nor round, but is in fact a figment of your imagination. Wake up, child. You've slept for so long.
Your Pal Nurav not to mention the earth isn't a perfect sphere.
Mephostopheles
🕔
🎵Beep Beep Beep Beep Beep Beep🎵
Good morning Mephostopheles, thanks for waking me up I can see the reality now
When you're doing a time lapse like this in the future, can you please put an analog wall clock in the frame behind you so we can see it whirl around when you're working?
Yes. Also, put a baby next to you and then, we'll get to see it grow.
@@solidwaterslayer How about put a 5 year old in the same room but give them nothing to do, then we can watch frustration grow
@@silverjohnson3163 I do not wish a bored 5 year old on anyone, let alone themselves. Nooo thank you!
"I probably made some mistakes... I'm kinda hopping the mistakes have averaged out."
It's good coming back to older videos and seeing that the spirit of Matt hasn't changed one bit in essence.
From that day forward it was forever known that Parker pi is equal to 3.052338478336799.
Actually Parker pi changes each year, making it even more Parker. Also sorry if I'm a bit late l o l
There are 120^2=14400 possible pairs of numbers between 1 and 120.
8771out of them are coprime pairs.
This gives a value of pi~sqrt(6*14400/8771)~3.13857.
'8771 out of them are coprime pairs.' how did you figure it out? bruteforce or not?
@@f0kes32 You can use the excel method that Matt uses in his video but instead of stopping at 1000, you keep going til you hit 14400. Also, instead of having it give you 14400 random numbers, you have it give you one of every possible option. Count them up and, voila.
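For anyone wanting to verify the 8771 figure outside of Excel, a brute-force sketch in Python:

```python
import math

n = 120
# count ordered pairs (a, b) with 1 <= a, b <= n and gcd(a, b) = 1
coprime_pairs = sum(
    1
    for a in range(1, n + 1)
    for b in range(1, n + 1)
    if math.gcd(a, b) == 1
)
pi_estimate = math.sqrt(6 * n * n / coprime_pairs)
print(coprime_pairs, pi_estimate)  # 8771 pairs
```

All 14400 combinations are checked, so this is exact rather than sampled, matching the 3.13857 figure above.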
@@f0kes32 Euler's phi function. The number of coprime pairs with both numbers from 1 to 120 is 2·(φ(1) + φ(2) + … + φ(120)) − 1 = 2·4386 − 1 = 8771, so you just sum the totients up to 120.
@@anirbanmallick8502 thank you
If you bump this up to 200-sided dice, there are 40,000 combinations
24463 of them are coprime
This gives you pi = 3.132, which interestingly is even less accurate.
As Pierre Roux suggested, I have used the digits of Pi as a source of random numbers from which to calculate pi! Taking the first 10^6 digits of pi, and splitting them into 10^5 integers of length 10, I got 5*10^4 pairs of random integers (assuming the digits of pi are truly random - this is unproven). Comparison of these integers yields the estimate 3.14099105107796, which is accurate to a 0.00019% error :)
PI-Ception!
you mean 0.019% error... You haven't multiplied by 100.
That is goddamn brilliant.
beautiful idea !
Adam Brown so you used pi to find pi
So pi begets pi... interesting, lol.
I'm probably not the first person to mention this, but in Excel you can have it just count a column in its entirety. You don't need to define a set range, you can just put COUNT(C:C) and then you can add more rows at will without having to edit that function.
oh wow it’s the real 12tone
You can also use more complex setups to do more complex things: reference the maximum cell, or simply keep a counter and use its maximum as the reference. In the end, with INDEX() you can do pretty much anything.
Could we do some programming 🤔🤔
I never know that actually
O.K. I did the whole column method, worked out a value of pi and compared it to the actual value of pi and took the difference. I then ran the thing 100 times (selected in advance so as to not bodgy up the value to be close as explained in the actual video). My total difference from pi was -1.8*10^-5. This used 1,048,575 rows and a maximum integer of 1,000,000.
12:20 "63 has a 3 in it, it is 31*3" XD, You need a nap Matt
He said 61 I thought that too then dobble checked it
Parker division
No @@malharr9993, you don't get it. 31*3 = 93. Now, I know you were probably a little child when you made this, so I'll tell you: 31*3 = 30*3 + (1*3) = 90 + 3 = 93. He said 63. 31*3 = 93, not 61.
Euler strikes again. great video!
LOVE your channel.
Fun fact: Euler got the right answer by using a completely incorrect method. As I recall, he wrote the sine function as a product of its roots, which isn't accurate, but it worked out correctly. Since then, we've proven it several other ways and concurred that it is actually (pi^2)/6
Wait wtf are you doing here i just left your channel stop following me help
Never would I think you to be a mathsy person lol
I from 1 year future hi past
the montage starts at 3:14, coincidence? i think not
Which is less than pi and therefore a Parker Timing
No, actually it is slightly more: like literally about 0.16 seconds (yes, that was modified to best fit pi) after 3:14.
Pi minutes would be actually 3:08
Ah so that’s why the audio cuts off.
Circle every where
For those following along at home, these are the methods of calculation he has used in order of decreasing error: calculating the first terms of an infinite series (off by 3.1%), rolling two dice 500 times (off by 2.8%), timing a pendulum of known length (off by 0.433%), weighing a cardboard circle (off by 0.31%), and measuring a circle using pies (off by 0.1035%).
Love that the most accurate results came from the method he explicitly stated was inaccurate.
so if you wanted to do this perfectly, you would need 2 infinitely sided die, which are spheres... PI strikes again
Gravity Falls called.
The infinity sided dice is a type of apeirohedron die that was brought back by Ford from another dimension. These type of dice are outlawed in 9,000 other dimensions due to the properties of their roll. Because of their infinite sides, anything can happen when one of these dice is rolled.
Our faces could melt into jelly. The world could turn into an egg! Or, you could just roll an eight. Who knows?
…is that actually why there’s a pi in there???
Are all of the weird pi appearances like that???
The proof for the sum of 1/k^2 is such a beautiful proof, it's absolutely amazing
I know all PI digits, not in the correct order though
Wicchi so do I.
I know (Binary) 1000 digits of pi.
Gideon McKinlay, do you mean 1000 (1111101000) digits of pi in binary or 8 (1000) digits of pi in decimal form?
@@nathanderhake839 3.1415926. So yes, decimal 8 (Binary 1000) digits of pi in decimal form
Vihart: 3.14199999
how long until the Parker square jokes when he doesn't perfectly calculate pi
Sammy Saperstein Δt=0.00000000314s
Sammy Saperstein
probably a Parker square minute or something...
Parker's Pi
he parker squared the circle.
that reminds me, I created what I call parker cosine(cosp):
cosp_n(x) = 2 * integral from -n/2 to ∞ of x^(2T) * cos(πT)/(2T)! dT
At n=1, it is just ever so slightly off!
Another simpler way:
- Take random pairs of numbers in [0, n] and interpret them as x and y coordinates within an n*n square
- For each pair, calculate the distance to the origin.
- Count the number of pairs for which that distance is less than or equal to n
Effectively, you've just determined whether or not that point is inside of a quarter-circle centered at the origin, with a radius n. So, the probability that a pair will be in that group is:
p = ((area of circle) / 4) / (area of square)
= ((πn^2) / 4) / n^2
= π / 4
i.e., in C:
#include <stdint.h>
#include <stdlib.h>  /* arc4random_uniform(): BSD/macOS; use rand() elsewhere */
#include <stdio.h>
#include <math.h>
static const uint32_t n = 1000000;
int main(void)
{
    uint32_t count = 0;
    for (uint32_t i = 0; i < n; ++i) {
        double x = arc4random_uniform(n);
        double y = arc4random_uniform(n);
        double distance = sqrt(x * x + y * y);
        if (distance <= n)
            ++count;
    }
    printf("pi is approximately %f\n", 4.0 * (double)count / n);
    return 0;
}
Actually, the random function gives you a number in the interval [0;1). So the random number is between zero and one and can take on the value 0, but not the value 1. When you multiply by 120, you get a number in [0;120). *The correct way to proceed* at this point is to *round down and add one*. This gives a number in the range [0;119] plus 1, which is in the interval [1;120]. If on the other hand, you round up, you mess with the probabilities, because in case the random number is 0, you get roundup(0·120)=roundup(0)=0, which is outside of your desired range. Also, drawing 120 is slightly less likely because the event of the "direct hit" (where no rounding is required) is missing (which is kind of a sloppy explanation, but I hope y'all get what I mean).
this, entirely
Mathematician: But the measure of any finite set is zero! So the probability of hitting 0 is zero
Engineer: According to the IEEE 754 specification, there are exactly 2^52 double-precision floating point values in the range [1, 2). This range is used for the initial rand() computation, since they all fall between two consecutive powers of 2, and therefore all have the same “exponent” under the standard (unlike numbers from 0 to 1, which must worry about cross-over points such as 1/2, 1/4, 1/8, etc… on their way down to zero, with the smallest possible exponent being -1022)
This initial random number is then adjusted by subtracting 1, which gives a uniformly pseudorandom number in the range [0, 1). Thus the probability of returning exactly zero is 1 in 2^52 (certainly far smaller than the experimental variance in our calculation of pi, but it’s still worth writing correct code)
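That [1, 2) construction is easy to replicate directly. Here's a stdlib-only Python sketch; the bit layout assumed (sign 0, biased exponent 0x3FF, 52 mantissa bits) is the standard IEEE 754 double format:

```python
import random
import struct

def uniform01():
    """Uniform double in [0, 1) built from the IEEE 754 [1, 2) range.

    Sign bit 0 and biased exponent 0x3FF make any 52-bit mantissa a value
    in [1, 2); subtracting 1 gives one of exactly 2^52 outcomes in [0, 1).
    """
    bits = (0x3FF << 52) | random.getrandbits(52)
    return struct.unpack("<d", struct.pack("<Q", bits))[0] - 1.0

samples = [uniform01() for _ in range(10000)]
print(min(samples), max(samples))
```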
The probability of 2 random numbers being coprime is only 6/π² if you consider _ALL_ numbers, like, from 1 to infinity. If you only consider numbers within a specific range (such as from 1-120 or 42-63360 or whatever), the probability will be slightly higher than 6/π², and you'll get an experimental approximation that likely undershoots pi. Not sure if that was mentioned or not. Edit: yes it was 8:41
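The finite-range effect is small but checkable exactly: a quick exhaustive count over 1-120 (stdlib only, no sampling noise) shows how close the finite-range coprime probability sits to 6/π²:

```python
from math import gcd, pi

N = 120
coprime = sum(1 for a in range(1, N + 1)
                for b in range(1, N + 1) if gcd(a, b) == 1)
p_exact = coprime / N ** 2   # coprime probability for two fair d120s
p_limit = 6 / pi ** 2        # the limiting probability, ~0.6079
print(p_exact, p_limit)
```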
10X10X5 is a Parker cube
I know right, I was like "why did he call that a 'cuboid' instead of a rectangular prism," but now I understand ;)
Next year throw darts at a circle
Carrot Slice th-cam.com/video/M34TO71SKGk/w-d-xo.html
Related videos list: Calculating Pi with Darts from Physics Girl
If you happened to get really lucky and land on exactly 304 coprime pairs out of the 500, it would give an approximation for pi of about:
3.1414043... which is astronomically close :).
I ran it on a D120 2500 times and got within 0.6% on the first try. You're the best mathematician of our time Matt! I love watching your videos!
As soon as you said 6/π², I thought of the Basel problem, but I couldn't prove it before you got to that point in the explanation. But once again, we see that familiar numbers do indeed indicate a connection.
Right now I'm gonna write some code so that the range of random numbers generated is much larger. Thanks Matt, this will be a good test since I'm only a beginner in coding
Please notify me when your code is done. I want to see it ;)
Shambo Basu in for results
Shambo Basu hahaha that was my idea too, we can compare our programs when we're finished :)
Mathematica makes things so fast: twitter.com/IsYitzack/status/841333878774222848
IsYitzach Reaching the answer is not my objective, it is just to test my Java coding skills
correction 12:28 63 is 3*21 not 3*31
+Nathan James You're right! How silly of me. I've added that to the corrections.
and you still wrote it down incorrectly ;>
+standupmaths now you made a mistake in the corrections, you wrote 61=3*21 haha
+standupmaths you've completely fucked up that edit in the comments, makes absolutely no sense.
+standupmaths Now the correction in the description is wrong, it says 61 instead of 63.
3:47 Yes, 169 is a great darts score. It's also impossible to score 169 in just one visit.
I was so happy when you were at the science fair. I was like, isn't that the guy from standupmaths? Thank you, you really made my day :)
21:25 pro-tip: use C1:C to specify entire C column starting with C1; or C1:1 to specify entire 1 row starting with C1
2 years and almost 2,400 comments later I'm sure someone has already suggested this, but you can use =RANDBETWEEN(1,120) for a much easier way to get a random integer between 1 and 120.
You can also simplify your coprime formula to =(GCD(A1,B1)=1)*1
-Excelphile
Tip: If you press F9, Excel will recalculate your table and update the randoms.
Or press delete
"Possibly the most convoluted way I've ever done it" - and then 3blue1brown appears, calculating pi by counting block collisions.
He was so tired of rolling his dice at 12:30 that he said 63 was broken into factors of 3 and 31. I love that this mathematical mad man did this video.
I think this is the first time I've ever heard someone say "cuboid" instead of the bland "box". Definitely an underused and underrated word.
but this was uploaded on the 13'th
matt you had one job
Classic Parker Square
Well, he underestimated the value of pi, so he compensated for it with the upload date
Time zones. There's a possibility that he uploaded it when you were still on the 13th.
Weird, I almost didn't notice his accent. Something tells me he's on the other side of the world. OR he was just being nice and allowed people to prepare a day in advance in order to shine on actual pi day ;)
Pharaoh on LFS
It's most likely the former though
Stand-up maths 2030: calculating points by baking 100 pies of random diameters
Hey Matt,
I know I'm months behind on seeing your awesome video, but I have a question; Could one potentially generate more results with fewer rolls by, say, rolling 3 'random' numbers (a, b, and c) and comparing 'ab', 'ac', and 'bc' for cofactors and coprimes? Or, would that introduce some kind of error that defeats the purpose of such a method?
Amazingly, this does not change the expected number of coprime pairs. This is a consequence of the fact that expected values simply add up regardless of dependency. It does, however, change the standard deviation.
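That claim is easy to sanity-check with a simulation (the range and trial counts here are arbitrary illustrative choices, not anything from the video):

```python
import random
from math import gcd
from statistics import mean, pstdev

random.seed(1)
M, TRIALS = 120, 4000

def frac_from_three_rolls():
    """Coprime fraction of the 3 pairs (ab, ac, bc) made from one trio of rolls."""
    a, b, c = (random.randint(1, M) for _ in range(3))
    return sum(gcd(x, y) == 1 for x, y in ((a, b), (a, c), (b, c))) / 3

def frac_independent():
    """Coprime fraction of 3 fully independent pairs (6 rolls)."""
    pairs = [(random.randint(1, M), random.randint(1, M)) for _ in range(3)]
    return sum(gcd(x, y) == 1 for x, y in pairs) / 3

shared = [frac_from_three_rolls() for _ in range(TRIALS)]
indep = [frac_independent() for _ in range(TRIALS)]
print(mean(shared), mean(indep))      # means agree: expectations add up
print(pstdev(shared), pstdev(indep))  # spreads typically differ a little
```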
Waaaay more entertaining than this should have been. Glad you pulled in the Excel how-to. Cheers!
I have actually a nostalgia feeling to this video... one of many and many I watched in one stage of my life, but it took me straight back
Brown paper is too mainstream; it's all about that brown table, dawg!
Who the heck uses brown paper....
A buncha filthy nerds, that's who.
NERD PRIDE
When I got here there were 1459 comments. Did you know if you take the sum of the cubes of the digits of 1459, and then take the sum of the cubes of the digits of the result, you'll get back to 1459?
Try it!
I wrote a Python script for the same which did 100k iterations with a random number range of 0 to 1M, and got 3.1416 as the result. It was amazing to see how the result converged towards 3.14 as the number of iterations increased ☺
Cool!
I wish you'd talk more about the sneaky Pi proof. You explanations make things much easier for me to understand!
you can also put points in a grid in a circle/square and then do the ratio between points where x^2+y^2 is smaller/bigger than 1. funnily enough this actually corresponds to the integral of the volume of a tube. (unit circle 1 unit above the ground and then volume underneath that)
So can we use Parker pi to calculate the circumference of a Parker circle?
x is equal to 6 over the parker square of pi
"Not a very rigorous proof..."
A Parker proof? ;)
the idea that most numbers are big just tripped me out, thanks.
I'm most amazed about you figuring out that the error was 2.8% in just a second or so. That's crazy mental maths
So, I just had to pull out a cpp file and do it myself. Using numbers ranging from 1 to 2^16 and 1000000 samples, I was able to get PI with 3 digit precision. Kinda impressive considering you can't even notice the time it takes to execute.
Yeah how accurate does it do it for a billion samples? Or 4 billion? Or 2^32 (0xFFFFFFFF and one more)
Oh God, it hurts so much seeing someone not closing the marker pens they're not using.
If somebody wants to try it in python:
#pi from gcd(m,n)
import math as m
import random as rand
max = 10 ** 6 #maximum random numbers
n = 10 ** 3 #number of tries
count = 0
for i in range(n):
    if m.gcd(rand.randint(1, max), rand.randint(1, max)) == 1:
        count = count + 1
pi = (6 /(count/n)) ** (1/2)
error = (pi - m.pi) / m.pi * 100
print('my PI = ' + str(pi))
print('error = ' + str(error) + '%')
I don't know about python 3, but this doesn't work in python 2. You have to import gcd from fractions, and (1/2) would evaluate to 0 because that's integer division. Same for count/n.
cryptexify
it's a py3 code, yeah
You could use floor(rand()/rand()) to get a random positive integer that goes up as high as the precision of rand allows. I'm not certain if the probability is uniform, though.
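For what it's worth, that distribution is not uniform. A quick stdlib-only check (with the denominator shifted to (0, 1] so we never divide by zero) shows 0 comes up about half the time and 1 about a quarter:

```python
import random

random.seed(0)
N = 100_000
# numerator in [0, 1), denominator in (0, 1]: never divide by zero
samples = [int(random.random() / (1.0 - random.random())) for _ in range(N)]
p0 = samples.count(0) / N
p1 = samples.count(1) / N
print(p0, p1)   # roughly 0.5 and 0.25
```

For uniform X and Y, P(floor(X/Y) = 0) = P(X < Y) = 1/2, and the tail falls off like 1/(2k²), so small values dominate heavily.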
Thank you Matt. This is going to make the terrific activity when my Year 7 class studies Probability at the end of the year - it links together the work we will have done on prime numbers, and on pi - and we'll have an Excel exploration at the same time.
Can anyone explain why those dice have only III and VIII?
No one else wanted them, so he got them super cheap?
They are part of the non-transitive grime dice sold on maths gear. I'm sure there is a video on them.
Matt Parker to be the next Doctor!!
If only he had been.
Nope. We are supposed to suffer through another season. 13 is an unlucky number.
Who?
To recalculate or re-run formulas in excel, simply press F9. At least it works in windows which has a Function Key row.
Selenium Glow thx it is so much quicker now
Absolutely love you dude, makes maths so much more interesting
This is, I have to say, a mind-blowing proof. I am actually from an Electronics and Communication background, and Fourier series and transforms are the basis of our job 😒. I know that using Fourier we can get pi^2/6 = 1/1^2 + 1/2^2 + ..., and we use it many times in electronics. When you first showed the probability, I recognised that familiar result, paused your video and tried to solve it that way. I failed, and then went through your proof. It seemed mind-blowing to me.
Nice one. I guess the pi estimate would improve with a larger sample size. Now try again with 5000 dice.
The moment that 6/pi^2 formula came up, I was laughing my ass off, because it occurred to me why you were rolling giant dice, and how you would approach the problem.
Pi is used in a lot of really weird stuff, lol.
pst... Use RANDBETWEEN in Excel to generate a random integer between two numbers.
Glad someone else pointed this out. Didn't wanna be that guy. ;oP
that what i use as well :)
Psst... Most non-cryptographic random number generators are not that random...
@1992jamo - Just how random do you want for this. I mean, you're not wrong but...
1992jamo The function RANDBETWEEN is just a simpler way to do what Matt did in the video. I don't mean to say that a computer can generate a truly random number.
WOW! Escher _AND_ origami in the background? Truly a man of taste!
Happy PI-Day. This video was awesome! I actually replicated it with slightly bigger max. rnd.numbers (100Mio.) and some more trials (5k) - estimation error is around 0.00-something. Loving it! Thx :)
Wow, just take a look at the side of the two boxes! Look at how many dice with a 3 are facing us!
What are the odds of that?
Look at how many 3's are facing us on the pile they are picked from. Not so surprising really :)
Colin Mackinlay but he throws them in...
The dice have only faces 8 and 3. What I want to know is what purpose they serve; why would anyone need a set of 500 d2's or d3's that only show 3 or 8?
My guess is in rolling pairs. 2d6 typically averages exactly 7. You can get this by adding the lowest and highest possible rolls from each die. 3 and 8 make that 11, however. While that would generate some very powerful D&D characters, I don't know what the exact purpose of these is.
12:24 "63, which has a 3 in it, Its 3 by 31..." I don't know if I trust his math.
Marginal return on effort seems to be limited here. By that I mean, increasing the max number only gets you a few fractions of a percent closer to the real value. Which compares with the infinite sum from last year, where adding a lot of terms adds only a minor and diminishing correction each time.
For next year's Pi day, divide 32 by 10, then come to America and visit the Indiana legislature and try and convince them that it should be changed to 3.2 so you'll be correct. Again.
Hey Matt, keyboard shortcut for ya. To lock a cell in a formula, in lieu of typing the "$" , you can use F4. Multi press of F4 will cycle through the locking options (both row and column, column, row).
I'm currently running a program that generates pi the way described in the video, and N > 10⁹ pairs in, I'm already at 3.1415. And it's not a lucky stopping point; it has been close to this for a long time. This is amazing. Hail maths, and hail you, oh Matt.
if he keeps doing this every year and averages his results they will get there eventually
He would have had to count less if he counted just the dice in the cofactor box.
This is a little bit of a Parker pi, isn't it?
For 500 trials I expect a relative error of 5%. It's spot on.
1:35 “both divisible by … 3”
Throws dice into box, it spins and land with only the number 3 visible
Something I can't wrap my mind around: how can you describe fairly and randomly picking a number from the unbounded set of all integers?
From a practical perspective, I’m not sure how you would actually generate a random, unbounded integer (via a computer, or otherwise). However, if you’re interested in what “probability” means for an infinite, countable set (like the integers), then see the following:
en.m.wikipedia.org/wiki/Natural_density
(Which is studied as a part of en.m.wikipedia.org/wiki/Probabilistic_number_theory )
As an example, the probability of randomly choosing an even number turns out to be exactly 50% with this definition, as we might hope
(However, if your infinite set is “uncountable,” such as the continuous number line, then you would want to learn about Measure Theory and Probability Theory)
Note: Every finite set will have zero probability, but various infinite sets will have non-zero probability
Basically, you just have to make sure that your number generator results in probabilities that match the Natural Density for every subset of the Natural Numbers
(You could easily include negative integers by randomizing the sign, but Natural Density itself only cares about the Natural Numbers, which start from zero)
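Natural density is easy to play with numerically. A stdlib-only sketch that approximates it by counting up to a finite cutoff (the cutoff 10,000 is an arbitrary choice):

```python
from math import isqrt

def density_up_to(pred, n):
    """Fraction of 0..n-1 satisfying pred; approximates the natural
    density as n grows (when the limit exists)."""
    return sum(1 for k in range(n) if pred(k)) / n

d_even = density_up_to(lambda k: k % 2 == 0, 10_000)
d_square = density_up_to(lambda k: isqrt(k) ** 2 == k, 10_000)
print(d_even)    # 0.5: the evens have density exactly 1/2
print(d_square)  # 0.01 here, heading to 0: perfect squares have density 0
```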
@@Muhahahahaz thanks
What I'm taking away from this is that trying to calculate pi makes you crazy.
Or that being crazy makes you calculate pi?
@@rosuav or both
"that may have been my largest understatement" literal lol
Wow how exciting!!!! This exactly what I wanted to watch for more than 2 minutes on my Tuesday afternoon
I had this quite fun assignment for maths class on my uni where they asked us to approximate pi anyway you wish as long as you could back up your methods.
I decided to go by the circumference of a circle being pi*d or in this case we'll use pi*2r and then continued to use the circumference of polygons of "radius" 1 with increasingly more edges.
So if you make it easy on yourself and take a triangle, for example, you can divide it up into 3 pieces by drawing a line from the middle to each corner. Every one of these sections then has an angle of 360/3 degrees. For the circumference, you can name the two "radii" of 1 a and b and slap the cosine rule against it. So in this case sqrt(1^2 + 1^2 - 2cos(120)) =~ 1.73. Now triple this for the number of edges and you've got yourself your first-order approximation of pi*2r = 2pi = 5.19 -> pi = 2.595. Not that great, but for a triangle not all that bad either.
Now I found that for any given polygon with N edges you can write up this formula for its circumference as a function of N. So the 360/3 would logically become 360/N for an Nth polygon and at the end instead of multiplying it by 3 you'd multiply it by N instead. This leaves us with this beauty of an equation: N*sqrt(2-2cos(360/N)). This allowed me to quickly drag and drop a column in excel showing the progress of polygons approximating pi for each order of N. To have a nice little taste of the result we'll plug in N = 100 and see where we get:
100*sqrt(2-2cos(3.6)) = 6.282.... -> pi = 6.28215/2 = 3.141075. A hundred terms further we suddenly have pi accurate to the third decimal. Now obviously that's very slow progress but considering I had a very simple algorithm to follow I could easily bump up the value of N and reach quite a decent accuracy in this way. For example at the 1000th term we get 3.141587, which rounds to two more decimals already. As we allow N -> infinity we of course find the exact value of pi eventually.
Nothing too groundbreaking there, just thought it was a fun assignment and I was pretty stoked about my method actually working out :p
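That polygon recipe translates to a few lines of Python (a sketch of the method described above, not the original spreadsheet):

```python
from math import sqrt, cos, radians, pi

def polygon_pi(n):
    """Half the perimeter of a regular n-gon with 'radius' 1, using the
    cosine rule on one slice: side = sqrt(2 - 2*cos(360/n degrees))."""
    side = sqrt(2 - 2 * cos(radians(360 / n)))
    return n * side / 2

for n in (3, 100, 1000):
    print(n, polygon_pi(n))
```

For n = 3, 100 and 1000 this reproduces (up to rounding) the 2.59..., 3.1410... and 3.141587 values from the comment, and polygon_pi(n) tends to pi as n grows.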
Err... I do know how to program but I don't know how to use Excel lol :D
same
You could probably work out the functions you'd need in a couple hours at most. Then just know that you can drag to duplicate code, other than using a different row.
PI DAY OMG I FORGOT
JiaMing Lim Still one day.
Yeah, one day to go here
JiaMing Lim u suck
Thomas Southgate, Rude
It's okay, just wait until UK pi day which is on the third of Fourteember.
"To prove..." - we believe you!!!
13:02 At my job I often have to sort things by tossing them into different bins for several hours, and I know that feeling; I think it's common for that kind of task. You lose track and instinctively put something in the wrong one despite knowing it doesn't go there, only to realize afterward.
Fun way to get a very close estimate with excel: Do a target-value analysis of the final value for pi and make the variable some cell that has no influence on the result. Excel will keep searching until it gets within 0.001 of the target value (PI)
"exactly the same thing yourself..."
me: lol, too late I did it in c
him: "...in excel"
What if you alternated rolling dice so each roll was used in 2 calculations. i.e gcd(a,b) , gcd(b,c) , then gcd(c,d) etc. Would that change the probability?
jonlottgaming, I do not know, but SorenEragon had a similar question and according to SmileMPV the probability is the same, but there is a different standard deviation.
this is a Python script that does the same thing:
import random
import math
# fractions.gcd was removed in Python 3.9; math.gcd works in Python 3.5+
intsize = 1000000
a = []
b = []
length = 10000
portion = 0
for i in range(length):
    a.append(random.randint(1, intsize))  # roll from 1 so 0 can't appear
    b.append(random.randint(1, intsize))
    c = math.gcd(a[i], b[i])
    if c == 1:
        d = 1
        portion = portion + 1
    else:
        d = 0
    print(a[i], b[i], d)
pi = math.sqrt(6 / (portion / length))
print("")
print(length, portion)
print("Pi is", pi)
from random import randint as x; from fractions import gcd;(lambda m,n:(6/(sum(gcd(x(1,m),x(1,m))==1 for i in range(n))/n))**0.5)(1000000,10000)
Bendik Mihle Hansen
soooo.... you print out 20 000 random numbers...
WHY?!?!?
oh, and a pi approximation ofc.
# Here's how I did it in Ruby. There might be a more compact way to create the Enumerator but I've never worked with streams of randoms before so I just wrote it by hand.
A = 39393939 # max random value
N = 39393939 # iteration count
coprimes = Enumerator.new { |y| loop { y << (rand(1..A).gcd(rand(1..A)) == 1 ? 1 : 0) } }
puts Math.sqrt(6.0 / (coprimes.take(N).sum / N.to_f))
K 466 Thanks
It's pseudo-random. It's better to use numbers from random.org; they're more random.
Very cool video! Here's a suggestion for next Pi Day: Run a Monte Carlo simulation of pairs (x, y) of random real numbers from -1 to 1 (or 0 to 1 if you prefer) and count how many are inside the unit circle, i.e., with x^2 + y^2 < 1. The fraction should be π/4, so multiply the final result by 4 to obtain an estimate of π.
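That dartboard method is only a handful of lines. A quick sketch in Python (the sample size is an arbitrary choice):

```python
import random

random.seed(42)
N = 100_000
# points uniform in the unit square; count those inside the quarter circle
inside = sum(1 for _ in range(N)
             if random.random() ** 2 + random.random() ** 2 < 1)
print(4 * inside / N)   # ~3.14
```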
done it. It works. I think Matt prefers something a bit more spectacular though. ;-)
Many sided dice are also notoriously uneven in the frequency of landing on particular sides.
I tried this on my computer 3.14 billion times and got 3.1416. I think I messed up someplace.
Isn't GCD greatest common *divisor* and not greatest common denominator?
Well you can use it for both anyway. Greatest common denominator is done using greatest common divisor (via the least common multiple, whatever)
6:32 “2 divides half of all numbers.” Someone needs to brush up on his George Cantor.
So much unjustifiable rage at Matt mixing up denominator and divisor
The rand() function in excel is limited to a precision of 1 x 10^-9, meaning that the numbers to randomly choose from get capped for a random pool of numbers between 1 and n > 1 x 10^9. You will see the number of coprimes found decrease dramatically if you step above 1 x 10^12, significantly distorting your results if using the rand() function. The max number compatible with the gcd() function is 2^53, so if you want to push excel to the limits you will need to use the randbetween() function for your trials, i.e. randbetween(1,2^53).
Enjoyed this. Thanks. Terminology quibble, which may just be a US vs UK thing: you kept referring to gcd as "greatest common denominator", whereas in my experience it should be called the "greatest common divisor". In fact I have more often seen gcf, for "greatest common factor" used in this context.
Jonathan Beeson How are there not more comments on this verbal gaffe? There's no such thing as a greatest common denominator; the LEAST common denominator is what you use to add fractions like 1/6+3/10 (least common denominator is 30; 5/30+9/30=14/30=7/15). Using a greater common denominator than 30, like 60 or 300 or one of the other infinitely numerous multiples of 30, is just a silly waste of time.
I'm starting to think Matt Parker doesn't actually know anything about math. I bet he learned the Pythagorean Theorem from the Scarecrow in The Wizard of Oz.
Matt was obviously thinking "gcd", as in greatest common divisor, but he muddled it with lcd, least common denominator, so "divisor" came out "denominator". After he said it once he just kept repeating the same mistake.
Midwest USA here...I learned it as largest common denominator.
+Bill Schlafly No one in the class tried to correct the teacher's error? It would have been fairly easy to point out that the text book said "divisor" or "factor", not "denominator".
Here in New Zealand we were taught it as greatest common denominator, but I admit that it doesn't really make sense. What we were taught instead of least common denominator is least common multiple.
It's just a regional thing, Matt clearly knows what it is mathematically, it's just that the words are different.
"169... which is a great score in darts"
It is an impossible score (with three darts). No idea if that's why you smiled while saying it?
Wait did he say 3 and 31 is 63?
3+31=34, Matt
Wrote the code in C#. I found that I could calculate Pi to six decimal places (the precision of a floating point number) with an upper limit on random numbers at 2,147,483,647 (max value of a 32-bit signed integer) and a trial size of 78,283.
Given it took Matt 1 hour, 58 minutes, and 52 seconds to roll 1,000 times, to get Pi to the same accuracy as my trusty computer did (with the same rolls and no errors), Matt would have been rolling for just over 6 days and 11 hours, and with a particularly monstrous 2,147,483,647-sided die.
Excel also has a RANDBETWEEN(min,max) function.