I’m so glad content like this is still being produced, keep it up man ❤
Your lectures, like probability itself, are not subjectively brilliant; they are objectively exceptional.
Many thanks.
I've been appreciating your lectures, Sergeant Briggs. In my stats and probability courses in college, I always felt that we never talked enough about the epistemological foundations of probability and statistical models. I could understand the proofs of the theorems, but it never felt as though there was a consistent, cogent philosophy of statistics, probability, and science motivating the techniques we were learning. And so I always felt as though I was just doing math without fully understanding, or having a clear idea of, what it was conveying. That entire explication of the importance of conditioning and the cognitive error of subliminal conditioning is something that should have been discussed in my courses but never was. I've enjoyed every lecture I've tuned in to so far, and I will eventually watch all of them. Your case here for an objective notion of probability is, I think, also well-reasoned.
Many thanks for watching! Glad to have you with us.
Not only is probability subjective; all of mathematics is essentially subjective, a subjective mental activity.
So is your complete understanding and perception of reality… does it help you now?
No, it isn't. Mathematical results are independent of those who discover them.
That's what your perception tells you, and that is subjective experience, so…
But I know what you mean; that's not the point.
Suppose I ask (before the US election of 2024), "Will the probability of nuclear war be greater if Kamala Harris wins or if Donald Trump wins?" As a statement of current [2024-09-09] reality, it's moot (the election hasn't occurred), but I think that the question is still philosophically quite valid. Do you disagree? Do you think this is a senseless, philosophically empty question? To me, it doesn't seem to fit nicely with your insistence that "All probability is of the form Pr(A|B)." (This should be obvious but would involve conditioning on mutually exclusive notions of B.)
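To put my worry in concrete terms, here is a toy computation (the joint probabilities are invented purely for illustration) showing how the very same event A receives different probabilities under two mutually exclusive conditioning propositions B1 and B2:

```python
# Toy numbers (invented): the same event A under two mutually
# exclusive conditioning propositions B1 and B2.
# Pr(A | B) = Pr(A and B) / Pr(B)

joint = {
    # (A occurs?, B) -> probability mass; masses sum to 1
    (True,  "B1"): 0.02,
    (False, "B1"): 0.48,
    (True,  "B2"): 0.05,
    (False, "B2"): 0.45,
}

def pr_A_given(b):
    pr_b = sum(p for (a, bb), p in joint.items() if bb == b)
    return joint[(True, b)] / pr_b

print(pr_A_given("B1"))  # 0.04
print(pr_A_given("B2"))  # 0.1
```

Both numbers are legitimate values of Pr(A|B); they simply condition on different, incompatible B's, which is exactly the situation the election question sets up.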
Dr. Briggs, I don't find this lecture persuasive, and here's why. I view probability as a chance of something happening *in the future*. Statistics is past-looking (data have been collected), but probability is future-looking. When you give examples of X + Y = 3 etc. (about halfway through your lecture), and when you state things like "there's no probability you have cancer" (at about 10:48), that's a description of one's ignorance of reality. Ah, but reality is *present tense*: it is what it *is*. Back to the question of the change in probability of nuclear war given two possible future events: This is a question that cannot be answered in the frequentist notion of probability, because obviously nuclear war just doesn't lend itself to frequentist sampling very well. In the Bayesian sense of probability, I think it's quite valid. It is okay for two distinct notions of probability to both exist.
It's a natural objection, but here's an example to show you why it doesn't hold.
On my desk yesterday (but not now), I had a quantity of change. How much?
This is in the past, but the proposition is unknown to you, hence 'random'.
See the earlier lectures.
@@WMB Dr. Briggs, I really do appreciate the reply, and I admit that I've jumped into this lecture series midway, not from the beginning. (YouTube suggested this, and it was interesting.) So I will make a real effort to listen to the series in order from the start, and perhaps then I'll be persuaded; but as of now I don't think that you've answered my question, at least not regarding the validity of the philosophical assumptions behind the question, "Will the probability of nuclear war be greater if Kamala Harris wins or if Donald Trump wins?" I specifically asked whether you think this is a philosophically empty question. Your reply about a quantity of loose change on your desk in the past doesn't connect the dots for me.
Put another way, suppose that I am a geologist (I'm not) and I say, "The probability of there being oil under this here hectare of surface land is pretty high" (and I could make up some number between 0 and 1 if I wanted to). Well, if you say that there is no probability here (either there's oil there or there isn't), then I could agree with you, because reality is what it is and we just have imperfect knowledge of it. That's how I see the, to me unknown, quantity of change on your desk yesterday: it is what it is; there is no probability. Again, present reality is present tense. But the possibility of nuclear war does exist, future tense, and while I can't give you a meaningful number, I can still say it's between impossible and certain, 0 to 1. So to me the nuclear war question is philosophically valid (with respect to the philosophy of probability), and I'm curious whether you agree or not. From the viewpoint of God, who knows everything in the future, it may be equivalent to the geologist's oil notion, in that nuclear war either will happen or it won't within a finite time interval from the present, and thereby there is no probability; but that seems to me a not-so-practical viewpoint for mere mortals like us.
One more comment, as an aside: I view statistics as a branch of philosophy (quantitative epistemology), not a branch of mathematics or a natural or social science. Just a comment.
Thank you.
This is completely wrong. Any probability is the result of a model. How you got to that model is subjective: what data you factor in, what you omit, and so on. Among all possible models, prefer the ones whose distribution of predictions has lower entropy. With a high-speed camera I can even determine the result of a coin flip. If you factor in the knowledge of which side starts up, that side has about a 51% probability of landing up.
I'm afraid you'll have to start from the beginning. We have answered all these questions, and many more beside, from the start.
@@WMB If you believe probabilities are objective, you have answered your own questions wrongly.
Change the model and you change the probabilities. Probability theory, and everything that comes out of it, is concerned only with the model. The link between that model and metaphysics is missing. There is a modeling observer, whom you don't acknowledge, who turns the real world into a model. That is, by definition, subjective.
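To make "change the model, change the probabilities" concrete, a minimal sketch (the 0.508 figure is the experimentally reported same-side bias; the function and model names are my own, purely illustrative):

```python
# Two models of the very same physical coin toss; only the model changes.
# 0.508 is the experimentally reported same-side bias.

P_SAME = 0.508

def pr_heads(model, heads_up_at_start=None):
    if model == "ignore_start":   # the usual "fair coin" model
        return 0.5
    if model == "use_start":      # conditions on which side starts up
        return P_SAME if heads_up_at_start else 1.0 - P_SAME
    raise ValueError(f"unknown model: {model}")

print(pr_heads("ignore_start"))      # 0.5
print(pr_heads("use_start", True))   # 0.508
print(pr_heads("use_start", False))  # ~0.492
```

Same toss, two observers, two probabilities: the number depends on what information the modeler chooses to include.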
@@WMB Let us suppose there are 9 blue and 3 red balls in a closed sack. You will guess a 3/12 chance of red and a 9/12 chance of blue, with a uniform distribution over the draw sequences. But I can see into the future, so my measure on the draws is a Dirac measure: I already know all the draws. I will guess a specific sequence that will be correct and thus outperform you.
Now suppose we are in a multiverse. I will guess correctly in each universe, because I can behave differently in each one, while your behavior is the same in every universe and you can only go by the probabilities. Now look at the cross product of all possible sequences of draws with replacement, crossed with your decision in each case. Your uniform guess over the outcome distribution suddenly looks very bad: you see a uniform distribution there, while I have a Dirac measure in each case, and low entropy.
Here is a simple example you can do in your head. We flip a coin twice. You say you have a uniform distribution over HT-HT, HH-HT, TH-HT, TT-HT, HT-TT, HH-TT, and so on, where H is heads, T is tails, and the third and fourth letters represent the choice you make: your guess at the first and second results. You say probability is objective, so it is arbitrary which of the choices (HT, TT, etc.) I make. Now, in the realization of the experiment, we will get HH, HT, TT, or TH. I can see the future, so I know it will be HT. So I will value the choices differently: while you assign equal goodness to all choices, I will assign 0 to every choice but the one that will come true.
Anything else is in between. Thus the metric you need for comparing models is entropy. If you have zero entropy, you have a good model and know exactly what will happen, with probability 1. If you have high entropy, as with a uniform or normal distribution, all you know is that you don't know much and need further research.
You can now compare different models. Say we make the chain longer, and one guy can predict one of the results telepathically, but not the rest. He will have a slightly better model than the uniform distribution.
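The entropy comparison can be sketched like this (the three distributions are invented to match the example: a uniform guesser, a seer with a Dirac measure, and a telepath with partial knowledge):

```python
import math

def entropy(dist):
    """Shannon entropy in bits; terms with p == 0 contribute nothing."""
    return sum(-p * math.log2(p) for p in dist if p > 0)

# Four equally conceivable outcomes, three observers:
uniform = [0.25, 0.25, 0.25, 0.25]  # knows nothing
dirac   = [1.0, 0.0, 0.0, 0.0]      # the "seer": knows the outcome
partial = [0.7, 0.1, 0.1, 0.1]      # the telepath: partial knowledge

print(entropy(uniform))  # 2.0 bits: maximal ignorance
print(entropy(dirac))    # 0.0 bits: certainty
print(entropy(partial))  # ~1.36 bits: in between
```

Lower entropy means the model claims, and risks, more: the seer's Dirac measure is maximally informative, the uniform distribution minimally so.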
In the case of tossing a coin, it has been confirmed experimentally that the chance of the side that starts heads-up landing heads-up is close to 50.8%. You see there that it is only a fair coin toss if you omit the information about which side is on top; you have to delete information from your model. Do you see now how your probability is subjective? If you pay attention to which side is up, you will change your model and have something better than the uniform distribution; if you don't, you will not. In either case, your frequentist observations will match the probabilities, according to the law of large numbers.
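A quick simulation (assuming the reported 50.8% same-side bias; seed and sample size chosen only for reproducibility) shows the frequentist observations matching the model's probability, per the law of large numbers:

```python
import random

random.seed(0)   # reproducible illustration
P_SAME = 0.508   # reported same-side bias

def toss(heads_up_at_start):
    """One toss: lands the same side up with probability P_SAME."""
    same = random.random() < P_SAME
    return heads_up_at_start if same else not heads_up_at_start

n = 100_000
heads = sum(toss(heads_up_at_start=True) for _ in range(n))
print(heads / n)  # close to 0.508, per the law of large numbers
```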
Not only that: it has been experimentally verified that some people can land heads more than 60% of the time.
We still want to model the real world. While, for a given model, the probabilities are pretty clear, which model you choose for the same real-world event is not. That is why probability is subjective: it depends on the observer.
@@WMB OK, I just watched to 11:00, and you even admit that. The clickbait title, man!!
I mean, you ADMIT IT!!