Short term thinkers’ motto - Strong Opinions, Weekly Held
"Test, amend and refine!"
Why stop at quantifying one level of certainty? Why not quantify your level of certainty about your level of certainty?(www.lesswrong.com/posts/6LT4g7vEGAPWv57eF/measuring-meta-certainty)
Sometimes it's more practical to not try to measure your certainty, but sometimes it is more practical to measure your certainty or your meta-certainty (or maybe even your meta-meta-certainty?).
My heuristic for deciding when you should or shouldn't quantify is asking yourself "How important is it that I get this prediction right?". The more important it is the more you should quantify. If it's a loose smalltalk with your neighbor maybe don't try to quantify your uncertainty, if it's a country-wide policy decision, maybe quantify up to the level of meta-certainty.
I never meta-certainty I didn't like. ;)
Showing the full probability distribution of our estimates seems overkill for Saffo's stated purpose of rapidly deploying a good-enough model of the future to make decisions - if you're expecting the result to change wildly every time a member of the team contributes, simply using Kent's wide linguistic categories ("almost certainly," "probably not") is probably enough to get the room's temperature & encourage some analysis without literally aggregating probability functions, neh?
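For what it's worth, Kent's word-to-number mapping can be sketched as a simple lookup. The numeric ranges below are my rough paraphrase of his commonly cited table, not exact quotes from his essay:

```python
# Sherman Kent's "words of estimative probability" -- the boundary
# numbers here are my approximation of his published ranges.
KENT_CATEGORIES = [
    ("almost certain", 0.87, 1.00),
    ("probable", 0.63, 0.87),
    ("chances about even", 0.40, 0.63),
    ("probably not", 0.12, 0.40),
    ("almost certainly not", 0.00, 0.12),
]

def to_kent_words(p):
    """Map a numeric probability to one of Kent's linguistic categories."""
    for label, lo, hi in KENT_CATEGORIES:
        if lo <= p <= hi:
            return label
    raise ValueError("probability must be in [0, 1]")
```

So a team member saying "about 95%" and one saying "about 25%" land in clearly different buckets ("almost certain" vs. "probably not") without anyone aggregating distributions.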
In Canadian refugee law, the legal test applied when assessing a claimant's future risk of persecution is a "serious possibility". This was always known to be lower than the civil standard (i.e., balance of probabilities), but about a year ago one Federal Court judge stated in a decision that he thought a "serious possibility" of persecution meant 35-40%. I was surprised, as I'd previously assumed a serious possibility meant something like 15-20%. And here I thought THAT was a significant discrepancy.
Great episode, as always. I think you're quite right that putting numbers and standard language to estimates helps prompt inquiry into the estimate, eliminating some of the fuzziness of the communication. Somewhat more cynically, though, I think the unstated corollary here has to be "when we assume arguments are in good faith". Your own example of someone pressing for a 100% guarantee of the result of wearing a mask, one way or the other, seems to me more likely to be a person trying to nail down an absolute statement so that they can refer to it in the future as counter-evidence, or as evidence to reinforce their own existing opinion. I hope I'm just being old and overly cynical, but the argument in question about masks seems like the sort of example where people often wouldn't be arguing in good faith.
I'm not sure that pressing for a guarantee is usually bad-faith argumentation (although it certainly can be!). I was specifically thinking about patients who simply can't deal with the uncertainty of medical diagnosis - it's a common practice in medicine to carefully deduce a patient's tolerance for uncertainty before informing them about what they (probably) have & what medicine to take to (possibly) help with it.
This also rings true for many places I've worked in the past - my boss never wants to know the percentage certainty I have that the thing will be done on time. ;)
Great video! This is my strong opinion, strongly held :)
Glad you liked it! 😁
Great writing in this one. Well done, Josh.
Thanks man! :D Glad you enjoyed it!
One thing I learned in the used book trade is that you can't reduce judgement to an algorithm. The training budget for new employees was just paying for their rookie mistakes when they bought. The method was to just let them do it, then go over what they bought and tell them which purchases were sure things and why, which ones were mistakes and why, and which ones even 20-year veterans weren't sure of. Then Amazon came along and ruined everything.
I learned to recognize that "management experts" and their theories are at best a list of things to consider, but more often a load of hoohey. (Dag nabbit, I thought I'd found a cool retro term to substitute for my usual swearing, and the blasted red squiggle has foiled me.)
Yeah, I tend to think of management "algorithms" as useful cognitive tools to employ in certain situations, just to have some useful lens thru which to view things. Good judgment is hard to come by!
I think there's an amount of discipline and practice that needs to be present to make quantitative estimation valuable. Intelligence analysis definitely cultivates such a skill, in an environment where it can be nurtured by experts. My neurosis compels me to make considered estimations even about trivial matters, and I am starkly aware that I'm both deeply in the minority and still terrible at estimation.
Adding error bars to an estimation is the concept I find most immediately useful. It forces a more careful consideration and may cut down on rectal RNG.
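As a toy illustration of what "error bars" buy you (my own sketch, not anything from the video): even a crude normal-approximation interval for an estimated rate forces you to confront how little data backs the point estimate.

```python
import math

def proportion_ci(successes, trials, z=1.96):
    """Rough ~95% normal-approximation interval for an estimated rate.

    A crude sketch of 'adding error bars': report a range, not a point.
    (The normal approximation is shaky for small samples; it's used here
    only to show how wide honest uncertainty is.)
    """
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return max(0.0, p - half), min(1.0, p + half)

# "7 of my last 10 projects shipped on time" sounds like "70%"...
low, high = proportion_ci(7, 10)
# ...but the interval spans roughly 42% to 98% -- quite a spread.
```

Saying "somewhere between 40% and 98%" out loud is a lot more honest than "70%", and it tends to prompt exactly the kind of follow-up analysis the video describes.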
> rectal RNG
en.wikipedia.org/wiki/Scatomancy
It's a fair point that pulling a percentage out of...thin air is of limited utility, & it's valuable to practice some sort of due diligence before assigning probability estimates.
@@THUNKShow Ha! Gonna tell my diary about that one, that's gold.
There is a severe probability that THUNK will someday grow immensely... but those numbers are just coming from my intuition.
^.^ Your confidence is reassuring! Thanks!
I was going to make a joke about Twitter and other social media sites, but you kind of covered that in the video
Love your videos. Very useful for my attempt to write a practical book on what (little) we can do to be more grounded and clear. However, I think it would be helpful for many foreign viewers if you could slow the videos down by some 5%, making them easier to follow?
There's a hidden speed setting that might help - if you click the gear icon in the bar under the video & "Playback speed" you can select 0.75x and enjoy it a little slower!
@@THUNKShow Hi. Thanks. I know. My comment wasn't meant for me, but for non-English speakers in general. :)
Doesn't probability only apply to populations? I struggle with the risks linked to probabilistic assertions about specific events. E.g., you will never be 22% pregnant, etc. Any thoughts?
You will never be 22% pregnant, but it makes sense to say that e.g. given your symptoms there is a 22% chance that you are pregnant (i.e., 100% pregnant), right?
Similarly, a specific coin toss will never land 50% on heads. The number 50/100 is about us _not knowing_ how it landed, inferred from what we know about the results of previous (known) coin tosses.
Epistemic probability is about putting numbers on _our uncertainty_ about a specific event, not about putting numbers on the event in itself.
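A tiny sketch of that distinction (my own illustration, not from the video): the toss below is physically settled the moment it lands; the only thing that changes when we look is the number describing *our* knowledge of it.

```python
import random

random.seed(42)  # fixed seed so the example is reproducible
toss = random.choice(["heads", "tails"])  # the coin has landed; the outcome is fixed

# Before looking, our epistemic probability of heads is 0.5 --
# a statement about our ignorance, not about the coin.
p_heads = 0.5

# After observing, the very same fixed event gets probability 1 or 0.
# The coin never changed; only our information did.
p_heads = 1.0 if toss == "heads" else 0.0
```

The same move works for the pregnancy example above: the person is 0% or 100% pregnant; the 22% describes the observer's state of information given the symptoms.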
It strikes me that how clearly we go about introducing statistics into the colloquial lexicon will be one of those linguistic milestones of our era. Dang fine essay, my dude.
I hope so! It seems like a good idea...probably...
:D Thanks Ulf!