I don't know why YouTube recommended me this video, but nice video, man. It reminds me of when I used to take statistics classes in college.
Excellent video! Keep it up
In my class, we have to use generating functions to prove this
I haven't seen it done that way before! Now I gotta check it out!
Let S be the sample space and let |·| denote the cardinality of a set. By definition:
For any event A ⊆ S:
p(A) = |A| / |S|
(i.e., the number of samples in event A divided by the total number of samples in the sample space)
Then:
p(A ∪ B) = |A ∪ B| / |S|
But note that:
|A ∪ B| = |A| + |B| - |A ∩ B|. This holds because if the sets are not disjoint, then adding their cardinalities counts some elements twice, so we must subtract the ones we over-counted, and those are exactly the elements that occur in both sets. Thus:
p(A ∪ B) = (|A| + |B| - |A ∩ B|) / |S|
= |A| / |S| + |B| / |S| - |A ∩ B| / |S|
= p(A) + p(B) - p(A ∩ B)
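The identity above is easy to sanity-check numerically. Here is a minimal sketch on a small finite sample space with equally likely outcomes (the particular sets chosen are just illustrative):

```python
# Sanity check of p(A ∪ B) = p(A) + p(B) - p(A ∩ B) on a small
# finite sample space with equally likely outcomes.
from fractions import Fraction

S = set(range(1, 13))               # sample space: outcomes 1..12
A = {s for s in S if s % 2 == 0}    # event A: even outcomes
B = {s for s in S if s % 3 == 0}    # event B: multiples of 3

def p(E):
    """p(E) = |E| / |S| for equally likely outcomes."""
    return Fraction(len(E), len(S))

lhs = p(A | B)                      # p(A ∪ B)
rhs = p(A) + p(B) - p(A & B)        # p(A) + p(B) - p(A ∩ B)
assert lhs == rhs
print(lhs)                          # prints 2/3
```

Here |A| = 6, |B| = 4, and |A ∩ B| = 2, so |A ∪ B| = 6 + 4 - 2 = 8 and both sides come out to 8/12 = 2/3.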
How it feels to chew 5 gum
What?
@snellbrosmath How it feels to chew 5 gum
Here is my proof:
Definition
QED
Looks like a lot of my students' geometry proofs 😂