(Solved) 1 whole (1/0) or 1 base (0/1). N = 1 whole because they used square. To whole something, it has to be square. Yes, all whole numbers are square. Not in shape but in function. Circle is squared (L and H) (1 and 1). Two functions. 1 squared is 2. Parallelograms prove this. What else do Parallelograms tell you? Probability? Outcome?
The domino analogy could've been used for more than visuals here: each domino is a specific property like "the sum of integers from 0 to 999 equals (999² + 999) / 2", and tipping it over means proving the property. Induction lets you find a chain of dominoes that will knock over the one you're interested in, but if there isn't a domino in that chain that you can tip over, nothing happens and your target remains unproven
Fun fact, the binary splitting algorithm for computing factorial is also more efficient because large integer multiplication is faster if the two numbers being multiplied are roughly the same magnitude. Smart multiplication algorithms like Karatsuba and FFT work better in this case. Multiplying 23! by 24 is very "lopsided", but multiplying 12! by (24!/12!) is not.
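For the curious, here is a minimal sketch of the binary-splitting idea in Python (function names are mine, not from the video):

def prod_range(lo, hi):
    # Multiply lo * (lo+1) * ... * hi, splitting the range in half so
    # that both recursive calls return numbers of roughly similar size.
    if lo > hi:
        return 1
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    return prod_range(lo, mid) * prod_range(mid + 1, hi)

def factorial(n):
    return prod_range(2, n)  # factorial(24) == 620448401733239439360000

Each multiplication then combines two partial products of comparable length, which is exactly the regime where Karatsuba/FFT multiplication pays off.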
The story about Gauss was first told by his friend Wolfgang Sartorius von Waltershausen in a biography titled "Gauss zum Gedächtnis". He knew it, according to him, from very trustworthy sources. He wrote, however, that it was the sum of an arithmetic series that the students were told to compute; he does not say precisely which one.
You proved, in effect, that P(n) implies P(n+1). You'd find the algebra easier if you'd proved P(n-1) implies P(n). IME inductive arguments are easier if the goal case is n rather than n+1.
It also saves time, but for reasons that aren't immediately obvious.
1. Memory locality. A shallower stack depth means a smaller working set, so you are working with your CPU's cache, not against it.
2. Multiplication of two large integers is not a constant-time operation. Specifically, big integer multiplication is more efficient if you are multiplying two numbers of similar magnitude than if you're multiplying a very large number with a very small one (e.g. 23! multiplied by 24). The reason is that the big integer library uses smarter algorithms such as Karatsuba or FFT, which work more efficiently in that case.
I think the induction step, case 1 (if n is prime) actually _is_ the base case (or, the set of base cases). If you would structure the proof like that, then there wouldn't be a "hidden" base case.
If we assume the well-ordering principle (WOP) to be true, then induction has to be true as well.
Proof: we want to prove a certain proposition P is true for all natural numbers. Let S = {n in N : P(n) is true}. Suppose the WOP is true (meaning every non-empty subset of the natural numbers has a least element). Assume further we have successfully proven 1) the case n=1, and 2) the inductive step for every n: P(n) => P(n+1). Now assume S is not equal to N. Then the set M of natural numbers for which P is false is non-empty, and it cannot contain n=1 because we already proved P(1) is true. By the WOP, M must have a least element, say m+1. But then P(m) must be true, and since we proved the inductive step, P(m+1) must also be true. We arrive at a contradiction, so S=N and P is true for all n. Therefore WOP => induction is a valid proof method.
actually completely forgot about strong induction. my real analysis lecturer talked about it in passing, so we didn't study it that much (as generally you will be using the principle of induction over strong induction). was confused about how to prove the fundamental theorem of arithmetic in a recent subject, but completely forgot about this method of induction
Using the induction method taught here:
Base case: In any set of 1 horse, all horses in the group are the same color (trivially).
Induction hypothesis: In any set of N horses, all N horses in the set are the same color.
Induction step: Take a set of N+1 horses. Remove 1 horse from the set. That leaves a set of N horses, which by the induction hypothesis are all the same color as each other. Now replace the horse and remove a different horse. Again, we have a set of N horses, all the same color as each other. Since the two smaller sets overlap with each other, they must both be the same color. Therefore, all horses in the set of N+1 horses are the same color as each other. Furthermore, if this process is continued, every horse will be included at some step. Therefore ALL horses are the same color.
So... Find the flaw. It's actually a little tricky and tests your understanding of induction and things you need to be careful about when crafting inductive proofs. :)
You haven't proven that the "N" in the 'set of N+1 horses' is the same "N" as in the hypothesis. You haven't shown how the horse's color fits into ANY equation.
@WayneASchneider it is the same N. You don't show that, it's defined that way. This is the standard way you move from induction hypothesis to induction step. In the induction hypothesis, you are assuming that for some given N, this thing is true. You've already proved it for your base case, the first N. Now you're assuming it for an arbitrary, given N. And then you're going to take that assumption and use it to prove that if the thing is true for N, then it is also true for N+1. You're proving that IF you can get to the Nth step of the ladder, THEN you can also get to the N+1st step. In that way, it's true for EVERY N, because you've shown it's true for N=1, then you've shown that if it's true for N=1, it's true for N=2, and you've shown that if it's true for N=2, it's true for N=3, and so on forever.
Induction can be confusing because it is based on the fundamental properties of the subject. If you don't know what they are, how they work and what you can do with them then you can't use logic to line them together to end up with a conclusion. I think that is what he means by well founded.
No, that's not what he means by well-founded. A relation R on a set S is well-founded if every non-empty subset of S has an R-minimal element (ZFC's axiom of foundation asserts exactly this for the membership relation itself). Example specifically for the natural numbers: if you pick random numbers, your selection (as long as you picked more than zero numbers) will always have a smallest number in it - and that is true for all selections you can possibly make. That's what this whole induction magic relies on. You prove that IF it holds for a specific element it will hold for the "next" one. And then you show the specific case for the "smallest" element and poooooof.... you've shown it for all.
@@tetsi0815 It may refer to a specific thing in context but I feel this is true for any set of rules which you can use to reach a conclusion with induction. If he didn't mean it that is quite the interesting coincidence.
I like this guy! He has a black Strat with a white pickguard sitting right next to him in his office... as do I. 😁 (Although my fretboard is maple, while his is rosewood.)
Often people will use circular reasoning in supposed inductions/proofs. I wouldn't see a problem with this so long as they admitted it was circular reasoning, but they don't admit it is.
I once needed that formula for triangular numbers and I figured it out with a slightly different method. I imagined an n by n square divided into 1 by 1 squares. Triangular numbers fill the diagonal from the upper left to the lower right corner plus everything to the left of that diagonal. So we have n²/2 squares plus a bit extra from the stair steps in the diagonal. That diagonal has n squares and our n²/2 triangle already ate half of each square, so we have n halves left. So the result is n²/2 + n/2 = (n²+n)/2.
Just in case someone out there doesn't know, Asaf Karagila is a legendary set theorist, an expert in the foundations of mathematics and logic, plus an overall cool, helpful dude. Big, big, big thanks to the folks at Numberphile for having him collab once more!
It's crazy for me watching this, because I'm currently being taught by him in uni!
excellently worded and brilliantly explained. i hope Asaf returns for more videos!
I'm currently being taught Set Theory and First Order Logic by Asaf, he's an excellent teacher :)
When I took Number Theory in college, we learned that if you take Well Foundedness (or Well Orderedness) as an axiom, you can prove Mathematical Induction works. If, instead, you take Mathematical Induction as an axiom, you can prove Well Foundedness works. But you need to take one as an axiom to prove the other. Still amazing.
Are they equivalent? Or do you get different truths elsewhere depending on which axiom you take?
@@liamdonegan9042 If you take one as true, you can immediately prove the other as true. So either way you get the same two things as true. So the two "worlds" are equivalent.
I'll never understand induction. Here's why: I started with no knowledge of induction whatsoever. Assuming what I know now about induction, I don't get any more knowledge about induction by reading a book or watching a video about it. Thus, I'll never understand induction.
Isn't this assumption false: "Assuming what I know now about induction, I don't get any more knowledge about induction by reading a book or watching a video about it"? If you change it to "whatever I know now, I will learn something new when I read a book about it", you've just proved that you can be an expert in induction at some time... 😀
Yea bro, just give up😊😊
Skibido ohio toilet rizz is just way more important anyway😢
GG quitters mentality for the win😮😊❤
@@manloeste5555 You've just proven there is an infinite amount of things to know about induction, meaning you will never understand induction. Hence induction is a paradox and induction is thereby non-existent; proof by contradiction.
Induction is very non-intuitive to me, just not how I think.
@@Nope-w3c no, that's not how induction works either. Induction works by saying that at any point you can take one step further, and since we know we can start at some beginning, every step is valid. So yes, there may be infinite knowledge, but at every step you can learn more by the same method; there is no requirement of hitting the "limit" of the function.
This is a way better explanation than I got, which was: "see, it's true for 1," and "see, it's true for 2," to which I said, "but that only proves it's true for 1 and 2; there's potentially an edge case here." But you started from "it's true for n=1" and then proved "it's true for n implies it's true for n+1"... which is *quite a different thing.* Thank you.
This is how it should always be presented. Induction is one of those things that is often poorly explained, and it's very easy for that key point to be missed or misunderstood, often because they try to use the base case to make the inductive step feel less abstract (even though it needs to be abstract).
imho I always felt that it was better to start with the inductive hypothesis and inductive step, and then provide the base case at the end. So you basically say "when it's true for n, it must also be true for n+1"; this step should be 100% generalized and have no relation to any specific value of n whatsoever. This helps prevent the base case from muddying up the purpose of the inductive step.
Attaching the concept of induction to recursion is so helpful to me...
I find it hard to believe this is what cooks my eggs in the morning. ;O)-
I'm sure you could kind of relate even that to this concept: the base case would be "your electric coil has current flowing through it", and the induction hypothesis is "if it has current flowing through it, a pan close to it will increase in heat", and induction stops working when you make the base no longer true.
It's an amazing technology. It proves math, it cooks food - it's amazingly versatile.
It also takes musicians into the hall of fame.
To understand induction you need to know someone who understands induction.
To understand recursion, you need to understand recursion.
Recursion is recursion.
Original source: divine intervention? Random chance? Someone just hallucinated understanding of induction?
@@minecraftermad invention is the base of all creation
Gauss did not use induction, though. He just saw that 1 + 2 + 3 + ... + 100 = (1 + 100) + (2 + 99) + ... + (50 + 51) = 50 * 101 = 5050.
And it has been claimed that this method was already known before Gauss. Famous people often benefit from what is known as the "Matthew Effect". For example, Einstein is often credited with things that he did not originate. For a start, 'relativity' was Poincaré's term.
@DJF1947 but this is about the YOUNG Gauss. The trick was of course long known, and it wasn't named after him or anything. As for relativity, it was Lorentz who wrote a series of papers on "Einstein's principle of relativity". The Lorentz transformations are still named after him.
I think the lesson in the story is that children can, and should be taught how to, think logically about numbers. Not just what is happening, but why, and what the underlying patterns are.
@@ronald3836 My point was that the young Gauss is routinely credited, by 'popular writers', with inventing it. And please do not lecture a physicist about relativity. Poincaré was the first to coin the term. Einstein did not like that: he wanted to call it invariants theory (in translation).
That’s correct, I was his teacher
Occasionally I like to think about induction as "There is no smallest counterexample". That way you exchange the domino-effect abstraction for the contradiction abstraction. That can be easier for some.
This is why well-foundedness is crucial here. If there is a counterexample, then there has to exist one with the lowest index. Say n_0 is that one; then we can tell all of the previous cases are correct, and then we can prove that n_0 is not a counterexample, leading to a contradiction.
I introduce the “no smallest counterexample” formulation as “no minimal criminal”. This nicely generalises to transfinite induction on well-ordered sets.
There is a famous proof that all natural numbers are interesting.
Base case: Zero is interesting. It's the identity for addition, if nothing else.
Induction step: Suppose n > 0, and all natural numbers m < n are interesting. Now we consider two cases:
1) If n is interesting, then it is trivially interesting.
2) If n is not interesting, then it is the smallest uninteresting natural number, which is pretty interesting.
Therefore, n is interesting.
QED
my set theory professor likes to call proofs that work off that pattern "contraduction"
@@PunkSage Yes. This is the well-ordering principle in action. One can also prove the well-ordering principle from induction.
They're both logically equivalent.
One way you can think about it is that the natural numbers are built by inventing this thing we call 0 (or 1) and repeatedly applying a successor operation on it, while postulating that each successor is a new object (we never loop back). So, every natural number is either 0 (or 1), or a successor of some other natural number and these are the only two cases you need to check.
I got about halfway through Gödel, Escher, Bach on my own. I remember Mr. Hofstadter talking about this.
We don't even have to postulate that the successor is a different object, that can be derived from Peano axioms.
The proof of the simple, yet mesmerizing and elegant Dijkstra shortest-path algorithm requires induction and well-foundedness. It's so simple after you grasp all of this!
I love how he switched from math to programming to explain a mathematical concept that he was talking about with a real-life example.
I used to be completely baffled by induction, but after learning how it actually works, it's really fun to use. Best proof technique candidate.
It's funny, I had the same change of opinion, until I realized that it is not always a satisfying proof technique. The reason is: you need to know the answer beforehand, which often leaves you wondering where it came from.
This is why I now prefer any proof that derives the answer with no required knowledge on the answer.
@@Nolord_ yeah i only get to use it when I suspect some pattern is true, definitely not my favourite
@@Nolord_ Agreed, it feels very mechanical. Follow a recipe, do some algebraic manipulation and you're done. It's a cool mechanism of proof in terms of the logic, but the novelty wears off quickly.
@@Nolord_ That's fair, it is a flaw in it. Though you can also test if a pattern you seem to have noticed works or not by trying to apply induction to it. If it works, you just proved the whole thing. If it doesn't work, you know your version is false and can work from there. This approach is less rigorous though, as you'd be lightly brute-forcing things this way.
@@plushloler I agree that induction is very useful in that regard. But usually, when writing a paper, when I get my induction proof that guarantees me the validity of the pattern, I try to find another proof that derives the answer which is made easier by the fact that you know the answer. This often leads to a shorter, more elegant proof and that feels less arbitrary.
In one of my papers that deals with a lot of sequences, I have like 2 inductions in them. Other proofs always use the information gained, to avoid doing them again.
The return of Asaf!
Gotta say, his vids have the highest chance of making no sense to me, but it's neat that I've seen him on the math stackExchange a lot lol
This is some good, pure numberphile. I’ll have to rewatch to actually understand tho.
This is the type of mathematics that I wish I knew better. It’s what holds everything up.
The basic premise of induction is like setting up dominoes and knocking over the first one. If you can prove that the next domino will fall down if the previous one does and you can make the first one fall then all the dominoes will fall. It’s just explained in more generic terms.
The base case is the first domino, you prove that whatever it is is true.
The induction case is proving that no matter what domino/case you’re looking at, if the previous one is true then it must mean this one is also true. It’s the trickiest part of induction, proving the dominos are lined up.
Once you have those two things done then you’ve got a proof. If N then N+1 applies to that base case you established. If 1 works and N=1 then N+1=2 works, and if 2 works and N=2 then N+1=3 works, etc etc.
I was worried I wasn't going to get anything new out of this one, but then he managed to link my semester of proofs in a new way to discrete math with the binary tree search. Worthwhile watch, thanks.
Those who really understand maths, such as Asaf, know how simple and trivial it is, and knowing this can bring you the kind of joy that is as sweet as pi.
Finally I understand how my induction stove works!
Physicist here: I thought he was gonna talk about EM Induction lol
the lightning symbol in the thumbnail tricked me too
Me too! My brain automatically assumed it was a Sixty Symbols video and not Numberphile.
Physicist here: I was confused as to why Numberphile would talk about EM induction, but why not right.
lolol I almost did. But saw it was numberphile so def prepared myself for some hardcore mathematics.
Getcher motor running...
I love Asaf. He makes ideas that piss me off when I hear other people explain them make sense.
It's interesting to note that so-called "strong induction" is not actually stronger than ordinary induction, in the sense that anything that can be proved by strong induction can also be proved by ordinary induction. It's just that for some problems the form is more convenient.
The "strong" in strong induction is not about its strength for proving stuff. The word "strong" refers to the notion that a stronger statement implies a weaker statement, i.e if A implies B, but B does not imply A, we say that A is stronger than B.
In that sense, the induction hypothesis is stronger in strong induction, because assuming that something is true for all numbers up to n implies that it is true for n. However, assuming that something is true for n does not imply that it is true for every number up to n.
There are statements that are true on the Real numbers that can be shown with strong induction that I think would be very hard to do with normal induction.
@@karlwaugh30 just rephrase your statement you are proving by induction to be "this holds for all numbers less than n"; then your regular induction hypothesis will be equivalent to the strong induction hypothesis.
You might think I am cheating but this just goes to show there is really no distinction between strong and ordinary induction - strong induction can just be viewed as regular induction on a slightly modified statement
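Written out, the reduction looks like this (a plain-text schema, just to make the trick in the comment above explicit):

Ordinary: P(1) and [ P(n) ⇒ P(n+1), for all n ] ⇒ P(n) for all n
Strong:   [ (P(m) for all m < n) ⇒ P(n), for all n ] ⇒ P(n) for all n

Define Q(n) := "P(m) holds for all m ≤ n". Then ordinary induction on Q is exactly strong induction on P, since Q(n) ⇒ Q(n+1) unpacks to "P(1), ..., P(n) together imply P(n+1)".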
Induction ⇔ Well-Ordering Principle
Paying for university and learning from you guys!
Yay, a Numberphile video on induction! Maybe I'll finally understand how it works.
Edit: I didn't - but I appreciate the effort.
1) Prove (statement holds for 1)
2) Prove (statement holds for n) => (statement holds for n+1)
Therefore (statement holds for all numbers)
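In symbols, that two-step schema reads:

[ P(1) ∧ ∀n ( P(n) ⇒ P(n+1) ) ] ⇒ ∀n P(n)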
Don't mind me, just proving the Fibonacci formula by induction:
Fibonacci formula: F(n) = (φ^n - (-1/φ)^n)/√5
Fibonacci series: F_n = F_(n-2) + F_(n-1) , F_0 = 0 , F_1 = 1
We need to prove P_F(n):
F(n) = F_n
base case n=0:
F(0) = (φ^0 - (-1/φ)^0)/√5
= 0 = F_0
base case n=1:
F(1) = (φ^1 - (-1/φ)^1)/√5
= 1 = F_1
F(0) = F_0 and F(1) = F_1
P_F(0) and P_F(1) are true.
assume P_F(k) and P_F(k+1) are true, k ∈ ℕ.
For n = k+2:
F_(k+2) = F_k + F_(k+1)
= (φ^k - (-1/φ)^k)/√5 + (φ^(k+1) - (-1/φ)^(k+1))/√5
= (φ^k(1+φ) - (-1/φ)^k(1-1/φ))/√5
= (φ^k * φ^2 - (-1/φ)^k * (-1/φ)^2)/√5
= (φ^(k+2) - (-1/φ)^(k+2))/√5
= F(k+2)
F(k+2) = F_(k+2)
P_F(k+2) is true.
∴ by the principle of mathematical induction, P_F(n) is true for all n ∈ ℕ.
QED.
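A quick numeric sanity check of the proof above, in Python (floating point, so only reliable for moderate n; this snippet is mine, not part of the proof):

from math import sqrt

phi = (1 + sqrt(5)) / 2

def binet(n):
    # Closed form F(n); round() absorbs the tiny float error.
    return round((phi**n - (-1/phi)**n) / sqrt(5))

def fib_iter(n):
    # Recurrence F_n, computed iteratively.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(binet(n) == fib_iter(n) for n in range(40))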
I don't need maths to prove he was a great pianist.
saying "F_(k+2) = F_k + F_(k+1)" is true, but what you're trying instead to do is go from F_k + F_(k+1), use your induction hypothesis, to arrive to the number corresponding to F_(k+2) at the bottom! That is, prove the formula is equivalent through sandwiching by the expected value / go from RHS to LHS with only other derivations in-between
Beautiful Squier Strat.
I want to replace my dirty old gas range with an induction cooker; and also replace my water heater to go gas-free.
Factorials are a prime case for dynamic programming instead of recursion.
More Asaf Karagila please! Thanks!
I was actually totally understanding the prime factor induction proof, but then I got so lost when he tried to justify it further with “well foundedness”.
That's my impression too, and it is because there are actually two proofs of the fact that every number larger than 1 has a prime factor in the video.
The first proof is by ordinary induction on the natural numbers n, but what is proven at each step is the fact that each element of {2,3,4,...n} has a prime factor. Here, the induction is initialized with n=2, and the assertion obviously holds for {2}.
The second proof is in a way on the size of the set of divisors of the number n. The induction is founded by all primes, as they have a 2-element set of divisors. The induction step goes like this: if n has more than 2 divisors, then it can be written as a product of two factors which each have fewer divisors than n. As one can decrease the number of divisors all the way down to 2, this proves the assertion. In my opinion, it's simply not true that this induction needs no foundation, and my impression is that the distinction between these two proofs hasn't been made sufficiently clear in the video.
Notice that the first proof uses the usual complete ordering "<" on the natural numbers, while the second relies on a different well-founded relation.
I think the prime divisor example is a good one because you actually _need_ induction (or something very similar) to prove that. Euclid's proof by infinite descent is essentially equivalent.
Admittedly a long time ago…. The awesome Mr Russell (Maths teacher extraordinaire) taught us the steps as: (1) assume true for n=k; (2) using that, prove for k+1; (3) find an example (like n=1)…. At the time I noted that many people got hung up on the change from n to k, so the method here would have helped them.
RIP to all the Electrical Engineering students who are lost and confused.
Since programming was brought up: induction is kind of like memoization in recursive dynamic programming. Memoization: if not calculated yet, recurse, then cache. Induction: if not proved yet, recurse, then prove.
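A tiny Python illustration of that parallel, using the standard functools cache (the Fibonacci example is mine):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # "If not calculated yet, recurse, then cache": each case is
    # established once and then relied upon, like a proved lemma.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075, no exponential blow-up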
Disappointed this wasn't about electromagnetic induction.
Very intrigued and excited to learn a new form of induction I've never heard of.
Missed opportunity to flash an image of Matt when recursion was mentioned.
For an instant I thought he was going to explain magnetic induction in guitar pickups to us...
If I'm allowed to brag a bit, I figured out the formula for triangular numbers on my own, and then proved it by induction, when I was in my early teens. That's a case of reinventing the wheel, of course, but I still have fond memories about it 🙂
I loved repeatedly proving famous formulae to myself as a teen. It nurtured my love for math!
Definitely brag about it dude, that's incredible in our day and age.
Same! My brother had given me the little fact about the numbers on a roulette wheel adding to 666. I was bored in the shops and decided to verify this mentally, which led me to looking for a faster way than just summing them. Trying to find/prove other formulae (e.g. for pyramid numbers, etc) became a favourite pastime.
Never brag.
@@jacoboribilik3253 I want to say some mean things to you.
Gauss was just built different. Just a genius from age five. Gauss and Euler were the goats.
11:32 37 mentioned! Raaah! What is a random number?!
This didn't solve any of my questions about how to cook rice on my induction stove 😮💨
Rinse the rice in a sieve. Place in a pan on the stove. Add water according to the directions. Bring to boil. Set to lowest heat. Cover and cook for 11 min (white rice) or 23 min (brown rice).
Get a rice cooker.
Great video. Remember the magic of induction from school. The Python example however is a bit silly. A recursive function will perform the same number of multiplications as an iterative function, only you have to pay the cost of function calls. The real magic is C++ constexpr, "constexpr int factorial(int n) { int res = 1; while (n > 1) res *= n--; return res; }".
I remember when I first saw induction in high school. It is an insanely smart idea if you stop to think about it.
I didn't know the guitarist from Metallica was a Math nerd as well! :O
So is the guitarist from Queen! 😮
He even kept his guitar behind 😂
@@richardgratton7557 He's an *astronomy* nerd. (Which admittedly involves a certain amount of math nerdery...)
Babe, babe wake up, a new numberphile video just dropped
In my undergrad, we said a set A was well-ordered iff any non-empty subset B has one element that is "smaller" than all the rest. Well-formed was a term from the philosophy department, specifically in logic: a formula is a well-formed formula if it meets all the relevant syntax rules.
The term Asaf used is well-founded. Well-ordered is well-founded+totally-ordered. Divisibility, for example, is well-founded but not well-ordered, since it's not a total order.
The domino graphics are on point since induction is just utilizing the concept of "implies next".
2:57 I've always seen the formula stated as (n+1)n / 2, so the factoring is less cumbersome
Exactly.
And I think it is more intuitive. The first and last numbers sum to (n+1). The next pair also sums to (n+1). And so on. And there will be n/2 pairs of numbers... so (n+1) x (n/2).
@ericrosen6626 Exactly!
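One way to make the pairing argument above airtight for odd n as well is to write the sum twice, once reversed, so the pairs appear in 2S rather than S:

 S = 1 + 2 + ... + n
 S = n + (n-1) + ... + 1
2S = (n+1) + (n+1) + ... + (n+1)   [n pairs]
 ⇒ S = n(n+1)/2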
1:55 He’s overestimating how much is taught in high school. Induction isn’t taught. College students barely understand Algebra 1.
Asaf Karagila is amazing.
Induction is so surprisingly powerful. Like the Pigeonhole Theorem or the Triangle Inequality.
Poincaré described this concept more succinctly.
Infinitely more succinctly.
The thing that helped me the most about induction was not the domino characterization, but infinite descent. Basically you establish a base case and then assume failure for some n. You can choose n to be the minimum at which failure occurs, and then use the fact that it works for n-1 to establish a contradiction. Simple.
Be aware that if the wall gets quite cold and the radiators are working hard, a guitar leaning against the wall might develop unpleasant warping of the neck. Potentially you'll have no issues, but there's no guarantee.
Induction: add a little resistance, a battery, and the next thing you know you have set fire to the foundations of modern society.
As the Tiger Lillies said, burning things is fun.
Finding equivalencies in mathematics is incredibly difficult. The recursion was a great showcase, reasonably easy to understand; unfortunately, I don't think it enables me to find similar concepts anywhere else.
It is also possible to think about the prime divisor example by considering only 2 as the base case (i.e. not every prime): 2 is prime, so it has a prime divisor (itself). Then the same induction hypothesis and step complete the argument, noting that if n is not prime, by definition n has a factor k that is strictly greater than 1 and less than n.
The inductive proof for 2 can be used as-is, since "every number k such that 1 < k < 2 has a prime factor" is vacuously true.
Proof by contradiction always made the most sense to me, but it's not always clear what contradiction you need to build towards in your proof. Contraposition is the real magic.
Contraposition is just a reversal of a standard if clause or assumption. The initial statement is “If P then Q”, the contraposition is “If not Q then not P”
Basically if p is true then q must be true, therefore the only way q can be false is if p is also false. If p wasn’t false then q couldn’t be false.
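For reference, the underlying equivalence is just:

(P ⇒ Q) ≡ (¬Q ⇒ ¬P)

Both sides are false exactly when P holds and Q fails, which is why proving one proves the other.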
@@anthonycannet1305 Did someone say ravens?
In programming, this stack limit problem is exactly why recursion isn't used very often in practice. In the example of the factorial, the space complexity is indeed reduced to log_2(n) (where n is the number you are computing, which is different from regular big O notation), but the time complexity is still at n, you still need n multiplication operations no matter how you slice it.
For this reason, a simple loop is often preferred in this case, as the space complexity is now detached from the input (i.e. O(1)), while the time complexity is still n, but will be much faster in practice, exactly because you don't need to allocate stack space.
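For concreteness, a sketch of the loop version in Python (names mine):

def fact_loop(n):
    # n-1 multiplications, constant extra space (ignoring the size of
    # the growing result itself), and no recursion-depth limit to hit.
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result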
Yeah, maths people always go for recursion first, treating everything as a function. In most languages every function call must return. But in languages that do optimise tail calls, where a function that has nothing left to do can be skipped in the return path, this form of recursion is identical to a loop in practice. But a "loop" that could run different functions on each iteration.
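The accumulator-passing shape being described looks like this (illustrative only: CPython does not perform tail-call optimisation, so the stack still grows here):

def fact_tail(n, acc=1):
    # The recursive call is the very last action, so a language with
    # tail-call optimisation can compile this into a loop.
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)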
You're assuming that multiplication is a constant-time operation. For big integers, it is not, and it turns out that the binary recursion algorithm is faster than the loop for large enough n.
@@DeGuerre A: You don't need recursion to do a binary tree algorithm. It simplifies it, but it's not required.
B: If avoiding big integers is a target, the recursion shown is suboptimal, because it groups together all the big numbers separate from all the small numbers. You'd want to select the inner 50% and the outer 50% together.
@@dojelnotmyrealname4018 Sure. The "optimal" algorithm would actually be to put all of the numbers into a priority queue, and then iteratively remove the two smallest numbers, multiply them together, and insert the product back. Repeat until you only have one number left.
Nobody said recursion was required. My point only that the binary recursion algorithm is faster in practice than a simple loop for computing large factorials.
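A possible Python rendering of that priority-queue scheme (a sketch, not a benchmarked implementation):

import heapq

def product_smallest_first(numbers):
    # Repeatedly multiply the two smallest entries and push the result
    # back, so operands stay as close in size as possible.
    heap = list(numbers)
    if not heap:
        return 1
    heapq.heapify(heap)
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        heapq.heappush(heap, a * b)
    return heap[0]

# product_smallest_first(range(2, 25)) == 24!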
@@DeGuerre That could increase the complexity because now you have to sort through the queue to place the numbers.
This IS like watching magic! Thank you!
It's funny how there are multiple ways to approach things.
Years ago I randomly came up with: B = (A/2) * (A - 1) + A
Where A is the consecutive numbers in the series, and B is the Sum of numbers
...turns out a formula already exists, but I saw the pattern between the Total (Sum) and the numbers in the series increase by a set rate
1/1 = 1
3/2 = 1.5
6/3 = 2...
Since it's at a rate of 0.5 increase, this is why I divided by 2 first (A/2)
Because the rate is proportional to the previous consecutive numbers, I get the previous number (A-1)
To get the previous total, I multiply the halved value by the previous... (A/2) * (A-1)
Then to get the next in the series, you just add the last number (A/2) * (A-1) + A
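For what it's worth, a couple of lines of algebra show this is the standard triangular-number formula in disguise:

(A/2)(A − 1) + A = (A² − A)/2 + A = (A² + A)/2 = A(A + 1)/2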
It was an old joke in high schools and colleges... "3 is a prime, 5 is a prime, 7 is a prime... The rest follows from induction."
"Horses must all have the same color as each other."
Proof by weak induction:
- base case One horse has the same color as itself, trivially true.
- induction hypothesis: Assume N horses must have the same color.
- induction step: given N+1 horses, the first N horses (i.e. all except horse N+1) all have the same color by the induction hypothesis, and the last N horses (i.e. all except horse 1) have the same color by the induction hypothesis; therefore ALL N+1 horses are in a set that all have the same color, thus all have the same color.
🥴
The problem here is with the N=2 case, because there is no overlap in the two sets. You would first have to prove that any two horses have the same color for this argument to work.
Edit: I have since realized this is a well-known example, but did not know that when making my original comment
@@notinglonias I have viewed this as in some sense fatal to the utility of (simple) induction. What this shows is that when you prove the inductive step (n-> n+1) you have to show it is actually correct for all n, i.e. true for an infinite number of cases, much like the original problem you are trying to prove.
@@silver6054 I get where you're coming from but I don't think that is the case. IIRC weak and strong induction are logically equivalent to each other but have different use cases
@@notinglonias Oh, I agree that strong and weak induction are logically equivalent, I was just using weak because that is what was used in the original horses are the same colour post. But of course, I still think my point holds some. Here we show it works for n=1 (or 0 if you like) and show a fairly convincing n->n+1 proof. You then have to go back to discover the argument doesn't work for n=2, and, IMO, the reason we do this is that we know the result is absurd, otherwise we might have been content with proo. So while I don't have an example, you can imagine an inductive step involving a chain of inequalities, and not realizing that one of them isn't true for some values (e.g. pi(x) > li(x)) "silently" breaking the inductive reasoning.
8:55 took me a minute, but it's basically a recursive function in software.
12:56 Boom.
It's at the very least an understatement to call this just "induction". It is induction, but a _special_ kind of induction: what's more precisely called "complete induction".
This is evidenced simply by the fact that, if you removed 2 from the trivial solution set, you would gain infinitely many counter-examples.
I've found recently while playing Gloomhaven, that some "puzzles" are more about being able to change your mindset than trying to fit the problem in it. Many things in mathematics, to me, are like this.
And here I was expecting electricity stuff
Although they amount to the same thing, I’ve always preferred the recursive phrasing (1), because it puts more emphasis on the importance of well-foundedness (we’re counting _down_ finitely), while a typical presentation of induction (2) looks like counting _up_ infinitely, which is really more like coinduction.
1. f(n) := if (n = 0) then [ base case ] else [ inductive case using f(n − 1) ]
2. P(0) by [ base case ] and for all n, P(n) → P(n + 1) by [ inductive case ]
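A runnable Python version of both phrasings, using the video's sum formula as the property (triangular is an illustrative name): the recursive form counts down to the base case, while the upward form can only ever be spot-checked for finitely many cases:

def triangular(n):
    # Phrasing 1: counts DOWN; well-foundedness guarantees the
    # base case is reached in finitely many steps.
    if n == 0:
        return 0
    return triangular(n - 1) + n

# Phrasing 2, checked upward: P(n) is "triangular(n) == n(n+1)/2".
for n in range(200):
    assert triangular(n) == n * (n + 1) // 2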
Wow, a very very good video. Thank you professor. This is why there needs to be a check for the first step in induction, right? That is what checks that what you are doing is well founded.
Be careful when defining factorial to a computer.
* You didn't specify the argument type. If you pass 7.5, you'll get the wrong answer. If you pass "aoeu", instead of getting a method error at the function call, you'll get a method error inside the function.
* If you pass a negative integer, you'll get 1, which is wrong. (-1)! is ∞ (a pole of the gamma function), which is not available in the Integer types, so it should throw an error.
* The standard factorial function in Julia, when passed 1000, will advise you to do big(1000), resulting in a BigInt. If it didn't throw an error, it'd return 0, because 1000! is a multiple of 2^64.
I once had to compute a formula involving diagonal numbers of Pascal's triangle. The arguments involved were small (around 100?), but big enough that the factorials overflowed the Float64 type, and I had to use the loggamma function and exponentiate the result.
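Hedged Python analogues of both points (checked_factorial and approx_binomial are made-up names; Python's own math.factorial already rejects negatives and non-integers):

import math

def checked_factorial(n):
    # Guard the argument type and sign explicitly, as suggested above.
    if not isinstance(n, int):
        raise TypeError("factorial is only defined for integers here")
    if n < 0:
        raise ValueError("factorial of a negative integer is undefined")
    return math.factorial(n)

def approx_binomial(n, k):
    # Work in log space via lgamma to dodge float overflow, then
    # exponentiate: C(n, k) = n! / (k! * (n - k)!).
    log_c = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return round(math.exp(log_c))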
A minor note: int in Python has arbitrary precision, i.e. it won't overflow.
I just learned this in school and didn't get sh**. Now I do, thanks Numberphile! :)
(Solved) 1 whole (1/0) or 1 base (0/1). N = 1 whole because they used square. To whole something, it has to be square. Yes, all whole numbers are square. Not in shape but in function. Circle is squared (L and H) (1 and 1). Two functions. 1 squared is 2. Parallelograms prove this. What else do Parallelograms tell you? Probability? Outcome?
The first example can start at 0: the sum of 0 is 0, and also (0^2 + 0)/2 = 0. The formula is proved for all Natural Numbers.
There is not one consistent definition of "natural numbers" in the world. You happened to use the one that includes zero.
@forcelifeforce You can't count the number of cows in a field without zero. Nature is as natural as you can get.
The domino analogy could've been used for more than visuals here: each domino is a specific property like "the sum of integers from 0 to 999 equals (999² + 999) / 2", and tipping it over means proving the property. Induction lets you find a chain of dominoes that will knock over the one you're interested in, but if there isn't a domino in that chain that you can tip over, nothing happens and your target remains unproven
Asaf Karagila! A name math stack exchange users will know well
I mean I'm sure Gauss could've figured out a formula, but he wouldn't have done it inductively. Summing opposite pairs is much more intuitive.
i took notes..........
great explanations!!!!!!
Fun fact, the binary splitting algorithm for computing factorial is also more efficient because large integer multiplication is faster if the two numbers being multiplied are roughly the same magnitude. Smart multiplication algorithms like Karatsuba and FFT work better in this case.
Multiplying 23! by 24 is very "lopsided", but multiplying 12! by (24!/12!) is not.
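A minimal sketch of that binary splitting in Python (prod_range and factorial_split are illustrative names):

def prod_range(lo, hi):
    # Product of the integers lo..hi, split in half so the two
    # recursive results have comparable magnitude; stack depth
    # is O(log n) rather than O(n).
    if lo > hi:
        return 1
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    return prod_range(lo, mid) * prod_range(mid + 1, hi)

def factorial_split(n):
    # The top-level multiplication is balanced: for n = 24 it is
    # 12! times (13 * 14 * ... * 24), not 23! times 24.
    return prod_range(1, n)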
The story about Gauss was first told by his friend Wolfgang Sartorius von Waltershausen in a biography titled "Gauss zum Gedächtnis". He knew it, according to him, from very trustworthy sources. He wrote, however, that it was the sum of an arithmetic series that the students were told to compute; he does not say precisely which one.
Fantastic, thank you. Loved this video. Induction should be taught in high school. The people of Israel live!
4:47 - “There’s a half-life for factoids.”
Such an interesting idea.
P.S. This was in context of 4:19.
You proved, in effect, that P(n) implies P(n+1). You'd find the algebra easier if you'd proved P(n-1) implies P(n). IME inductive arguments are easier if the goal case is n rather than n+1.
Splitting the factorial recursion into two halves (binary splitting) saves space but not time. The same number of multiplications still happens.
It also saves time, but for reasons that aren't immediately obvious.
1. Memory locality. A shallower stack depth means a smaller working set, so you are working with your CPU's cache, not against it.
2. Multiplication of two large integers is not a constant-time operation. Specifically, big integer multiplication is more efficient if you are multiplying two numbers of similar magnitude, than if you're multiplying a very large number with a very small one (e.g. 23! multiplied by 24). The reason is that the big integer library uses smarter algorithms such as Karatsuba or FFT, which work more efficiently in that case.
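A quick sanity check of that magnitude claim in Python:

import math
# 23! has 23 digits while 24 has 2: a very lopsided multiplication.
print(len(str(math.factorial(23))))                         # 23
# 12! (9 digits) times 24!/12! (16 digits) is far more balanced.
print(len(str(math.factorial(24) // math.factorial(12))))   # 16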
Amazing how many people here think they understand this!
I think the induction step, case 1 (if n is prime) actually _is_ the base case (or, the set of base cases). If you would structure the proof like that, then there wouldn't be a "hidden" base case.
If we assume the well-ordering principle to be true then induction has to be true as well. Proof: we want to prove a certain proposition P is true for all natural numbers. Let S={n in N : P(n) is true}.
Suppose the WOP is true (meaning every non-empty subset of the natural numbers has a least element). Assume further we have successfully proven 1) the case n=1 and 2) the inductive step for every n: P(n) => P(n+1). Now assume S is not equal to N. Then the set M of numbers in N for which P is false is non-empty, and it cannot contain n=1 because we already proved P(1) is true. By the WOP, M must have a least element, say m+1. But then P(m) must be true (m lies below the least element of M), and since the inductive step has been proven, P(m+1) must also be true. We arrive at a contradiction, so S=N and P is true for all n. Therefore WOP => Induction is a valid proof method.
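For readers who like it compact, the same argument in symbols (taking P(1) and the inductive step as already proven):

S = { n ∈ N : P(n) },  M = N \ S.
If M ≠ ∅, the WOP gives a least element of M; it is not 1 (we proved P(1)), so it is m + 1 for some m with m ∉ M, i.e. P(m) holds. The inductive step then gives P(m + 1), so m + 1 ∉ M, a contradiction. Hence M = ∅ and S = N.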
Great video, thanks!
Just submitted my thesis and my main result was proved by double induction!
Is it or will it be available online? Would love to have a read 😀
_Double_ induction? That sounds fancy.
Keeping that guitar near both a radiator and a window is a double crime
actually completely forgot about strong induction. my real analysis lecturer talked about it in passing, so we didn't study it that much (as generally you will be using the ordinary principle of induction over strong induction). was confused about how to prove the fundamental theorem of arithmetic in a recent subject, having completely forgotten about this method of induction
Using the induction method taught here:
Base case:
In any set of 1 horse, all horses in the set are the same color (trivially).
Induction hypothesis:
In any set of N horses, all N horses in the set are the same color.
Induction step:
Take a set of N+1 horses.
Remove 1 horse from the set. That leaves a set of N horses, which by the Induction hypothesis, are all the same color as each other.
Now replace the horse and remove a different horse. Again, we have a set of N Horses, all the same color as each other.
Since the two smaller sets overlap with each other, they must both be the same color. Therefore, all horses in the set of N+1 Horses are the same color as each other.
Furthermore, if this process is continued, every horse will be included at some step. Therefore ALL horses are the same color.
So... Find the flaw. It's actually a little tricky and tests your understanding of induction and things you need to be careful for when crafting inductive proofs. :)
"Since the two smaller sets overlap with each other" they don't necessarily overlap
@@killianobrien2007 ah, why not? In what case do they not overlap?
You haven't proven that the "N" in the 'set of N+1 horses' is the same "N" as in the hypothesis. You haven't shown how the horse's color fits into ANY equation.
@WayneASchneider it is the same N. You don't show that, it's defined that way. This is the standard way you move from induction hypothesis to induction step.
In the induction hypothesis, you are assuming that for some given N, this thing is true. You've already proved it for your base case, the first N. Now you're assuming it for an arbitrary given N. And then you're going to take that assumption and use it to prove that if the thing is true for N, then it is also true for N+1. You're proving that IF you can get to the Nth step of the ladder, THEN you can also get to the N+1st step.
In that way, it's true for EVERY N, because you've shown it's true for N=1, then you've shown that if it's true for N=1, it's true for N=2, and you've shown that if it's true for N=2, it's true for N=3, and so on forever.
@@Angi_Mathochist In the very first step, with a set of 2 horses!
Induction can be confusing because it is based on the fundamental properties of the subject. If you don't know what they are, how they work, and what you can do with them, then you can't use logic to link them together to reach a conclusion. I think that is what he means by well founded.
No, that's not what he means by well-founded. This well-founded thing is one of the axioms of modern mathematics (ZFC) and just means that for every set S there exists an order R under which all non-empty subsets of S have a smallest element. Example specifically for the natural numbers: if you pick random numbers, your selection (as long as you picked more than zero numbers) will always have a smallest number in it, and that is true for all selections you can possibly make. That's what this whole induction magic relies on. You prove that IF it holds for a specific element it will hold for the "next" one. And then you show the specific case for the "smallest" element and poooooof.... you've shown it for all.
@@tetsi0815 It may refer to a specific thing in context but I feel this is true for any set of rules which you can use to reach a conclusion with induction. If he didn't mean it that is quite the interesting coincidence.
I like this guy! He has a black Strat with a white pickguard sitting right next to him in his office... as do I. 😁 (Although my fretboard is maple, while his is rosewood.)
I LOVE YOU, INDUCTION
I’d like more videos with this guy
Most of this went over my head I admit, I’ll come back and watch the video another day when I’m less tired from irl stuff.
Fascinating.
Often people will use circular reasoning in supposed inductive proofs. I wouldn't see a problem with this so long as they admitted it was circular reasoning, but they don't admit it.
I once needed that formula for triangular numbers and I figured it out with a slightly different method. I imagined an n by n square divided into 1 by 1 squares. Triangular numbers fill the diagonal from the upper-left to the lower-right corner plus everything to the left of that diagonal. So we have n²/2 squares plus a bit extra from the stair steps in the diagonal. That diagonal has n squares, and our n²/2 triangle has already eaten half of each of them, so we have n halves left. So the result is n²/2 + n/2 = (n² + n)/2.