It also saves time, but for reasons that aren't immediately obvious. 1. Memory locality. A shallower stack depth means a smaller working set, so you are working with your CPU's cache, not against it. 2. Multiplication of two large integers is not a constant-time operation. Specifically, big integer multiplication is more efficient if you are multiplying two numbers of similar magnitude, than if you're multiplying a very large number with a very small one (e.g. 23! multiplied by 24). The reason is that the big integer library uses smarter algorithms such as Karatsuba or FFT, which work more efficiently in that case.
The sum for 1 + 2 + 3 ... n = (n^2 + n)/2, as Asaf shows us above. But we also know that the sum of all natural numbers is -1/12, as Brady (and Prof Copeland, Prof Padilla, and others) have shown us before. Therefore,
(n^2 + n)/2 = -1/12
n^2 + n = -1/6
n^2 + n + 1/6 = 0
Using the quadratic formula, we calculate that n = -1/2 ± 1/(2√3), which approximates to n = -0.211 or n = -0.789. So there you go, the last natural number is either -1/2 + 1/(2√3) or -1/2 - 1/(2√3).
I mean, I appreciate the humor, but that was probably the worst Numberphile episode ever made. The series that sums all the natural numbers diverges. Full stop. Only when you bring in the zeta function and analytic continuation (ζ(-1) = -1/12), or Ramanujan summation, can you make sense of the -1/12 result. And even then it is not a "traditional" sum as we would understand it.
@@tetsi0815 "Divergent series are not bad! They are useful, and they converge much faster than convergent series!" - Carl Bender I think the confusion is that people see 1 + 2 + 3 + ... and think the "+" sign means addition, and therefore what you have to do with it is add terms up. A series is a formal object which just happens to agree with addition in the finite and convergent cases.
Actually completely forgot about strong induction. My real analysis lecturer talked about it in passing, so we didn't study it that much (as generally you will be using the principle of induction over strong induction). I was confused about how to prove the fundamental theorem of arithmetic in a recent subject, but completely forgot about this method of induction.
Wow, a very very good video. Thank you professor. This is why there needs to be a check for the first step in induction, right? That is what checks that what you are doing is well-founded.
Finding equivalences in mathematics is incredibly difficult. The recursion was a great showcase, reasonably easy to understand; unfortunately, I don't think it enables me to find similar concepts anywhere else.
Proof by contradiction always made the most sense to me, but it's not always clear what contradiction you need to build towards in your proof. Contraposition is the real magic.
Contraposition is just a reversal of a standard if-clause or assumption. The initial statement is "If P then Q"; the contraposition is "If not Q then not P". Basically, if P is true then Q must be true; therefore the only way Q can be false is if P is also false. If P weren't false, then Q couldn't be false.
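In symbols: (P → Q) ⟺ (¬Q → ¬P), which is why proving the contrapositive proves the original statement.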
(Solved) 1 whole (1/0) or 1 base (0/1). N = 1 whole because they used square. To whole something, it has to be square. Yes, all whole numbers are square. Not in shape but in function. Circle is squared (L and H) (1 and 1). Two functions. 1 squared is 2. Parallelograms prove this. What else do parallelograms tell you? Probability? Outcome?
Just in case someone out there doesn't know, Asaf Karagila is a legendary set theorist, an expert in the foundations of mathematics and logic, plus an overall cool, helpful dude. Big, big, big thanks to the folks at Numberphile for having him collab once more!
It's crazy for me watching this, because I'm currently being taught by him in uni!
excellently worded and brilliantly explained. i hope Asaf returns for more videos!
I'll never understand induction. Here's why: I started with no knowledge of induction whatsoever. Assuming what I know now about induction, I don't get any more knowledge about induction by reading a book or watching a video about it. Thus, I'll never understand induction.
Isn't this assumption false: "Assuming what I know now about induction, I don't get any more knowledge about induction by reading a book or watching a video about it"? If you change it to "whatever I know now, I will learn something new when I read a book about it", you just proved that you can be an expert in induction at some time... 😀
Yea bro, just give up😊😊
Skibido ohio toilet rizz is just way more important anyway😢
GG quitters mentality for the win😮😊❤
@@manloeste5555 You've just proven there is an infinite number of things to know about induction, meaning you will never understand induction. Hence induction is a paradox and induction is thereby non-existent; proof by contradiction.
Induction is very non-intuitive to me, just not how I think.
@@Nope-w3c No, that's not how induction works either. Induction works by saying that at any point you can take one step further, and since we know we can start at some beginning, every step is valid. So yes, there may be infinite knowledge, but at every step you can learn more by the same method; there is no requirement of hitting the "limit" of the function.
I'm currently being taught Set Theory and First Order Logic by Asaf, he's an excellent teacher :)
Where does he teach?
When I took Number Theory in college, we learned that if you take Well Foundedness (or Well Orderedness) as an axiom, you can prove Mathematical Induction works. If, instead, you take Mathematical Induction as an axiom, you can prove Well Foundedness works. But you need to take one as an axiom to prove the other. Still amazing.
Are they equivalent? Or do you get different truths elsewhere depending on which axiom you take?
@@liamdonegan9042 If you take one as true, you can immediately prove the other as true. So either way you get the same two things as true. So the two "worlds" are equivalent.
Attaching the concept of induction to recursion is so helpful to me...
Occasionally I like to think about induction as "There is no smallest counterexample". That way you exchange the domino-effect abstraction for the contradiction abstraction. That can be easier for some.
This is why well-foundedness is crucial here. If there is a counterexample, then there has to exist one with the lowest index. Say n_0 is that one; then we can tell all of the previous cases are correct, and then we can prove that n_0 is not a counterexample, leading to a contradiction.
I introduce the “no smallest counterexample” formulation as “no minimal criminal”. This nicely generalises to transfinite induction on well-ordered sets.
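Spelled out, the "no minimal criminal" shape: let S = {n ∈ ℕ : P(n) fails}. If S were non-empty, well-foundedness gives a least element m. The base case rules out m = 0, and since m − 1 ∉ S, the induction step forces P(m), a contradiction. So S is empty.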
There is a famous proof that all natural numbers are interesting.
Base case: Zero is interesting. It's the identity for addition, if nothing else.
Induction step: Suppose n > 0, and all natural numbers m < n are interesting. Now we consider two cases:
1) If n is interesting, then it is trivially interesting.
2) If n is not interesting, then it is the smallest uninteresting natural number, which is pretty interesting.
Therefore, n is interesting.
QED
my set theory professor likes to call proofs that work off that pattern "contraduction"
@@PunkSage Yes. This is the well-ordering principle in action. One can also prove the well-ordering principle from induction. They're both logically equivalent.
This is a way better explanation than I got, which was: "see, it's true for 1" and "see, it's true for 2," to which I said, "but that only proves it's true for 1 and 2; there's potentially an edge case here." But you started from "it's true for n=1" and then proved "if it's true for n, it's true for n+1"... which is *quite a different thing.* Thank you.
This is how it should always be presented, though induction is one of those things that is often poorly explained, and it's very easy for that key point to be missed or misunderstood. Often because they try to use the base case to make the inductive step feel less abstract (even though it needs to be abstract).
imho I always felt that it was better to start with the inductive hypothesis and inductive step, and then provide the base case at the end. So you basically say "when it's true for n, it must also be true for n+1", this step should be 100% generalized and have no relation to any specific value of n whatsoever. This helps prevent the base case from muddying up the purpose of the inductive step.
I find it hard to believe this is what cooks my eggs in the morning. ;O)-
I'm sure you could kind of relate even that to this concept: the base case would be that your electric coil has current flowing through it, and the induction hypothesis is "if it has current flowing through it, a pan close to it will heat up", and induction stops working when you make the base no longer true.
It's an amazing technology. It proves math, it cooks food - it's amazingly versatile.
It also takes musicians into the Hall of Fame.
Gauss did not use induction, though. He just saw that 1 + 2 + 3 + ... + 100 = (1 + 100) + (2 + 99) + ... + (50 + 51) = 50 * 101 = 5050.
And it has been claimed that this method was already known before Gauss. Famous people often benefit from what is known as the "Matthew Effect". For example, Einstein is often credited with things that he did not originate. For a start, "relativity" was Poincaré's term.
@DJF1947 but this is about the YOUNG Gauss. The trick was of course long known, and it wasn't named after him or anything. As for relativity, it was Lorentz who wrote a series of papers on "Einstein's principle of relativity". The Lorentz transformations are still named after him.
I think the lesson in the story is that children can, and should be taught how to, think logically about numbers. Not just what is happening, but why, and what the underlying patterns are.
@@ronald3836 My point was that the young Gauss is routinely credited, by "popular writers", with inventing it. And please do not lecture a physicist about relativity. Poincaré was the first to coin the term. Einstein did not like that: he wanted to call it invariants theory (in translation).
That’s correct, I was his teacher
The proof of the simple yet mesmerizing and elegant Dijkstra shortest-path algorithm requires induction and well-foundedness. It's so simple after you grasp all of this!
One way you can think about it is that the natural numbers are built by inventing this thing we call 0 (or 1) and repeatedly applying a successor operation on it, while postulating that each successor is a new object (we never loop back). So, every natural number is either 0 (or 1), or a successor of some other natural number and these are the only two cases you need to check.
I got about halfway through Gödel, Escher, Bach on my own. I remember Mr. Hofstadter talking about this.
We don't even have to postulate that the successor is a different object, that can be derived from Peano axioms.
I love how he switched from math to programming to explain a mathematical concept that he was talking about with a real-life example.
The first thing that they proved here has a nice visual proof:
The numerator in the small formula can always be drawn as a rectangular grid of small square cells, where the two outer edge lengths differ by 1.
Such a rectangle can always be divided in two by creating a triangular staircase with one-tile-sized steps.
And this staircase then shows the numbers in the long sum on the left.
I used to be completely baffled by induction, but after learning how it actually works, it's really fun to use. Best proof technique candidate.
It's funny, I had the same change of opinion, until I realized that it is not always a satisfying proof technique. The reason is: you need to know the answer beforehand, which often lets you wonder where it came from.
This is why I now prefer any proof that derives the answer with no required knowledge on the answer.
@@Nolord_ Yeah, I only get to use it when I suspect some pattern is true; definitely not my favourite.
Agreed, it feels very mechanical. Follow a recipe, do some algebraic manipulation and you're done. It's a cool mechanism of proof in terms of the logic but the novelty wears off quickly @@Nolord_
@@Nolord_ That's fair, it is a flaw in it. Though you can also test if a pattern you seem to have noticed works or not by trying to apply induction to it. If it works, you just proved the whole thing. If it doesn't work, you know your version is false and can work from there. This approach is less rigorous though, as you'd be lightly brute-forcing things this way.
@@plushloler I agree that induction is very useful in that regard. But usually, when writing a paper, when I get my induction proof that guarantees me the validity of the pattern, I try to find another proof that derives the answer which is made easier by the fact that you know the answer. This often leads to a shorter, more elegant proof and that feels less arbitrary.
In one of my papers that deals with a lot of sequences, I have like 2 inductions in them. Other proofs always use the information gained, to avoid doing them again.
Keeping that guitar near both a radiator and a window is a double crime
Finally I understand how my induction stove works!
To understand induction you need to know someone who understands induction.
To understand recursion, you need to understand recursion.
Recursion is recursion.
Original source: divine intervention? Random chance? Someone just hallucinated understanding of induction?
@@minecraftermad invention is the base of all creation
The return of Asaf!
Gotta say, his vids have the highest chance of making no sense to me, but it's neat that I've seen him on the math stackExchange a lot lol
This is some good, pure numberphile. I’ll have to rewatch to actually understand tho.
This is the type of mathematics that I wish I knew better. It’s what holds everything up.
The basic premise of induction is like setting up dominoes and knocking over the first one. If you can prove that the next domino will fall down if the previous one does and you can make the first one fall then all the dominoes will fall. It’s just explained in more generic terms.
The base case is the first domino, you prove that whatever it is is true.
The induction case is proving that no matter what domino/case you’re looking at, if the previous one is true then it must mean this one is also true. It’s the trickiest part of induction, proving the dominos are lined up.
Once you have those two things done then you’ve got a proof. If N then N+1 applies to that base case you established. If 1 works and N=1 then N+1=2 works, and if 2 works and N=2 then N+1=3 works, etc etc.
Physicist here: I thought he was gonna talk about EM induction lol
the lightning symbol in the thumbnail tricked me too
Me too! My brain automatically assumed it was a Sixty Symbols video and not Numberphile.
Physicist here: I was confused as to why Numberphile would talk about EM induction, but why not right.
lolol I almost did. But saw it was numberphile so def prepared myself for some hardcore mathematics.
Getcher motor running...
Those who really understand maths, such as Asaf, know how simple and trivial it is, and knowing this can bring you the kind of joy that is as sweet as pi.
I was worried I wasn't going to get anything new out of this one, but then he managed to link my semester of proofs in a new way to discrete math with the binary tree search. Worthwhile watch, thanks.
Paying for university and learning from you guys!
2:57 I've always seen the formula stated as (n+1)n / 2, so the factoring is less cumbersome
Exactly.
And I think it is more intuitive. The first and last numbers add up to (n+1). The next pair also adds up to (n+1). And so on. And there will be n/2 pairs of numbers... so (n+1) × (n/2).
@ericrosen6626 Exactly!
The domino graphics are on point since induction is just utilizing the concept of "implies next".
It was an old joke in high schools and colleges... "3 is a prime, 5 is a prime, 7 is a prime... The rest follows from induction."
Induction is so surprisingly powerful. Like the Pigeonhole Principle or the Triangle Inequality.
If I'm allowed to brag a bit, I figured out the formula for triangular numbers on my own, and then proved it by induction , when I was in my early teens. That's a case of reinventing the wheel, of course, but I still have fond memories about it 🙂
I loved repeatedly proving famous formulae to myself as a teen. It nurtured my love for math!
Definitely brag about it dude, that's incredible in our day and age.
Same! My brother had given me the little fact about the numbers on a roulette wheel adding to 666. I was bored in the shops and decided to verify this mentally, which led me to looking for a faster way than just summing them. Trying to find/prove other formulae (e.g. for pyramid numbers, etc) became a favourite pastime.
Never brag.
@@jacoboribilik3253 I want to say some mean things to you.
I love Asaf. He takes ideas that piss me off when I hear other people explain them, and makes them make sense.
Yay, a Numberphile video on induction! Maybe I'll finally understand how it works.
Edit: I didn't - but I appreciate the effort.
1) Prove (statement holds for 1)
2) Prove (statement holds for n) => (statement holds for n+1)
Therefore (statement holds for all numbers)
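In one line: [P(1) ∧ ∀n (P(n) → P(n+1))] → ∀n P(n).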
Induction ⟺ Well-Ordering Principle
It's interesting to note that so-called "strong induction" is not actually stronger than ordinary induction, in the sense that anything that can be proved by strong induction can also be proved by ordinary induction. It's just that for some problems the form is more convenient.
The "strong" in strong induction is not about its strength for proving stuff. The word "strong" refers to the notion that a stronger statement implies a weaker statement, i.e if A implies B, but B does not imply A, we say that A is stronger than B.
In that sense, the induction hypothesis is stronger in strong induction, because assuming that something is true for all numbers up to n implies that it is true for n. However, assuming that something is true for n does not imply that it is true for every number up to n.
There are statements that are true on the Real numbers that can be shown with strong induction that I think would be very hard to do with normal induction.
@@karlwaugh30 just rephrase your statement you are proving by induction to be "this holds for all numbers less than n", then your regular induction hypothesis will be equivalent to the strong induction hypothesis.
You might think I am cheating but this just goes to show there is really no distinction between strong and ordinary induction - strong induction can just be viewed as regular induction on a slightly modified statement
14:29 actually it IS also faster!
The moment you use any algorithm for multiplication that is better than O(n^2), like Karatsuba, you can show this is faster than any implementation of the naive algorithm, even with the best multiplication algorithm!
Gauss was just built different. Just a genius from age five. Gauss and Euler were the goats.
I've found recently while playing Gloomhaven, that some "puzzles" are more about being able to change your mindset than trying to fit the problem in it. Many things in mathematics, to me, are like this.
I think the prime divisor example is a good one because you actually _need_ induction (or something very similar) to prove that. Euclid's proof by infinite descent is essentially equivalent.
Beautiful Squire Strat.
I want to replace my dirty old gas range with an induction cooker; and also replace my water heater to go gas-free.
Factorials are a prime case for dynamic programming instead of recursion.
Asaf Karagila is amazing.
Disappointed this wasn't about electromagnetic induction.
Very intrigued and excited to learn a new form of induction I've never heard of.
Since programming was brought up: induction is kind of like memoization in recursive dynamic programming: memoization=if not calculated yet, recurse, then calculate. induction: if not proved yet, recurse then prove
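A quick Python sketch of that parallel (lru_cache is the standard-library memoizer; the analogy in the comments is mine):
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # "if not calculated yet, recurse, then calculate": the cache plays
    # the role of the already-proved cases in an induction
    return n if n < 2 else fib(n - 1) + fib(n - 2)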
Babe, babe wake up, a new numberphile video just dropped
It is also possible to think about the prime divisor example by considering only 2 as the base case (i.e. not every prime): 2 is prime, so it has a prime divisor (itself). Then the same induction hypothesis and step complete the argument, noting that if n is not prime, by definition n has a factor k that is strictly greater than 1 and less than n.
The inductive proof for 2 can be used as-is, since "every number k such that 1 < k < 2 has a prime divisor" is vacuously true.
More Asaf Karagila please! Thanks!
I was actually totally understanding the prime factor induction proof, but then I got so lost when he tried to justify it further with “well foundedness”.
That's my impression too, and it is because there are actually two proofs of the fact that every number larger than 1 has a prime factor in the video.
The first proof is by ordinary induction on the natural numbers n, but what is proven at each step is the fact that each element of {2,3,4,...n} has a prime factor. Here, the induction is initialized with n=2, and the assertion obviously holds for {2}.
The second proof is in a way on the size of the set of divisors of the number n. The induction is founded by all primes, as they have a 2-element set of divisors. The induction step goes like this: if n has more than 2 divisors, then it can be written as a product of two factors which each have less divisors than n. As one can decrease the number of divisors all the way down to 2, this proves the assertion. In my opinion, it's simply not true that this induction needs no foundation, and my impression is that the distinction between these two proofs hasn't been made sufficiently clear in the video.
Notice that the first proof uses the usual total ordering "<" on the natural numbers, while the second relies on a different well-founded relation (having fewer divisors).
Missed opportunity to flash an image of Matt when recursion was mentioned.
This IS like watching magic! Thank you!
I remember when I first saw induction in high school. It is an insanely smart idea if you stop to think about it.
I love videos with Asaf. He is so cool
Admittedly a long time ago... The awesome Mr Russell (Maths teacher extraordinaire) taught us the steps as: (1) assume true for n=k; (2) using that, prove for k+1; (3) find an example (like n=1). At the time I noted that many people got hung up on the change from n to k, so the method here would have helped them.
Don't mind me, just proving the Fibonacci formula by induction:
Fibonacci formula: F(n) = (φ^n - (-1/φ)^n)/√5
Fibonacci series: F_n = F_(n-2) + F_(n-1) , F_0 = 0 , F_1 = 1
We need to prove P_F(n):
F(n) = F_n
base case n=0:
F(0) = (φ^0 - (-1/φ)^0)/√5
= 0 = F_0
base case n=1:
F(1) = (φ^1 - (-1/φ)^1)/√5
= 1 = F_1
F(0) = F_0 and F(1) = F_1
P_F(0) and P_F(1) are true.
assume P_F(k) and P_F(k+1) are true, k ∈ ℕ.
For n = k+2:
F_(k+2) = F_k + F_(k+1)
= (φ^k - (-1/φ)^k)/√5 + (φ^(k+1) - (-1/φ)^(k+1))/√5
= (φ^k(1+φ) - (-1/φ)^k(1-1/φ))/√5
= (φ^k * φ^2 - (-1/φ)^k * (-1/φ)^2)/√5
= (φ^(k+2) - (-1/φ)^(k+2))/√5
= F(k+2)
F(k+2) = F_(k+2)
P_F(k+2) is true.
∴ by the principle of mathematical induction, P_F(n) is true for all n ∈ ℕ.
QED.
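If anyone wants a quick numerical sanity check of that closed form, here is a throwaway Python sketch (not part of the proof):
phi = (1 + 5 ** 0.5) / 2
def binet(n):
    # rounded to absorb floating-point error for small n
    return round((phi ** n - (-1 / phi) ** n) / 5 ** 0.5)
fibs = [0, 1]
for _ in range(30):
    fibs.append(fibs[-1] + fibs[-2])
assert all(binet(n) == fibs[n] for n in range(32))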
I don't need maths to prove he was a great pianist.
saying "F_(k+2) = F_k + F_(k+1)" is true, but what you're trying instead to do is go from F_k + F_(k+1), use your induction hypothesis, to arrive to the number corresponding to F_(k+2) at the bottom! That is, prove the formula is equivalent through sandwiching by the expected value / go from RHS to LHS with only other derivations in-between
Induction: add a little resistance, a battery, and next thing you know you have set fire to the foundations of modern society.
As the Tiger Lillies said, burning things is fun.
1:55 He’s overestimating how much is taught in high school. Induction isn’t taught. College students barely understand Algebra 1.
Just submitted my thesis and my main result was proved by double induction!
Is it or will it be available online? Would love to have a read 😀
_Double_ induction? That sounds fancy.
RIP to all the Electrical Engineering students who are lost and confused.
Although they amount to the same thing, I've always preferred the recursive phrasing (1), because it puts more emphasis on the importance of well-foundedness (we're counting _down_ finitely), while a typical presentation of induction (2) looks like counting _up_ infinitely, which is really more like coinduction.
1. f(n) := if (n = 0) then [ base case ] else [ inductive case using f(n − 1) ]
2. P(0) by [ base case ] and for all n, P(n) → P(n + 1) by [ inductive case ]
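Concretely, phrasing 1 for the triangular-number sum from the video, as a Python toy example of my own:
def tri(n):
    # counting down finitely to the base case
    return 0 if n == 0 else tri(n - 1) + n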
I LOVE YOU, INDUCTION
The thing that helped me the most with induction was not the domino characterization, but infinite descent. Basically you establish a base case and then assume failure for some n. You can choose n to be the minimum at which failure occurs, and then use the fact that it works for n-1 to establish a contradiction. Simple.
For an instant I thought he was going to explain magnetic induction in guitar pickups to us...
In my undergrad, we said a set A was well-ordered iff any nonempty subset B has one element that is "smaller" than all the rest. Well-formed was a term from the philosophy department, specifically in logic: a formula is a well-formed formula if it meets all the relevant syntax rules.
The term Asaf used is well-founded. Well-ordered is well-founded+totally-ordered. Divisibility, for example, is well-founded but not well-ordered, since it's not a total order.
i took notes..........
great explanations!!!!!!
Q: How do mathematicians induce good behavior in their kids?
A: "If I've told you n times, I've told you n+1 times..."
I didn't know the guitarist from Metallica was a math nerd as well! :O
So is the guitarist from Queen! 😮
He even kept his guitar behind 😂
@@richardgratton7557 He's an *astronomy* nerd. (Which admittedly involves a certain amount of math nerdery...)
It's at the very least an understatement to call this just "induction" - it is, but it's a _special_ kind of induction, it's what's more precisely called "complete induction".
Be aware that if the wall gets quite cold and the radiators are working hard, a guitar leaning against the wall might get unpleasant warping of the neck. Potentially you'll get no issues, but there's no guarantee.
In programming, this stack limit problem is exactly why recursion isn't used very often in practice. In the example of the factorial, the space complexity is indeed reduced to log_2(n) (where n is the number you are computing, which is different from regular big O notation), but the time complexity is still at n, you still need n multiplication operations no matter how you slice it.
For this reason, a simple loop is often preferred in this case, as the space complexity is now detached from the input (i.e. O(1)), while the time complexity is still n, but will be much faster in practice, exactly because you don't need to allocate stack space.
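For reference, the loop version being described, as a minimal sketch (names are mine):
def factorial_iter(n):
    result = 1  # constant extra space: no call-stack growth
    for k in range(2, n + 1):
        result *= k
    return result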
Yeah, math people always go for recursion first, treating everything as a function. In most languages every function call must return. But in languages that do optimise tail calls, where a call that has nothing left to do can be skipped in the return path, this form of recursion is identical to a loop in practice. But a "loop" that could run different functions on each iteration.
You're assuming that multiplication is a constant-time operation. For big integers, it is not, and it turns out that the binary recursion algorithm is faster than the loop for large enough n.
@@DeGuerre A: You don't need recursion to do a binary tree algorithm. It simplifies it, but it's not required.
B: If avoiding big integers is a target, the recursion shown is suboptimal, because it groups together all the big numbers separate from all the small numbers. You'd want to select the inner 50% and the outer 50% together.
@@dojelnotmyrealname4018 Sure. The "optimal" algorithm would actually be to put all of the numbers into a priority queue, and then iteratively remove the two smallest numbers, multiply them together, and insert the product back. Repeat until you only have one number left.
Nobody said recursion was required. My point only that the binary recursion algorithm is faster in practice than a simple loop for computing large factorials.
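A sketch of that priority-queue idea with Python's heapq (my own rendering of the description above):
import heapq

def factorial_pq(n):
    # repeatedly multiply the two smallest values until one remains
    heap = list(range(2, n + 1)) or [1]
    heapq.heapify(heap)
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, a * b)
    return heap[0]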
@@DeGuerre That could increase the complexity because now you have to sort through the queue to place the numbers.
They just tell you what to write to get the marks.
Like Feynman said, math in school isn't about math. It's about getting the slower kids to get a "right answer".
Fascinating.
Great video. I remember the magic of induction from school. The Python example, however, is a bit silly. A recursive function will perform the same number of multiplications as an iterative function, only you have to pay the cost of function calls. The real magic is C++ constexpr: "constexpr int factorial(int n) { int res = 1; while (n > 1) res *= n--; return res; }".
9:56 "Euler said this and i am saying it again, There is atleast one prime between any natural number and its double"-Paul Erodos.
Great video, thanks!
This didn't solve any of my questions about how to cook rice on my induction stove 😮💨
Rinse the rice in a sieve. Place in a pan on the stove. Add water according to the directions. Bring to boil. Set to lowest heat. Cover and cook for 11 min (white rice) or 23 min (brown rice).
Get a rice cooker.
Fantastic, thank you. Loved this video. Induction should be taught in high school. The people of Israel live!
It's funny how there are multiple ways to approach things.
Years ago I randomly came up with: B = (A/2) * (A - 1) + A
Where A is the consecutive numbers in the series, and B is the Sum of numbers
...turns out a formula already exists, but I saw that the ratio between the total (sum) and the last number in the series increases at a set rate
1/1 = 1
3/2 = 1.5
6/3 = 2...
Since it increases at a rate of 0.5, that's why I divided by 2 first (A/2)
Because the rate is proportional to the previous consecutive numbers, I get the previous number (A-1)
To get the previous total, I multiply the halved value by the previous... (A/2) * (A-1)
Then to get the next in the series, you just add the last number (A/2) * (A-1) + A
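For what it's worth, a little algebra shows this is the standard formula in disguise: (A/2) * (A - 1) + A = (A^2 - A + 2A)/2 = A(A+1)/2.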
"Horses must all have the same color as each other."
Proof by weak induction:
- base case One horse has the same color as itself, trivially true.
- induction hypothesis: Assume N horses must have the same color.
- induction step: given N+1 horses, the first N horses (i.e. all except horse N+1) all have the same color by the induction hypothesis, and the last N horses (i.e. all except horse 1) have the same color by the induction hypothesis; therefore ALL N+1 horses are in a set that all have the same color, thus all have the same color.
🥴
The problem here is with the N=2 case, because there is no overlap in the two sets. You would first have to prove that any two horses have the same color for this argument to work.
Edit: I have since realized this is a well-known example, but did not know that when making my original comment
@@notinglonias I have viewed this as in some sense fatal to the utility of (simple) induction. What this shows is that when you prove the inductive step (n-> n+1) you have to show it is actually correct for all n, i.e. true for an infinite number of cases, much like the original problem you are trying to prove.
@@silver6054 I get where you're coming from but I don't think that is the case. IIRC weak and strong induction are logically equivalent to each other but have different use cases
@@notinglonias Oh, I agree that strong and weak induction are logically equivalent; I was just using weak because that is what was used in the original horses-are-the-same-colour post. But of course, I still think my point holds somewhat. Here we show it works for n=1 (or 0 if you like) and show a fairly convincing n->n+1 proof. You then have to go back to discover the argument doesn't work for n=2, and, IMO, the reason we do this is that we know the result is absurd; otherwise we might have been content with the proof. So while I don't have an example, you can imagine an inductive step involving a chain of inequalities, and not realizing that one of them isn't true for some values (e.g. pi(x) > li(x)) "silently" breaking the inductive reasoning.
In my view, induction is just a special case of proof by contradiction (I don't care about people who don't like proof by contradiction). If you want to prove that P(n) is true for all n >= 1, assume that there is n>=1 for which P(n) is false. Then there is a minimal such number n. If n=1, then prove that P(1) is true. If n > 1, then prove that P(n) is true by using the fact that P(k) is true for all k < n.
edit: indeed the video points out that "there is a smallest number" is key.
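You can even play with that framing computationally. A toy sketch, using the video's sum formula as the P(n); the helper name and the search bound 1,000 are mine and purely illustrative:

# Search for the least counterexample of a claim (there is none, of course).
def claim(n):
    return sum(range(1, n + 1)) == n * (n + 1) // 2

least_counterexample = next((n for n in range(1, 1_000) if not claim(n)), None)
assert least_counterexample is None  # no counterexample below 1,000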
Who tf doesn't like proof by contradiction???
@@hanifarroisimukhlis5989 Intuitionists. (Apparently they do accept proof by induction, I guess because you can always "expand" such a proof to a direct proof.)
The proof also requires well-foundedness and working with subsets of the natural numbers. It kind of depends on which concept you think is more fundamental: double-negation elimination, well-foundedness plus subsets of the naturals, or induction.
Yeah I had some profs that would completely ban proof by contradiction. No idea where the hate came from, maybe it's a regional/generational thing.
But I feel like a lot of the clearest and most canonized proofs in math history use contradiction. (Proof of infinite primes, proof of irrational roots, etc)
@@javen9693 Agreed. Assuming the contrary of what you want to prove gives you the best starting position. Why make life more difficult than necessary.
The domino analogy could've been used for more than visuals here: each domino is a specific property like "the sum of integers from 0 to 999 equals (999² + 999) / 2", and tipping it over means proving the property. Induction lets you find a chain of dominoes that will knock over the one you're interested in, but if there isn't a domino in that chain that you can tip over, nothing happens and your target remains unproven
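And that specific domino checks out; a one-line sanity check (just arithmetic, nothing deep):

assert sum(range(1000)) == (999**2 + 999) // 2  # both sides are 499,500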
That's good to know. Thank you.
Asaf Karagila! A name math stack exchange users will know well
And here I was expecting electricity stuff
The story about Gauss was first told by his friend Wolfgang Sartorius von Waltershausen in a biography titled "Gauss zum Gedächtnis". According to him, he had it from very trustworthy sources. He wrote, however, that the students were told to compute the sum of an arithmetic series; he does not say precisely which one.
Fun fact, the binary splitting algorithm for computing factorial is also more efficient because large integer multiplication is faster if the two numbers being multiplied are roughly the same magnitude. Smart multiplication algorithms like Karatsuba and FFT work better in this case.
Multiplying 23! by 24 is very "lopsided", but multiplying 12! by (24!/12!) is not.
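For the curious, a minimal sketch of that binary splitting in Python (prod_range is my name, not from the video):

def prod_range(lo, hi):
    # Product of the integers lo..hi inclusive, splitting the range in half.
    if lo > hi:
        return 1
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    # Each half yields a number of roughly similar size, so the final
    # multiplication is balanced rather than lopsided.
    return prod_range(lo, mid) * prod_range(mid + 1, hi)

# factorial(24) becomes prod_range(1, 12) * prod_range(13, 24), and so on down.
assert prod_range(1, 24) == 620448401733239439360000  # 24!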
In most languages you could use tail recursion to avoid blowing the stack, though that requires the language to perform tail-call optimization. Python doesn't do this at all. But if it did, you could just use:
def tail_recursive_factorial(n, accumulator=1):
    if n == 0:
        return accumulator  # all the work has accumulated here already
    else:
        # The recursive call is in tail position: nothing remains to do
        # after it, so a TCO-capable language can reuse the stack frame.
        return tail_recursive_factorial(n - 1, n * accumulator)
Which would run in constant stack space in a language with tail-call optimization, rather than growing one frame per call. Regardless, I love how this video helps people to think outside the box.
Oooooo, he really showed recursion on a computer. Earlier this year, I had to code factorial recursion in assembly language using MIPS. :o
3:10 It's so triggering that he's not writing it as n(n+1)+2(n+1)=(n+1)(n+2)
Poincaré described this concept more succinctly.
Infinitely more succinctly.
I think case 1 of the induction step (if n is prime) actually _is_ the base case (or rather, the set of base cases). If you structured the proof like that, there wouldn't be a "hidden" base case.
Can induction be proved without induction? I know it works, but what is the proof it works?
Asaf, you are a king! 😁
Splitting the factorial recursion into binary halves saves space but not time. The same number of multiplications still happens.
It also saves time, but for reasons that aren't immediately obvious.
1. Memory locality. A shallower stack depth means a smaller working set, so you are working with your CPU's cache, not against it.
2. Multiplication of two large integers is not a constant-time operation. Specifically, big integer multiplication is more efficient if you are multiplying two numbers of similar magnitude, than if you're multiplying a very large number with a very small one (e.g. 23! multiplied by 24). The reason is that the big integer library uses smarter algorithms such as Karatsuba or FFT, which work more efficiently in that case.
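If you want to see the second effect yourself, here's a rough benchmark sketch (function names are mine, and the split helper is redefined so this runs standalone; exact timings vary by machine and CPython version, but the split version should win clearly for large n):

import timeit

def naive_factorial(n):
    # Left to right: each step multiplies a huge partial product by a tiny factor.
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def split_product(lo, hi):
    # Balanced halves: both operands of each multiplication have similar size.
    if lo > hi:
        return 1
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    return split_product(lo, mid) * split_product(mid + 1, hi)

n = 50_000
print("naive:", timeit.timeit(lambda: naive_factorial(n), number=1))
print("split:", timeit.timeit(lambda: split_product(1, n), number=1))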
I just learned this in school and didn't get sh**. Now I do, thanks Numberphile! :)
The sum for 1 + 2 + 3 ... n = (n^2 + n)/2, as Asaf shows us above.
But we also know that the sum of all natural numbers is -1/12, as Brady (and Prof Copeland, Prof Padilla, and others) have shown us before.
Therefore, (n^2 + n)/2 = -1/12
n^2 + n = -1/6
n^2 + n + 1/6 = 0
Using the quadratic formula, we calculate that n = -1/2 ± 1/(2√3), which approximates to n = -0.211 or n = -0.789.
So there you go, the last natural number is either -1/2 + 1/(2√3) or -1/2 - 1/(2√3).
Only for n = infinity
I mean, I appreciate the humor, but that was probably the worst Numberphile episode ever made. The series summing all the natural numbers diverges. Full stop. Only when you bring in the zeta function and analytic continuation (ζ(-1) = -1/12), or Ramanujan summation, can you make sense of the -1/12 result. And even then it is not a "traditional" sum as we would understand it.
@@tetsi0815 "Divergent series are not bad! They are useful, and they converge much faster than convergent series!" - Carl Bender
I think the confusion is that people see 1 + 2 + 3 + ... and think the "+" sign means addition, and therefore what you have to do with it is add terms up. A series is a formal object which just happens to agree with addition in the finite and convergent cases.
Wow, serendipity! this one really helped me solve a programming problem. I had no idea this video was going to be relevant to me at all. Thanks!
Actually completely forgot about strong induction. My real analysis lecturer talked about it in passing, so we didn't study it that much (as generally you will be using the ordinary principle of induction over strong induction). I was confused about how to prove the fundamental theorem of arithmetic in a recent subject, having completely forgotten about this method of induction.
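If it helps, the existence half of the fundamental theorem of arithmetic turns almost directly into strong-induction-shaped code (a sketch; prime_factors is my name):

def prime_factors(n):
    # Existence by strong induction, as a recursion (n >= 2): either n has a
    # nontrivial divisor d, so n = d * (n // d) with both factors smaller
    # than n and the strong hypothesis applying to each, or n has no such
    # divisor and is itself prime.
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return prime_factors(d) + prime_factors(n // d)
    return [n]  # no divisor up to sqrt(n), so n is prime

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]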
Wow, a very very good video. Thank you professor. This is why there needs to be a check for the first step in induction, right? That is what checks that what you are doing is well founded.
Finding equivalences in mathematics is incredibly difficult. The recursion was a great showcase and reasonably easy to understand; unfortunately, I don't think it enables me to find similar concepts anywhere else.
This is evidenced simply as, if you removed 2 from the trivial solution set, you would gain infinitely many counter-examples.
Proof by contradiction always made the most sense to me, but it's not always clear what contradiction you need to build towards in your proof. Contraposition is the real magic.
Contraposition is just a reversal of a standard if clause or assumption. The initial statement is “If P then Q”, the contraposition is “If not Q then not P”
Basically, if P is true then Q must be true, therefore the only way Q can be false is if P is also false. If P weren't false, then Q couldn't be false.
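You can brute-force that equivalence over all four truth assignments (standard logic, nothing specific to the video):

# (P -> Q) is logically equivalent to its contrapositive (not Q -> not P):
for P in (False, True):
    for Q in (False, True):
        p_implies_q = (not P) or Q                  # P -> Q
        contrapositive = (not (not Q)) or (not P)   # (not Q) -> (not P)
        assert p_implies_q == contrapositive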
@@anthonycannet1305 Did someone say ravens?
(Solved) 1 whole (1/0) or 1 base (0/1). N = 1 whole because they used square. To whole something, it has to be square. Yes, all whole numbers are square. Not in shape but in function. Circle is squared (L and H) (1 and 1). Two functions. 1 squared is 2. Parallelograms prove this. What else do Parallelograms tell you? Probability? Outcome?
I’d like more videos with this guy