I've read a lot on Gödel's incompleteness theorems, but what I love so much about what you do here is that you address those "but what if I try..." questions that naturally follow. Something I'd be interested to see more clarity on is: where do these proofs fall apart for simpler math systems, such as Euclidean geometry plus the parallel postulate, or arithmetic with addition but no multiplication?
Those systems cannot fully describe algorithms. Our proofs rely on the fact that we can make proofs about algorithms (like proving that an algorithm halts or loops), but if the system is not expressive enough to do that, the proofs do not apply to it.
Mind blown. I'm sure I need to watch this video a couple more times with lots of pauses, but still... This is one of the best explanations I've seen so far. Thank you so much for your efforts and these amazing videos, please keep them coming!
Technically you should say "NONHALTING" instead of "LOOPS" to be clear that it captures aperiodic behavior (which smith(M) certainly has when enumerating proofs). Otherwise, great explanation!
@@imadhamaidi Nothing is practical here. :-D But yes, e.g. trying to complete an incompletable proof in a sufficiently complicated system must be aperiodic.
Great, great, great, man! Can you tell me how this explanation relates to the formal proof of Gödel's second incompleteness theorem? Because to me it's mostly wizardry involving Löb's Theorem.
I could be wrong, but since maths builds on axioms, wouldn't that mean that maths must remain incomplete and/or inconsistent by necessity? The axioms limit the abilities of a system that is built on them. In other words: some or all of the axioms are incomplete and/or inconsistent. Perhaps certain pairings of axioms cause it.
Andras Jonas: Basically any sufficiently expressive axiomatic system is incomplete or else inconsistent. There are axiomatic systems which are complete and consistent (classical propositional logic & first-order classical logic), but they can't do very much mathematics and so aren't very useful all things considered. So it's not because mathematics has axioms that the Gödel theorems hold.
It means that, if maths is consistent, there are infinitely many axioms that cannot be enumerated. The isAxiom(x) function would be infinite and not reducible. Maths would be founded on an infinite, irreducible base: a giant super-axiom not definable in finite terms.
I'm confused about smith(smith). The type definition of smith is "smith: (M: program) => halt | loop". So in order to even begin running smith, you need to pass in the argument, which is also smith. However, smith is a function which takes in a program, M. Without defining what M is, you haven't completely defined the input argument. So it's incorrect to say we are running smith(smith). What you mean to say is that we want to run smith(smith(smith(smith(smith(...))))). It's clear to me that this program will loop. In fact, not only does it loop, it never even gets past constructing the memory needed to hold the input argument. It's forever allocating memory.
At 3:17 you make a statement that is not clear to me. If the program finds a proof that it halts, and consequently loops, do we really know exactly what it does on each step? If I'm an external observer, how do I know the program isn't still searching for a proof instead of looping?
Great question! You can imagine that the function "loop" is defined in a very naive way, e.g.:
function loop() {
    while (true) {
        print("Hello world!");
    }
}
Our proof that smith(smith) loops would look something like: follow the execution trace of the program. At some proof P, we will find a proof that smith(smith) halts (since we assume this proof exists). We will then execute loop, which clearly puts us into an infinite loop state. Therefore, smith(smith) must loop. To get really technical, you can draw out the finite state machine of this program (which I haven't discussed on this channel), see that it will get stuck in the infinite loop state, and translate the process of it getting into that state directly into a proof.
This is very nice. The trick that I had seen is this (using self-reference). Let G be the sentence: "If there is a proof of G, then there is an earlier proof of not G" (earlier meaning that the code of the one proof comes before the code of the second proof, if we code proofs as natural numbers).
Then we can reason: suppose there is a proof of G. Then we can check all earlier proofs to see if any of them is a proof of not G.
Case 1: There is an earlier proof of not G. Then we have proved a contradiction, since our system proves both G and not G.
Case 2: There is no earlier proof of not G. In that case, G can be seen to be false (since it claims that there is an earlier proof of not G). So we can prove that G is false; so we can prove not G. Again, we've proved a contradiction: we've proved G and we've also proved not G.
So if there is a proof of G, then our system is inconsistent.
Now, suppose there is a proof of not G. Then again we can reason as follows:
Case 1: There is an earlier proof of G. Then our system is inconsistent, since it proves both not G and G.
Case 2: There is no earlier proof of G. Then what G says is true (since it says there is no proof of G earlier than the shortest proof of not G). So we can prove G. Again, our system proves both G and not G.
It took me a while to see it, but your construction has the same property. Let G for your construction be the statement "Program P will never halt". Then it is the case that if there is a proof of G, then there is a shorter proof of not G.
Let p = "Our math system is consistent". Let q = "a proof exists showing that the Smith program loops" (so ~q = "no proof exists showing that the Smith program loops"). At 4:58 you explain how: *(p --> q) AND (p --> ~q)*. But from those two it follows that ~p. The proof is by contradiction: assume p. Then the first implication gives us q, and the second gives us ~q. We reach a contradiction; therefore ~p. Obviously I have something wrong here, so please point it out. My guess is that the implications I wrote are wrong... but if they are right... then ~p follows.
I apologize, but I am not following your last comment. Initially you said that my opening comment correctly concludes ~p... and now you are saying that it would only be correct if our math system can prove p? A proof by contradiction (used in my opening comment) allows you to start with any assumption (p)... and as soon as you reach any contradiction, you dismiss the assumption (p) [thus, no requirement to prove it]... and validly conclude the negation ~p. The end. I am sure I must be misinterpreting you... please clarify.
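For what it's worth, the purely propositional step in the exchange above checks out mechanically: from (p → q) and (p → ~q), ~p really does follow. (The subtlety lies in whether those two implications are actually available inside the system, not in the propositional logic.) A tiny illustrative truth-table check in Python:

```python
def implies(a: bool, b: bool) -> bool:
    """Material implication: a -> b."""
    return (not a) or b

# Enumerate all truth assignments; keep the rows where both premises
# (p -> q) and (p -> not q) hold. In every such row, p must be False.
rows = [(p, q) for p in (False, True) for q in (False, True)
        if implies(p, q) and implies(p, not q)]

assert rows == [(False, False), (False, True)]  # p is False in every row
```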
These videos explaining these theorems are just great. I've looked for some and never found any, and today, while I wasn't looking for anything, TH-cam targeted me with these. So awesomely clear that I need to rewatch the 3 key ones, but it's there. One thing remains: what exactly is "sufficiently expressive"?
+Carlos Eduardo Carmona Ortega While that would work, you would then have another problem: "is the system I just used to prove math consistent, also consistent?" And how would you go about proving that? Because if the system you used to "prove" math is correct is NOT consistent, then its proof is no good. So in order to show that the proof is a good proof, you would need to show that system is also a consistent system. But wait, we can't prove that system is a consistent system either! At least, not without some other even bigger system. And then you'd have no way to prove THAT one is consistent either, leading to an infinite regress in which there is always at least one math system that you cannot prove is consistent. This is the crux of the problem with trying to beat the incompleteness theorem. Every attempt will always involve something from outside your math system, which just pushes the question of consistency back one level and never actually answers it.
What is the most fundamental metric? It must exist apart from any other metric (it is sufficient when compared against no other metric). It must be the fundamental part that supports all other metrics. What is the fundamental metric? If it exists, then why does it exist?
I have been working through this for a while now and hope someone will comment on the following. I do think that this theorem makes no sense and, rather than discovering a hole in logic/math, introduces one which is pure sophistry.
1. If we consider Gödel's statement that "this statement is true but cannot be proved", it can only mean that the statement "this statement is true but cannot be proved" is itself the statement at issue (self-referencing). So, IF "this statement is true but cannot be proved" is true as per its own definition/claim, then the why and how would have to be known and discernible, or all this makes no sense.
I might be in error, but I think that Gödel presented "this statement is true but cannot be proved", via his translation of some mathematical statements by his numbering system, as the calculated consequence of some "hole" in mathematics and logic, not a word puzzle/paradox. It is supposed to be a structural phenomenon of mathematics/logic. This would mean that the mathematical statement from which it was translated claimed to be true (before translation) but could not be determined as such by means of any reasons as to how or why. This is illogical.
• The problem is that the conclusion of his translation claims to be merely just that, a translation, such that the mathematical statement of which the translation was the extension had to be known to be true, but also to have no proof, before the translation was executed, or it could not have been reflected in that translation, and that makes no sense. Here we have the tail wagging the dog.
• The problem is that this mathematical statement, which when translated claimed that it was true, would have to have had some reason to be considered true, some structural reason, which again would mean that in reflection of that structure we would have the how and why it was true, which would be the proof.
I am sorry, but I do think that, given the above, it is clear that this entire scheme is sophistry, i.e., that the logic is backward in the acceptance of this scheme. The logic by which his very propositions were defined is being denied in the manipulation of the components of the scheme so that it might be asserted and accepted. Can anyone review this one last post and let me know what you think? I just think that if there is math to prove this, it must be in error, or the semantic analogy would make sense, and it decidedly does not. Thanks.
Wait a second. In an earlier video you said that you can't prove or disprove the continuum hypothesis, which means that math system is incomplete. And in a different video you said an inconsistent math system can prove anything. So if that math system can't prove the continuum hypothesis, it's not inconsistent; in other words, it is consistent. But a consistent system can't prove its own consistency, and, as you said in this video, that means it has to be inconsistent, therefore breaking math entirely.
4:53 The proof that smith(smith) loops is circular logic. You can't enter the ring of circular logic unless you decide to. First look at the proof. Here is the proof that smith(smith) loops:
1. Notice that out of the three options, the only way for it to halt is if there is a proof that smith(smith) loops.
2. Assume there is a proof that smith(smith) loops.
3. In this case, smith(smith) halts.
4. If it halted, that must mean our assumption is untrue. Therefore there is no proof that smith(smith) loops.
...... (other stuff)
6. Therefore this proves that smith(smith) loops.
Look at step 2. In order to use proof by contradiction, you MUST first assume that there is a proof that smith loops. But you are currently trying to make a proof that smith loops. You are already assuming your conclusion is true. Even though in this example you are just saying it to show that it leads to a contradiction, you are still assuming it. You can't suppose "p" in order to prove "p". In reality, does smith(smith) loop? Only if you say it does. It's circular logic.
In reality, does smith(smith) loop? Yes, because we assumed the system is consistent, and therefore we can't have a proof that it either loops or halts without breaking consistency. We don't assume that smith(smith) loops; we assume there exists a logical proof P that proves smith(smith) loops, thus halting it (also proving the opposite) and breaking the fundamental assumption that the system is consistent (the system would have a proof both that smith(smith) loops and that it halts). This means you can't have any proof of smith(smith) looping or halting if the system is to stay consistent. With neither proof available, you have thereby shown that it loops forever. Which ultimately means the system can never prove itself consistent (the fundamental assumption) without there being a proof that smith(smith) loops, contradicting the whole system's consistency.
The proof of the first IT using a variant of the undecidability proof of the Halting Problem is nice. Regarding the proof of the second IT, I wonder whether you do not use 1-consistency of the theory ("all provable purely existential formulas are true"). You seem to need this to formalise the argument that the assumption that there is a proof that Smith halts leads to a contradiction. I may be wrong, but at least making your proof watertight seems to require at least as much effort as Gödel's original proof.
You can't prove consistency from a consistent system. You can prove: inconsistency from an inconsistent theory, consistency from an inconsistent theory, and, most surprisingly, inconsistency from some consistent theories, like PA+~cons(PA).
You have assumed that there are countably infinitely many valid mathematical proofs. But earlier, in the halting problem video, you proved with Cantor diagonalization that there are uncountably infinitely many programs. I am concerned that the axiom that there are countably infinitely many valid proofs is not a solid one.
Now what I want to know... If it's inconsistent, do unicorns exist? Can whatever we want be proven? We can't prove anything we want, so it has to be consistent. It must be incomplete.
It's a linguistic semantic problem. For example, if it's NOT good, then does mean it's bad? It goes the same as computer language too or any language for that matter. IMHO.
The Greeks simply didn't understand that infinitesimal values can add up to a finite value. If you press a key on your keyboard, your finger eventually reaches the key, even though it had to pass the first half of the distance, and the half of the half of the distance... They couldn't grasp how the finger would ever reach the key if there is always "another" half of the distance left. But I'm not sure how you think this circumstance fits this theorem.
And people suppose that calculus resolves this issue in a logical manner. Yet, when entertaining mathematics at a philosophical level, the very foundations of it are conveyed not only by logicism, but also by formalism. Unfortunately, most people prefer not to entertain the notion of paradoxes as exemplified with the assertion stated above. Heck, even simple things like 3/3 actually being 0.9999... that has to be approximated to 1 is just ignored, which is, to me, a micro-scaled incongruency (simple arithmetic) that will come to be magnified with its macro-cosmic projection in any math-related endeavour (physics, chemistry, architecture, etc.). Gödel's incompleteness theorems, the Halting problem, Cantor's set theory, the Münchhausen trilemma, Tarski's theorem and of course Zeno's paradoxes are NOT disparate observations/formulations. They demonstrate the nature of knowledge (e.g. truth, fact, etc.) at its most basic levels, wherein our knowledge papers over the cracks of epistemic indeterminacy found in ALL our observations, using tools like unqualified tautologies and the dogmatic systemization of incomplete observations (from periodic tables to taxonomies, social norms to school curricula). And thus, our progression in this concocted reality is far from stable; it is a-chasing after the wind.
This math system proves Gödel's theorem (M→G).
Gödel's theorem proves that a math system cannot prove its own consistency (G→¬Pc).
(1) Therefore, M→¬Pc.
Gödel's theorem also proved that a math system that can prove its own consistency is not consistent (G→(Pc→¬C)).
(2) Therefore, M→(Pc→¬C).
But if Pc→¬C and M→¬Pc, this means M→¬Pc→¬(¬C).
Which means M→C.
Which means this math system proves its own consistency.
Therefore, either this math system is inconsistent or Gödel's theorem has a fallacy.
This doesn't make sense at all, just like the video for the 1st theorem. The statement 'for each proof P' only makes sense if there are finitely many proofs. If there are infinitely many proofs, this will always loop, which makes any further steps of the proof meaningless.
@@SvetlozarArgirov I wouldn't say that. You'll reach the valid proof after a finite number of iterations. You just need a while loop that doesn't specify the number of iterations ahead of time.
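To illustrate the point about the while loop: iterating over an infinite enumeration is fine as long as you stop at the first hit. A small Python sketch, where the "proofs" are just integers standing in for proof codes and the proof-checking predicate is a hypothetical placeholder:

```python
from itertools import count

def search_proofs(is_proof_of_claim):
    """Scan an endless enumeration of proof codes, stopping at the first hit.

    is_proof_of_claim(p) is a placeholder for "proof p proves the claim".
    If some proof exists, the loop terminates after finitely many
    iterations, even though the enumeration itself never ends.
    """
    for p in count():  # 0, 1, 2, ... forever
        if is_proof_of_claim(p):
            return p

# Demo: pretend proof code 12345 is the first proof of our claim.
assert search_proofs(lambda p: p == 12345) == 12345
```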
"In order for our math system to be correct we have to hope that its own correctness is a problem that remains unsolved" This is where atheists become believers without a proof and contradict with theirselves. All reasoning ultimately traces back to faith in something that you cannot prove.
Yeah, no. Just because you can't prove something doesn't mean it's equally as likely as some random garbage you made up, or that it's unreasonable to act as though something that is overwhelmingly likely is true.
Blown away by how well you understand and explain it, as well as by the care put in the production and metaphors. Tough stuff but you made me get it! (I guess)
more people need to watch videos from this channel....
wow. these 3 videos on Gödel's incompleteness theorem blew my mind. your explanations are marvelous. thank god i ran into this channel! thank you so much, it's amazing work!
Undefined Behavior strikes a nice midpoint between mathematical complexity and layman understandability. (Worse than Hamkins, of course.) Here are my notes from the last 3 videos of his playlist, the only ones that were new to me. (Actually, only the last 2 are.)
th-cam.com/video/YrKLy4VN-7k/w-d-xo.html
A video that nicely and quickly explains what Gödel proved, but not HOW he proved it. There are dozens of other videos out there doing the same.
th-cam.com/video/svOTZEbj3ys/w-d-xo.html
As explained in this video, it only addresses Completeness vs. Soundness, not Consistency; that is covered in his next video.
1:00 An example of an unsound but consistent system is one that proves only one statement, and that statement is false. Obviously unsound, but it does not lead to a contradiction either.
2:55 Reducing Completeness to the Halting Problem.
Our program takes a code C and an input X, and goes down the list of all possible proofs, until it finds a proof that C(X) halts or a proof that C(X) never halts. One of the above two must be true, so the program would have to eventually halt. The amount of time it would take would be finite, but it could be any amount! However long we would wait for the answer, we would not be able to say how much time is left.
So while I cannot refute this proof, it is certainly sketchy.
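The program described above can be sketched like this. Everything here is hypothetical scaffolding: the proof enumeration and the `proves` predicate are mocked stand-ins, since a real, total proof-checking oracle for halting claims is exactly what cannot exist in general.

```python
def would_halt(C, X, proofs, proves):
    """Sketch of the decider: scan an (idealized) enumeration of all proofs
    until one settles whether C(X) halts or never halts.

    `proves(P, claim)` is a placeholder for "proof P establishes claim".
    """
    for P in proofs:
        if proves(P, ("halts", C, X)):
            return True   # found a proof that C(X) halts
        if proves(P, ("loops", C, X)):
            return False  # found a proof that C(X) never halts

# Demo with a mocked, finite proof list where proof #3 "proves" C(X) loops:
answer = would_halt("C", "X", range(5),
                    lambda P, claim: P == 3 and claim[0] == "loops")
assert answer is False
```

With a genuinely complete system and an infinite enumeration, the loop would always terminate eventually, which is exactly the reduction to the Halting Problem the note describes.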
3:40 This is where the video outdoes Hamkins. It explains clearly how our program going through all the proofs would fail, and how it directly corresponds to what Gödel proved. This is the best Turing-Gödel comparison I found on the internet.
5:00 The program for which our program could not decide whether it loops or halts must actually loop, because if it halted, there would exist a proof of that: a proof that just runs the program and follows its every step to the point of halting.
Therefore there must exist a program that loops, but we cannot prove it. A beautiful parallel to the Gödel sentence: one that does not bounce around like "heterological" or Russell's Paradox, but just is true while being unprovable. That difference bugged me; finally I see the perfect similarity of the two theorems.
th-cam.com/video/7fvkbvWaRPk/w-d-xo.html
To my surprise, this video goes around the hard math and proves the "inability to prove Consistency" with Turing again! (That does not mean I find the proof intuitive or certain.)
1:13 It starts here. We can prove that a system must be Inconsistent or Incomplete by proving that the system's Consistency implies its Incompleteness. (Cons⇒Inco ≡ ¬Cons∨Inco)
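The propositional identity in the parentheses is easy to verify mechanically; a quick, purely illustrative truth-table check:

```python
# Verify (Cons -> Inco) == ((not Cons) or Inco) over all truth assignments.
IMPLIES = {  # material implication, defined by its truth table
    (False, False): True,
    (False, True):  True,
    (True,  False): False,
    (True,  True):  True,
}

for cons in (False, True):
    for inco in (False, True):
        assert IMPLIES[(cons, inco)] == ((not cons) or inco)
```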
To achieve this, we assume math is Consistent, and prove its Incompleteness from that assumption. Create the program smith (oh, how much I hate those TH-camrs giving nonsensical names to functions!):
function smith(M: program) {
    for each proof P {
        if (P proves M(M) loops) return;
        if (P proves M(M) halts) loop();
    }
}
(I have two comments here. Firstly, if we changed M(M) to smith(M), it would be undecidable on EVERY input, which I guess is not what we want. Secondly, if we want this to be operable by a Turing Machine, we actually need to arithmetize the function being passed in: M(⸢M⸣).)
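For concreteness, here is a toy Python rendering of the pseudocode above. The proof enumeration and the two "P proves ..." checks are mocked out with placeholder callbacks, since they are the very things that cannot actually be computed in general; this is purely illustrative of the control flow.

```python
def smith(M, proofs, proves_loops, proves_halts):
    """Toy rendering of the pseudocode: scan proofs until one settles M(M).

    `proofs` is any iterable standing in for the enumeration of all proofs;
    `proves_loops(P, M)` / `proves_halts(P, M)` are placeholder predicates
    for "P proves M(M) loops" / "P proves M(M) halts".
    """
    for P in proofs:
        if proves_loops(P, M):
            return "halted"   # a proof that M(M) loops -> smith returns
        if proves_halts(P, M):
            while True:       # a proof that M(M) halts -> smith loops
                pass

# Demo with a finite mock proof list where proof #7 "proves" M(M) loops:
result = smith(smith, range(10), lambda P, M: P == 7, lambda P, M: False)
assert result == "halted"
```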
2:29 I will just call smith s. If s(s) found a proof that it loops, it would immediately halt. This would mean that there must exist a proof that it halts: the brute-force proof that just runs the program until it hits a return; or a loop();. Since our system is, by assumption, Consistent, there cannot exist a proof that s(s) loops, because it would lead to a contradiction. The mirror situation happens if we find a proof that s(s) halts: then it enters a loop, and the brute-force proof confirms it.
The conclusion is that in a Consistent system neither of those proofs can exist, and the system must be Incomplete. Cons⇒Inco
4:18 Since we have neither of those proofs, s(s) will actually loop forever over all the proofs, trying to find one. Does this count as a proof, and therefore make s immediately halt? Not exactly.
In order to formulate the new proof, we need to have earlier proven ¬∃(proof of s(s) looping), and in order to have proven ¬∃(proof of s(s) looping) we needed to prove our system to be Consistent. Therefore, if we prove that our system is Consistent, it immediately entails its own inconsistency.
What I am confused about is why this means "the (Peano) system cannot prove its consistency" instead of "the (Peano) system cannot be consistent".
Ok, I figured it may work like this: when we do not know whether or not our system is Consistent, and we run s(s) and do not see it stop, we do not know whether it will stop tomorrow (Inconsistent system) or never (Consistent system). Only if we were certain that it would never stop would we be able to formulate the proof that it loops.
Only when we did somehow prove our system Consistent, would we know it loops without waiting an eternity, and would it implode, becoming Inconsistent.
The understanding that UndefBehavior proposed, is that the fact that our system is Consistent ⊢
¬∃(proof of s(s) looping) ⊢
s(s) loops (bc can't find proof) ⊢
our system is Inconsistent.
Therefore, we are not concerned with the truth of "our system is Consistent", only its proof! And if we can prove our system's Consistency, then we can infer its Inconsistency from it!
The only non-contradictory Consistent system is an Incomplete one, in which "our system is Consistent" is one of the true-but-unprovable statements. Whoosh, I finally understand Turing's (and also more of Gödel's) Incompleteness!
This channel should really be more popular. Please keep up the excellent work, sir.
What if you have two different math systems that can prove each other consistent, and each is complete in the ways the other is incomplete? Would this mean that the math systems were inconsistent (because it would define them as being in the same system, and therefore inconsistent because the overall system proved itself correct?), or would it result in two different systems that were both correct and could be used to complete math by switching between systems?
I know it's been 3 years, but here goes for anyone else who just watched the video and is wondering (disclaimer: this is a simplified explanation and isn't perfect. I can't link resources but I'll name them below for further reading):
First of all, the two systems are both "incomplete"; they are not incomplete just in a certain way, they are fundamentally incomplete. Some math systems are considered complete by Gödel's own definition, but those are not robust enough and do not allow for enough arithmetic, so they aren't of much use for complex cases.
So if your systems can carry out a certain necessary amount of arithmetic, they are necessarily incomplete at large.
Second, about having two systems proving each other. The second incompleteness theorem implies that a system F1 cannot prove the consistency of another system F2 which proves the consistency of F1. So we cannot prove the consistency of systems in a loop.
For further reading check the articles on Gödel's incompleteness theorems on both Stanford Encyclopedia of Philosophy and on Wikipedia (Wikipedia especially answers your question under the subsection "Implications for consistency proofs")
@@amrabdelazeem9689 in the articles you cite, is there also a section about complete systems as per Gödel's theorem?
I think you oversimplified the end. It's not technically impossible to prove a math system is consistent, but you have to do that outside of the math system, just like you did when you showed the Smith program would loop forever and that it could not ever return that result.
That was one of the weird quirks of Godel's theorem. He had to prove his self-referential theorems were true statements in Russell's math system _and_ that they could not be proved by the system's axioms.
I think he said that it is impossible to prove the consistency of the system inside of said system, so it's basically the same thing you just said.
I've read a lot on Godel's incompleteness theorems, but what I love so much about what you do here is that you address those "but what if I try..." questions that naturally follow.
Something I'd be interested to see more clarity on is: where do these proofs fall apart for simpler math systems such as Euclidean geometry + parallel postulate, or arithmetic with addition but no multiplication?
Those systems cannot fully describe algorithms. Our proofs rely on the fact that we can make proofs about algorithms (like prove that an algorithm halts or loops), but if the system is not expressive enough to do that, the proofs do not apply to it.
Mind blown. I'm sure I need to watch this video couple of times again with lots of pauses, but still... This is one of the best explanations I've seen so far. Thank you so much for your efforts and these amazing videos, please keep them coming!
Technically you should say "NONHALTING" instead of "LOOPS" to be clear that it captures aperiodic behavior (which smith(M) certainly has when enumerating proofs). Otherwise, great explanation!
Is there a practical aperiodic nonhalting program?
@@imadhamaidi Nothing is practical here. :-D But yes, e.g. trying to complete an incompletable proof in a sufficiently complicated system must be aperiodic.
Great great great man!
Can you tell me how this explanation relates to the formal proof of Gödel's 2nd incompleteness theorem?
Because to me it's mostly wizardry involving Löb's Theorem.
I could be wrong, but since maths builds on axioms, wouldn't that mean that maths must remain incomplete and/or inconsistent by necessity? The axioms limit the abilities of a system that is built on them. In other words: some or all of the axioms are incomplete and/or inconsistent. Perhaps certain pairings of axioms cause it.
Andras Jonas: Basically any sufficiently expressive axiomatic system is incomplete or else inconsistent. There are axiomatic systems which are complete and consistent (classical propositional logic & first-order classical logic), but they can't do very much mathematics and so aren't very useful all things considered. So it's not because mathematics has axioms that the Gödel theorems hold.
It means that, if maths is consistent, there are infinite non enumerable axioms. The isAxiom(x) function would be infinite and non reducible. Maths would be founded on an infinite non reducible base: a giant super axiom not definable in finite terms.
The whole of YouTube could exist just to host this video, and it would still be worth it.
I'm confused about smith(smith). The type definition of smith is "smith: (M: program) => halt | loop". So in order to even begin running smith, you need to pass in the argument, which is also smith. However, smith is a function which takes in a program, M. Without defining what M is, you haven't completely defined the input argument. So it's incorrect to say we are running smith(smith). What you mean to say is that we want to run smith(smith(smith(smith(smith(...))))). It's clear to me that this program will loop. In fact, not only does it loop, it never even gets past constructing the memory needed to hold the input argument. It's forever allocating memory.
if you mean "it goes into infinite recursion" then you are correct.
At 3:17 you make a statement that is not clear to me. If the program finds a proof that it halts, and consequently loops, do we really know exactly what it does on each step? If I'm an external observer, how do I know the program isn't still searching for a proof instead of looping?
Great question! You can imagine that the function "loop" is defined in a very naive way, e.g.:

function loop()
    while (true)
    {
        print("Hello world!");
    }
end
Our proof that smith(smith) loops would look something like this: follow the execution trace of the program. At some point, we will find a proof P that smith(smith) halts (since we assume this proof exists). We will then execute loop, which clearly puts us into an infinite loop state. Therefore, smith(smith) must loop.
To get really technical, you can draw out the finite state machine of this program (which I haven't discussed on this channel), and see that it will get stuck in the infinite loop state, and we just translate the process of it getting into that state directly to a proof.
These videos are very good!
This is very nice. The trick that I had seen is this (using self-reference)
Let G be the sentence: "If there is a proof of G, then there is an earlier proof of not G"
(Earlier meaning that the code of the one proof comes before the code of the second proof, if we code proofs as natural numbers)
Then we can reason: Suppose there is a proof of G. Then we can check all earlier proofs to see if any of them is a proof of not G.
Case 1: There is an earlier proof of not G. Then we have proved a contradiction, since our system proves both G and not G.
Case 2: There is no earlier proof of not G. In that case, G can be seen to be false (since it claims that there is an earlier proof of not G). So we can prove that G is false. So we can prove not G. Again, we've proved a contradiction--we've proved G and we've also proved not G.
So if there is a proof of G, then our system is inconsistent.
Now, suppose there is a proof of not G. Then again we can reason as follows:
Case 1: There is an earlier proof of G. Then our system is inconsistent, since it proves both not G and G.
Case 2: There is no earlier proof of G. Then what G says is true (since it says there is no proof of G earlier than the shortest proof of not G). So we can prove G. Again, our system proves both G and not G.
It took me a while to see it, but your construction has the same property. Let G for your construction be the statement "Program P will never halt". Then it is the case that
G is true iff: if there is a proof of G, then there is a shorter proof of not G.
this channel is gold, you are awesome.
Let p = "Our math system is consistent"
Let q = "a proof exists showing that Smith program loops"
(so ~q = "no proof exists showing that Smith program loops")
At 4:58 you explain how: *(p --> q) AND (p--> ~q)*. But from those two it follows that ~p.
The proof is by contradiction:
Assume p. Then the first implication gives us q, and the second gives us ~q. We reach a contradiction. Therefore ~p
Obviously I have something wrong here. So please point it out. My guess is that the implications I wrote are wrong...but if they are right...then ~p follows.
So ~p = "Our math system is NOT consistent"
is correct?
I apologize but I am not following your last comment. Initially you said that my opening comment correctly concludes ~p....and now you are saying that it would only be correct if our math system can prove p?
A proof by contradiction (used in my opening comment) allows you to start with any assumption (p)... and as soon as you reach any contradiction, you dismiss the assumption (p) [thus, no requirement to prove it] ... and validly conclude the negation ~p. The end.
I am sure I should be misinterpreting you...Please clarify.
That Smith(Smith) actually loops doesn't mean there is a proof that Smith(Smith) loops. The converse is true, though.
These are great videos! Always a pleasure to see a new one in my sub box.
These videos explaining these theorems are just great.
I've looked for some and never found any, and today, while I wasn't looking for anything, YouTube targets me with these.
Awesomely clear; I still need to rewatch the 3 key ones, but it's there.
One thing remains: what exactly is "sufficiently expressive"?
So it would be possible to prove the correctness of the math system with another bigger (including more axioms) math system?
+Carlos Eduardo Carmona Ortega While that would work, you would then have another problem of "is the system I just used to prove math consistent, also consistent?" And how would you go about proving that? Because if the system you used to "prove" math is correct is NOT consistent, then its proof is no good. So in order to show that the proof is a good proof, you would need to show that system is also a consistent system.
But wait, we can't prove that system is a consistent system either! At least, not without some other even bigger system. And then you'd have no way to prove THAT one is consistent either, leading to an infinite regress in which there is always at least one math system that you cannot prove is consistent. This is the crux of the problem with trying to beat the incompleteness theorem. Every attempt will always involve something from outside your math system which just pushes the question of consistency back one level and never actually answers it.
What is the most fundamental metric?
It must exist apart from any other metric (it is sufficient when compared against no other metric).
It must be the fundamental part that supports all other metrics.
What is the fundamental metric?
If it exists, then why does it exist?
I have been working through this for a while now and hope someone will comment on the following. I do think that this theorem makes no sense and, rather than discovering a hole in logic/math, introduces one, which is pure sophistry.
1. If we consider Goedel's statement that "this statement is true but cannot be proved", it can only mean:
That the statement "this statement is true but cannot be proved" is itself the statement at issue (self-referencing). So, "if" "this statement is true but cannot be proved" is true as per its own definition/claim, then the why and how would have to be known and discernible, or all this makes no sense. I might be in error, but I think that Goedel presented "this statement is true but cannot be proved", via his translation of some mathematical statements by his number system, as the calculated consequence of some "hole" in mathematics and logic, not a word puzzle/paradox. It is supposed to be a structural phenomenon of mathematics/logic. This would mean that the mathematical statement from which it was translated claimed to be true (before translation) but could not be determined as such by means of any reasons as to how or why. This is illogical.
• The problem is that the conclusion of his translation claims to be merely just that, a translation such that the mathematical statement from which the translation was the extension, had to be known to be true but also to have no proof before the translation was executed or it could not have been reflected in that translation and that makes no sense. Here we have the tail wagging the dog.
• The problem is that this mathematical statement which when translated claimed that it was true, would have to have had some reason to be considered true, some structural reason which again, would mean that in reflection of that structure, we would have the how and why it was true which would be the proof.
I am sorry but I do think that given the above, it is clear that this entire scheme is sophistry, i.e., that the logic is backward in the acceptance of this scheme. The logic by which his very propositions were defined is being denied in the manipulation of the components of the scheme that it might be asserted and accepted.
Can anyone review this one last post and let me know what you think? I just think that if there is math to prove this, it must be in error or the semantic analogy would make sense and it decidedly does not. Thanks.
Wait a second. In an earlier video you said that you can't prove or disprove the continuum hypothesis, which means that math system is incomplete.
And in a different video you said an inconsistent math system can prove anything.
So if that math system can't prove the continuum hypothesis, it's not inconsistent, or in other words it is consistent.
But a consistent system can't prove its own consistency, and as you said in this video, that means it has to be inconsistent, therefore breaking math entirely.
4:53
The proof that smith(smith) loops is circular logic. You can't enter the ring of circular logic unless you decide to. First, look at the proof.
Here is the proof that smith(smith) loops:
1. Notice out of the three options, the only way for it to halt is if there is a proof that smith(smith) loops
2. Assume there is a proof that smith(smith) loops.
3. In this case, smith(smith) halts.
4. If it halted, that must mean our assumption is untrue. Therefore there is no proof that smith(smith) loops.
...... (other stuff)
6. Therefore this proves that smith(smith) loops.
Look at step 2. In order to use proof by contradiction, you MUST first assume that there is a proof that smith loops. But you are currently trying to make a proof that smith loops. You are already assuming your conclusion is true. Even though in this example you are just saying it to show that it leads to a contradiction, you are still assuming it. You can't suppose "p" in order to prove "p".
In reality, does smith(smith) loop? Only if you say it does. It's circular logic.
In reality, does smith(smith) loop? Yes, because we assumed the system is consistent and therefore we can't have a proof that it either loops or halts, without breaking consistency.
We don't assume that smith(smith) loops; we assume there exists a logical proof P that smith(smith) loops, thus halting it (and thereby proving the opposite) and breaking the fundamental assumption that the system is consistent (the system would have proofs of smith(smith) both looping and halting).
This means you can't have any proof of smith(smith) looping or halting, to keep the system consistent.
With neither proof available, you therefore prove that it loops forever.
Which ultimately means the system can never prove itself consistent (the fundamental assumption) without yielding a proof that smith(smith) loops, contradicting the whole system's consistency.
UB I really appreciate your time for doing these videos. It really helped me a lot to understand the concept much more clearly.
The proof of the first IT using a variant of the undecidability proof of the Halting Problem is nice. Regarding the proof of the second IT, I wonder whether you do not use 1-consistency of the theory ("all provable purely existential formulas are true"). You seem to need this to formalise the argument that the assumption that there is a proof that Smith halts leads to a contradiction. I may be wrong, but at least making your proof watertight seems to require at least as much effort as Goedel's original proof.
You can't prove consistency from a consistent system. You can prove: inconsistency from an inconsistent theory, consistency from an inconsistent theory and most surprisingly inconsistency from some consistent theories like PA+~cons(PA).
but could we proof math is consistent using a different system?
You have assumed that there are countably infinitely many valid mathematical proofs. But earlier, in the halting problem video, you proved with Cantor diagonalization that there are uncountably infinitely many programs. I am concerned that the axiom that there are countably infinitely many valid proofs is not a solid one.
Please help me, I'm confused about what consistent means. I'm also confused about "a implies b": if a is false, then how can it imply something?
A false statement vacuously implies everything; a contradiction in particular implies every statement. This property is known as the Principle of Explosion.
Which programming language did you use?
MLG_Rainbow_Doritoz Haha, I didn't. It's a weird pseudocode blend of Lua and C++.
also called lucy ;)
Now what I want to know... if it's inconsistent, do unicorns exist? Can whatever we want be proven? We can't prove anything we want, so it has to be consistent. It must be incomplete.
Havid, we sure hope it's consistent and only incomplete.
It's a linguistic/semantic problem. For example, if it's NOT good, does that mean it's bad? The same goes for computer languages, or any language for that matter. IMHO.
Great videos. I was wondering: how would you link Zeno's paradoxes to all this?
The Greeks simply didn't understand that infinitesimal values can add up to a finite value. If you press a key on your keyboard, your finger eventually reaches the key, even if it had to pass the first half of the distance, and the half of the half of the distance... They couldn't grasp how the finger would ever reach the key if there is always "another" half of the distance left. But I'm not sure how you think this circumstance fits this theorem.
And people suppose that calculus resolves this issue in a logical manner. Yet, when entertaining mathematics at a philosophical level, its very foundations are conveyed not only by logicism but also by formalism. Unfortunately, most people prefer not to entertain the notion of paradoxes, as exemplified here with the assertion stated above.
Heck, even simple things like 3/3 actually being 0.9999... that has to be approximated to being 1 are just ignored, which is, to me, a micro-scaled incongruity (simple arithmetic) that will come to be magnified with its macro-cosmic projection in any math-related endeavour (physics, chemistry, architecture, etc.).
Godel's incompleteness theorems, the Halting problem, Cantor's set theory, the Munchausen trilemma, Tarski's theorem and, of course, Zeno's paradoxes are NOT disparate observations/formulations. They demonstrate the nature of knowledge (e.g. truth, fact, etc.) at its most basic levels, wherein our knowledge papers over the cracks of epistemic indeterminacy found in ALL our observations, using tools like unqualified tautologies and the dogmatic systemization of incomplete observations (from periodic tables to taxonomies, social norms to school curricula). And thus, our progression in this concocted reality is far from being stable; it is a-chasing after the wind.
As a SW engineer, I still can't believe that I did not learn (A→B) = not(A) + B in school or university.
This math system proves Gödel's theorem (M→G)
Gödel's theorem proves that a math system cannot prove its own consistency (G→-Pc)
(1) Therefore, M→-Pc
Gödel's theorem also proved that a math system that can prove its own consistency is not consistent (G→(Pc→-C))
(2) Therefore, M→(Pc→-C)
But if Pc→-C and M→-Pc, this means M→-Pc→-(-C)
Which means M→C
Which means this math system proves its own consistency
Therefore, either this math system is inconsistent or Gödel's theorem has a fallacy
I feel like this relates to how you're not logically able to use a religious text (e.g., the Bible) to prove that the religion is true.
Exactly in the same way one does with math, science and every form of knowledge.
Saw this video about 15 times; still stuck on the smith function :)
love this proof
Godel is the first to call dibs on math!
Do it by not doing your homework!
Great videos but... am I the only one who finds the oracle intensely irritating? xD
cool cool cool. nice
This doesn't make sense at all, just as the video for the 1st theorem.
The statement "for each proof P" only makes sense if there are finitely many proofs. If there are infinitely many proofs, this will always loop, which makes any further steps of the proof meaningless.
As long as the proofs are countable (which they are), you can loop through them. If one of them is valid, the looping will stop when it's reached.
@@GumbyTheGreen1 :) ah yes, you simply need to count up to infinity.
@@SvetlozarArgirov I wouldn't say that. You'll reach the valid proof after a finite number of iterations. You just need a while loop that doesn't specify the number of iterations ahead of time.
Doesn’t have to be finite. As long as it’s countably infinite, if there is a proof of something you are trying to find, you’ll eventually find it.
@@regalrayquaza7609 you can't count to infinity.
"In order for our math system to be correct we have to hope that its own correctness is a problem that remains unsolved"
This is where atheists become believers without a proof and contradict themselves. All reasoning ultimately traces back to faith in something that you cannot prove.
Yeah, no. Just because you can't prove something doesn't mean it's equally as likely as some random garbage you made up, or that it's unreasonable to act as though something that is overwhelmingly likely is true.