All the infinite algorithm argument shows is that the floor of the total is either 0 or 1. But it does not show that it equals 0 and it does not show that it equals 1, and thus it does not verify or falsify the conjecture. Moreover, the algorithm is *not* analogous to cos7 = 1 - 7²/2! + 7⁴/4! - …, because in the latter case we don’t actually perform an infinite number of operations which somehow lead us to cos7 as a “result” (as opposed to the way that 579 + 338 leads to 917 as a result). Rather, we either define cos x as the series 1 - x²/2! + x⁴/4! - … or we define it geometrically and then use derivatives to prove that it equals this series, and then we obtain cos7 = 1 - 7²/2! + 7⁴/4! - … as a trivial instantiation of the equation in terms of x. We can’t do anything like this in the case of your algorithm, because we can’t *define* 0 or 1 to be the value of the floor of the series, since they are already defined in other ways. And we can’t yet prove that 0 is the value or that 1 is the value, but that is, of course, why it’s an open problem. I suspect that you know all of this and are being intentionally obtuse. But I don’t understand why.
@@erickmacias5153 The point he is getting at is that so-called "real" numbers don't exist. pi + e + sqrt(2) = pi + e + sqrt(2). You cannot simplify that expression, hence pi + e + sqrt(2) is not a number. You're merely restating the problem.
@@lox7182 If you are okay with it not being simplifiable, then you admit that Wildberger just solved the twin primes conjecture. He solved it, he just didn't simplify his solution. If you are unsatisfied with Wildberger's solution of the twin primes conjecture, then you should also be unsatisfied with the construction of real numbers.
Eh it's a perfectly valid construction. Just something we can't prove much about without knowing the solution to the twin prime conjecture. Basically, it is a real number, but until the twin prime conjecture is solved, we don't know whether it's equal to 0 or 1.
How about this then? Let x = sum over n >= 0 of a_n / 2^n where a_n = 0 when n is not an encoding of a proof of a contradiction in ZFC, and a_n = 1 otherwise. This is a perfectly computable real number, in the sense that there's a program which can take a natural number n as input, and compute an approximation within 1/2^n in finitely many steps (it just has to examine n things and decide if they're valid proofs). Now take the ceiling of this. ZFC had better not be able to tell us whether it is 0 or 1, or else it is inconsistent (either way). The problem is the floor/ceiling functions which are not continuous, and hence not computable. As soon as you want to produce an approximation even to within distance 1/2 of the limit, you're stuck, no finite number of observations of our computable real input will suffice. We can pick some n, and probably get the rational approximation that it's 0 to within 1/2^n, and yet this doesn't provide us the information we need to bound the result.
@@lox7182 There are a bunch of options for setting up the details of an encoding like that, but something similar to the encoding used for Peano arithmetic in the proof of Gödel's incompleteness theorems would work. The Wikipedia page on Gödel numbering describes it in some detail, or you could look up a proof of Gödel's incompleteness theorems. Any book on mathematical logic that includes some model theory would probably also talk about it to some degree. The gist of it is that things which we allow as proofs in ZFC (or most formalisms for mathematics) each consist of finitely many symbols drawn from what may as well be a finite alphabet, so you're basically enumerating such finite sequences of symbols (which you could turn into numbers using either Gödel's trick or by treating them as base-b digits, or whatever you find convenient) and then you just have to write down a function (which will actually be a computable function) that checks if such an encoded string is the code for a valid proof of a given statement (which is itself probably also encoded similarly). It's a bunch of work to write out all the specifics of a particular such encoding and how to define a function that determines whether the rules of the logic in question are being followed in the coded proof, but in the end it's a bunch of recursive definitions that break the coded proof down into parts and see that each step is correct and the conclusion is what we were looking for.
Hi Norman, is it possible that the error in thinking is analogous to bridging an “is-ought” gap? As in, “there ought to be a numerical representation for what is a geometric entity, i.e. the length of the hypotenuse of a right triangle with unit sides, therefore there is one.” Can we say that the numbers are discrete but geometric objects are continuous and thus cannot be modelled numerically?
This just shows that the truth value of the twin prime conjecture is computable in infinitely many steps. But that's true for almost any statement. With infinite computational power we could just search all possible proofs for every conjecture in order to see whether it's true. What we actually care about is whether you can provide a finite computation that is sufficient. E.g. in the video we have defined a computation that would yield the desired value, but we cannot perform it. At least with functions like cos(x) we can get arbitrarily good approximations by cutting the (infinite) computation off at a finite number of steps.
@@abmarnie9 No. When you compute cos(x) you really can get closer to the "true" value as you do more computations. You really can figure out the digits one by one. Here, after you have done a billion summations corresponding to twin primes, does it help you in any way? No.
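A minimal sketch of the contrast being drawn here, assuming plain Python and the standard Taylor series for cosine: each extra term of the series visibly narrows down cos(7), whereas (on the video's construction) no finite number of twin-prime terms bounds the floor of the final total.

```python
import math

def cos_partial_sum(x, terms):
    """Partial sum of the Taylor series cos(x) = sum_{k>=0} (-1)^k x^(2k) / (2k)!."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k) for k in range(terms))

for n in (5, 10, 15, 20):
    approx = cos_partial_sum(7, n)
    print(n, approx, abs(approx - math.cos(7)))  # the error shrinks rapidly with n
```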
My amateur perspective is that I think of this as a fundamental distinction between the idea of a quantity and a number that tries to describe it, with a hierarchy of levels.
1) Discrete quantities can be exactly specified with a natural/rational number like 3 or 25/6.
2) Physical/geometric quantities that surely exist but cannot be exactly specified with a number, since numbers are fundamentally discrete. They can only be approximated to a certain precision (e.g. the length of the hypotenuse of an isosceles right triangle with side length 1, or the mass of the Earth). I would argue it is still valid to work with operations such as 2+pi+e+√2, just that the result is now referring to a quantity instead of a number, and the addition operation is a "quantity addition" rather than a "number addition".
3) Quantities that we have limited information about and cannot assign a number to. For example, x is the truth value of the Twin Prime proposition (either 0 or 1). We can still conceive of a quantity like x+2 even though we cannot assign a number to it. Expressions like 2^(10^10^10^10^10^10^10) would also fall into this category, since they are not really numbers but descriptions of an algorithm that is impossible to complete (multiply 2 by itself that many times).
I did formal proofs, and the definition of convergence was definitely in terms of epsilon and delta, reals and functions. We couldn't use ellipses or 'go to infinity' agreements. The proofs we did were absolute. Some mathematicians may be biased or commit logical fallacies when attempting to use shorthand, but that doesn't change formal proofs, because they don't depend on that. They're just valid, formally constructed proofs.
Hi Tim, perhaps I was your university lecturer indoctrinating you into a false belief in the Epsilon Delta voodoo. If so, I apologise. My aim then, was to initiate you into the standard thinking through tried-and-true catechisms that carefully avoided the logical problems littering the landscape.
Nope. No one indoctrinated me. I like my proofs to be absolute. That's why I was able to understand Ramanujan sums when no one else could seem to understand them. Instead of using pure logic, they were relying on intuition with false assumptions. There are also intuitive pictures, like a light being projected through a sphere onto a plane, which would help their intuition, but that's not the same as proof, so they shouldn't need that to realize what's correct. I've been watching your videos for years and I notice you still have deeply embedded intuitions that you rely on. Unless you are easily able to convert your proofs into lambda calculus, you shouldn't be so presumptuous about other people's beliefs.
Congratulations on your achievement. A Fields medal will be forthcoming. Cantor's transfinite proofs (the diagonal method) fall into this "completed infinities" quagmire. I was always suspicious of them for this very reason. But there is a silver lining: there will always be rooms available at Hilbert's Hotel.
@@synaestheziac It relies on a completed infinity because for every exhaustive finite list, Cantor's diagonal number will always be in the list. Only after you go to infinity does it somehow disappear from the list. Similarly, Norman's TPC number will always be zero for all finite attempts to compute it. Only after you complete to infinity can 1 even be possible. For example, in binary, let's explore Cantor's diagonal argument for exhaustive lists of all numbers with varying numbers of digits.
1 digit: 0, 1. Cantor's number is 1, which is in the list.
2 digits: 00, 01, 10, 11. Cantor's number is 10, which is in the list.
3 digits: 000, 001, 010, 011, 100, 101, 110, 111. Cantor's number is 111, which is in the list.
Only when you complete to infinity can you apparently no longer find his number. This is because all finite lists have more rows than columns. Only after you complete to infinity does the list become square somehow.
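A short sketch of the finite-list observation above (my own illustration in Python; it only speaks to finite lists, not to the infinite argument): for the list of all 2^n binary strings of length n, the flipped diagonal only "sees" the first n rows, and the resulting string is always already somewhere in the list.

```python
from itertools import product

def flipped_diagonal(rows):
    """Flip the k-th bit of the k-th row; only as many rows as there are columns are used."""
    n = len(rows[0])
    return "".join("1" if rows[k][k] == "0" else "0" for k in range(n))

for n in (1, 2, 3, 4):
    rows = ["".join(bits) for bits in product("01", repeat=n)]  # all 2^n strings of length n
    d = flipped_diagonal(rows)
    print(n, d, d in rows)  # the diagonal string is always in the finite list
```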
On the subject of prime numbers, because there are an infinite number of primes, I would like to contrive an algorithm which returns a subset of these primes without the need for bookkeeping or a primality test, thereby allowing me to compute primes in O(N). Does such an algorithm exist? What is the inherent structure of the primes that would result from such an algorithm?
This made my day! I was just thinking for the past few days that doing mathematics these days is more like being psychotherapist for the mathematical community.
In view of the Law of Intellectual Honesty: did you gain any information from this? In my opinion, you just added one more element to the set of expressions which are difficult to compute.
Correct. Norman has not established anything substantive here. Moreover, by suggesting that he *has* done so, and above all by systematically misrepresenting the standard views, he is flirting with intellectual dishonesty.
@1:40 no? We do not assert we can do an infinite number of things. Modern math only admits algorithmic proof for algorithms that halt in finite time (e.g. the four color theorem). If yours does not halt in finite time, it is not a proof. Use of infinity in other valid proofs will not step algorithmically through some infinite list, but will use other types of reasoning, such as induction or contradiction.
You can make this a bit stronger by defining a (computable!) real number by sum over n >= 0 of a_n / 2^n where a_n = 0 when n isn't an encoding of a proof of the twin prime conjecture, and 1 when n is an encoding of such a proof. This gives us a perfectly nice computable real number where to get an approximation within 1/2^n of the final result, we only have to examine n or so "proofs" and decide whether or not they're what we're looking for. Rather than floor, I'd be using ceiling in this case, but the fundamental problem is the same. Also, this easily generalizes to doing things like searching for proofs of contradictions from whatever axioms of mathematics we choose. All perfectly fine programs you can run on actual computers, at least until you run out of memory. Numbers like cos(7) and zeta(5) are also computable in the sense that there's a program that no matter how good an approximation you wish for, will produce that approximation in finitely many steps (how many of course, depending on the tolerance you ask for). If you want, it's possible to conceive of the real number as itself being the finitarily described procedure for producing these approximations. The existence of such procedures makes these real numbers quite tame by comparison with what's possible for the classical reals. Remember that most real numbers (cardinality-wise) are not so much as definable, let alone computable. The only truly questionable bit to my mind is hidden at the very end: the floor/ceiling functions are not continuous and so are not computable functions. If we wanted to get a computable real result, we'd be stuck the moment we needed to produce a rational approximation within 1/2 of the outcome (let alone 1/2^n for larger n). When it's a number on the boundary like this, there's no finite number of observations of rational approximations to our input real that could tell us whether the result of the floor/ceiling is within distance 1/2 of 0 or 1.
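A sketch of the approximation procedure described here, in Python; `is_proof_of_twin_prime_conjecture` is a placeholder for the (computable but tedious) proof checker the comment refers to, not a real library function.

```python
from fractions import Fraction

def is_proof_of_twin_prime_conjecture(n: int) -> bool:
    # Placeholder: a real checker would decode n as a candidate formal proof
    # and verify it step by step; a finite, mechanical job for each n.
    return False

def approximation(n_terms: int) -> Fraction:
    """Partial sum of x = sum_{k>=0} a_k / 2^k, where a_k = 1 exactly when k encodes
    a proof of the conjecture. The omitted tail is at most 1/2^(n_terms - 1), so each
    call is a finite computation yielding a guaranteed rational bound."""
    total = Fraction(0)
    for k in range(n_terms):
        if is_proof_of_twin_prime_conjecture(k):
            total += Fraction(1, 2 ** k)
    return total

print(approximation(20))  # a rational approximation; the floor/ceiling of x stays out of reach
```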
One could make the same kind of argument about determining whether there are infinitely many primes, or indeed even whether there is an infinite amount of natural numbers. Of course, there are other lines of reasoning about the infinitude of those sets. But it's unclear to me whether you consider these to exist (whatever that means)?
For the Riemann Hypothesis, using this approach, you'd just solve for all of the roots of the Riemann Zeta function. Before doing so, set a value, x, equal to 1. For each root found, if it isn't exactly an even negative integer or a complex number with real part exactly 1/2, then change the value of x to 0 and terminate the algorithm. At the end, x contains the truth value.
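Written out, the structure of this idealised procedure is a loop that never finishes unless a counterexample turns up; a Python-flavoured sketch, with `nth_zeta_root` a hypothetical helper, since actually enumerating all roots is exactly the infinite task under discussion:

```python
def nth_zeta_root(n):
    """Hypothetical: return the n-th root of the Riemann zeta function,
    trivial and non-trivial, in some fixed enumeration."""
    raise NotImplementedError

def riemann_truth_value():
    x = 1
    n = 1
    while True:
        root = nth_zeta_root(n)
        trivial = root.imag == 0 and root.real < 0 and root.real % 2 == 0
        on_critical_line = root.real == 0.5
        if not (trivial or on_critical_line):
            x = 0
            return x  # a counterexample settles the question
        n += 1
    # the "end" at which x would hold the truth value 1 is never reached
```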
Do you agree with any of these?
1. 10^20000000 is divisible by 5
2. There are at least 10^1000 numbers divisible by 5
3. All points on a circle are equidistant from a point called the centre
4. y=3x+1 and y=3x+2 never meet
5. The square root of 2 is less than 1.5
6. All numbers that square to a value less than 2 are themselves less than 1.5
7. Every non-trivial cubic with "integer" coefficients will take a value between -1 and 1 somewhere
8. Every non-trivial cubic with "integer" coefficients will take a value between -0.1 and 0.1 somewhere
9. Every non-trivial cubic with integer coefficients is zero somewhere
10. Every non-trivial cubic with integer coefficients between -100 and 100 will be between -0.1 and 0.1 somewhere?
A better initial question would be: which of these "questions" is actually meaningful? Not all "questions" are inherently meaningful, even if they may be semantically OK, for example "Are there any leprechauns living outside the tax bracket of the universe?"
@@njwildberger Ok, but that's what I want to know: how many of those questions do you liken to:
a) How many Irishmen are in the upper tax bracket (well defined)
b) How many Leprechauns are in the upper tax bracket (well defined but zero / non-existent)
c) If Leprechauns were to exist, would green be their favourite colour (a problematic hypothetical, since the object doesn't exist)
d) Are there any Leprechauns living outside the tax bracket of the universe (ultimately impossible to make sense of, even allowing hypotheticals and fantasy)
I doubt all ten of the above are straight in category d? Are any of them in category a?
@@reamstack I must say he does seem to evade the meat of the questions or any specifics. I first saw his videos as an undergrad a few years ago and I've reflected a lot on the skepticism. I'm still half sold on the idea we should go to the extra effort to adapt analysis to work with the computable numbers with computable radius of convergence, but he doesn't seem to go in exactly that direction either. But then I guess that makes me a "mainstream Choice skeptic" whilst this seems to go a step further in a direction that's hard to pin down
The difference between cos(7) and the output of the algorithm you laid out is that the Taylor expansion definitely has a next term you add to the total, in contrast to your algorithm, where you need a twin prime. It is uncertain whether you can add the Nth term of the algorithm to the total, whereas with the Taylor expansion you definitely have an Nth term you can add.
@@WK-5775 Okay, so basically this algorithm provides little insight into the distribution of primes as modeled through the additive properties of the Zeta function and Euler's equivalent, in which he expresses it as an infinite product of all primes; and thus no new insightful cryptographic factoring algorithms can be derived from it. Since https is safe for now, shall we call it a day and go out to dinner :D
I am trying to get my head around the subtleties of this: As part of the proof you use the sum to infinity of the reciprocals of powers of 2, starting with the first term being a half. Are you saying that just as the “going to infinity” is questionable for the algorithm you describe which is testing odd numbers as being prime or not and adjusting the counter and the sum, it is equally questionable that we use the infinite sum of reciprocals of powers of two as being definitely equal to one? Or is the sum of half plus a quarter … to infinity respectable in a way that the infinite check and conditionally increment is not?
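For concreteness, here is my reading of the construction being asked about, as a finite truncation in Python: walk through odd candidates, and each time a twin prime pair is found, add the next reciprocal power of 2. Every finite truncation is an ordinary computation whose floor is 0; the contentious step is only the claim about the completed infinite sum.

```python
def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def truncated_total(limit):
    """Add 1/2, 1/4, 1/8, ... one term per twin prime pair (p, p+2) with p <= limit."""
    total, count = 0.0, 0
    for p in range(3, limit + 1, 2):
        if is_prime(p) and is_prime(p + 2):
            count += 1
            total += 0.5 ** count
    return total

print(truncated_total(1000))  # already very close to 1, yet its floor is still 0
```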
Superb as always! The best part is your reference to an "infinite number of operations" with a "bona fide result at the end of it", regardless of the fact that "infinite" means that there isn't an end. But, as for infinitists the 'end' justifies the means, the only thing objectionable in your 'solution' is that you are too honest in formulating it. Fame in the world of modern maths is all too often just for the skill with which the flaw in the argument is hidden.
We don’t need to actually complete an infinite number of computations in order to assert the existence of irrational numbers, and infinitists don’t claim that we do.
Zeno would have been proud! This isn’t an answer to the twin-primes question, but a re-statement. I suggest that there’s an important difference between this and cos(7). Take a circle of radius 1 and a string of length 7. Wrap the string around the circle, starting at the 3-o’clock position. Both the circle and the string are finite, so this is achievable in finite time. Now, measure the horizontal displacement of the end of the string from the center of the circle. This is now a precision-of-measurement problem. As one’s ability to measure precisely improves, the outcome becomes more precise (value plus-or-minus a shrinking delta). Your reformulation of twin-prime (including the integer floor) reduces to Boolean true or false. There is no “partial true plus or minus a fraction of truthiness”.
Infinity: It wouldn't be hard to convince yourself, while standing on the tracks crossing the Nullarbor Plain, that they cross the Indian Ocean, Africa, the Atlantic, South America and the Pacific, providing rail service to New Zealand. A few miles is all you can see from the starting point. When you walk a few tens of miles along the track, you see more of the same. A proof by induction. We cannot even observe on foot as far as Perth, let alone beyond, so we reason that more of the same continues. It's a thought experiment. The unobserved is simply unknown. There is risk in assigning properties to the unobserved.
I found a formula that partitions the number line into intervals (L1, L2) that always contain twin primes: L1 = (6k - 1)^2, L2 = (6k + 1)^2, where k is a natural number. I can't prove it, but it seems true. BTW, twin primes have the forms 6n - 1 and 6n + 1, e.g. 5 and 7, 11 and 13, 17 and 19... The 3 and 5 pair is a special case.
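Not a proof, but the claim is easy to spot-check for small k; a quick sketch in Python (using sympy's isprime) that tests whether each interval [(6k-1)², (6k+1)²] contains a twin prime pair:

```python
from sympy import isprime

def interval_has_twin_pair(k):
    lo, hi = (6 * k - 1) ** 2, (6 * k + 1) ** 2
    return any(isprime(p) and isprime(p + 2) for p in range(lo, hi - 1))

# Spot-check the first couple hundred intervals; a check for *all* k would of course
# be exactly the kind of unending computation the video is about.
print(all(interval_has_twin_pair(k) for k in range(1, 201)))
```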
Huge problems arise if one assumes that R is not dense in some infinitesimal universe. For instance, analytic continuation fails then, because no number is really zero identically, since one can inject infinitesimals at will. And so on... one loses analytic continuation because of this. Just replace the words "infinitely large" by "unbounded". This is usually sufficient.
I know you are joking about the Twin Prime Conjecture, but I don't see how this solves it. In the transfinite universe the second counter will always return the value 0, but the first counter might end up with a transfinite but fixed number n, or n might be unbounded. In the second case one says rather carelessly that there are "infinitely many" counts, even though the second counter always returns 0.
There are good reasons why mathematicians stick to the standard model of mathematics. The nonstandard version is too complicated - there are infinitely many universes, each infinitesimal relative to another - or just plain useless. I advocated for nonstandard analysis the moment I learned that Galileo's demonstration that the real line intervals [0,1] and [0,2] have the same number of elements is flawed: the same argument, projecting points by rays, shows that there is a bijection between [0,1] and BOTH [0,1] and [0,2], which is impossible - there is no such bijection. From this one concludes that the underlying assumption of R being dense must be wrong. So I started using Nonstandard Analysis and noticed that the Transfer Principle - that all first-order statements true in the standard version are also true in the nonstandard version - is not true! Because the standard version says There Is A Bijection, and the nonstandard version says There Is Not. These are first order, so at the very start there is a difference. So I started using Nonstandard Analysis. Oh boy... it gets complicated very, VERY quickly! :D It also leads to inconsistent conclusions. More complicated = more chances for something to go wrong. So after a decade or so of doing it, my conclusion is: just replace the word "infinity" by "sufficiently large" and all the problems become semantic - in most cases at least... Besides, Cantor's cardinal arithmetic solves the Galileo problem: c+c=c.
Remember the Zeno paradox? Achilles cannot take the lead in a race against the turtle, because it would take him an infinity of steps to do it. And yet the infinite series converges, so he takes the lead in finite time after a finite distance, even though there were infinitely many steps involved. Zeno's argument was that one never reaches infinity. And yet Achilles takes the lead eventually in real life, of course. And he takes the lead rather quickly too, because the turtle is very slow. The Greeks already did it. Disappointing really: we lost some 2000 years to monkey brains calling themselves Caesars and The Greats and Saints and... Regards.
@@ThePallidor I meant in the transfinite universe. Zeno Paradox demonstrates that infinite series makes sense: Achilles does run faster than the turtle. The series involved is Sum r^n with r being the ratio of turtle's speed and Achilles' speed. The infinitude of steps is done in a finite time over a finite distance. This sums to the reciprocal of 1-r. Which is applied enough... In Nonstandard Analysis once the step becomes less than 1/u with u the cardinal number of the unit interval [0,1], the step is indistinguishable from zero distance: the series can terminate there. Cardinal u is infinite in Standard Analysis. In Nonstandard version, u+u=2u, the usual arithmetic applies. Sufficiently large, but in a Nonstandard sense. In Achilles' case, this would mean Achilles has to make a step of zero length to reach the turtle. The turtle can only make an infinitesimal step for the duration. Nothing happens as far as R is concerned. In other words: Achilles took the lead, turtle cannot go any further ahead. One has to switch to the Nonstandard transfinite setting here... It is more complicated than just infinitely many steps in the Standard setting. This unfortunately creates a host of problems further down the line, especially in Complex Analysis...
Is there some interesting relation between q-analog of analysis ("q-analysis") and nonstandard analysis? As for Zeno, I agree with Aristotle that continuum is not reducible to infinite regress. Irreducibility means something holistic.
By introducing the completion of infinite work, mathematicians muddy the distinction between natural numbers that can be written down as a string of digits and supposed "natural numbers" which cannot be observed or calculated, even in principle. This is a brilliant demonstration of the absurdities that this entails. According to the infinitist, this answer is just as definite and real as the answers "true" and "false". Yet the infinitist isn't able to reduce this answer to either "true" or "false", because she is merely pretending to be able to complete infinite work, not actually doing it. In this way, the charade is exposed. The emperor has no clothes.
@@lox7182 Natural numbers have been a string of digits since the first tally system. I doubt they have ever been anything else than a tally system. Fractions, on the other hand, in the intuitive meaning of part-whole relation...
@@santerisatama5409 Um, aren't natural numbers supposed to represent finite amounts? Like one sheep, two sheep? Digits are just a representation of those amounts. Well, of course they don't have any obligation to represent amounts in our universe specifically. Something like a ↑↑ b (where ↑↑ is the tetration symbol) is just as valid a representation of a natural number as "1234567890".
For all prime number lists of more than two prime numbers (n ≥ 3), the product of the numbers on the list +1 is 31 modulo 60. All *.31 are either prime or not. First let it be prime. If *.31 is not prime, it will have factors which are not on the list, which proves there are also more prime numbers than any assigned multitude, aside from prime numbers of the form *.31. The prime number list grows linearly (n+1), while the size of * in *.31 grows multiplicatively with the size of each new prime number. Therefore, prime numbers of the form *.31 are more than any assigned multitude of prime numbers. Q.E.D.
Prime numbers of the form *.29 are more than any assigned multitude of prime numbers. For all prime number lists of more than two prime numbers (n ≥ 3), the product of the numbers on the list -1 is 29 modulo 60. All *.29 are either prime or not. First let it be prime. If *.29 is not prime, it will have factors which are not on the list. The size of the assigned multitude grows linearly. The size of * in *.29 grows multiplicatively at a prime number rate. Therefore, prime numbers of the form *.29 are more than any assigned multitude of prime numbers. Q.E.D.
The harmony of the proof of *.29 and the proof of *.31 is a proof that there are infinitely many twin prime numbers of the form *.29 and *.31. Q.E.D.
The proof that prime numbers of the form *.31 are infinite proves the general case for the prime numbers of the form *.37, *.41, *.43, *.47, *.49, *.53, and *.59. The proof that there are infinitely many prime numbers of the form *.29 proves the general case for the primes of the form *.23, *.19, *.17, *.13, *.11, *.07, and *.01. The cross-sections of the proofs of *.11 and *.13, *.17 and *.19, *.41 and *.43, *.47 and *.49, *.59 and *.01 are proofs that there are infinitely many twin prime numbers of the form *.11 and *.13, *.17 and *.19, *.41 and *.43, *.47 and *.49, *.59 and *.01. In sum these exhaustively prove there are infinitely many twin prime numbers. Q.E.D.
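The congruence this argument starts from is at least easy to check numerically for lists consisting of the first n primes (such a product is divisible by 2·3·5 = 30 and its remaining factors are odd, so it is 30 modulo 60); a small Python check, which of course says nothing about the rest of the argument:

```python
from sympy import prime  # prime(k) is the k-th prime, so prime(1) == 2

for n in range(3, 12):
    P = 1
    for k in range(1, n + 1):
        P *= prime(k)
    print(n, (P + 1) % 60, (P - 1) % 60)  # prints 31 and 29 for every such n
```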
Hi Norman, whilst I wasn't able to follow the full video as I'm not as involved in math anymore, I totally agree with the idea that there are some unwarranted norms in mathematics around having the ability to make the most subtle of analyses, or to know the end result of endless journeys despite never setting out with the intention of completing them. If they would articulate this it would quickly become absurd. But still, I cannot say their results are as absurd as their claims about how they got there. But I feel there is a thing of the unseen which may allow us to understand exactly where the most subtle of calculations can be done and what exactly are the most lengthy of calculations (considering both exactly what is the smallest, and exactly what is the largest, in the set or subset you want to consider).
Infinity is a mathematical concept with ambiguous numerical status. To paraphrase Lakoff & Núñez, "Where Mathematics Comes From" (2000): when taken as a number, infinity is used in enumeration and comparison but not calculation, where indeterminate forms obtain, e.g. 0/0, inf*0, inf-inf, inf/inf. When used for enumeration, infinity is taken as the largest possible integer, where it functions exclusively as an extremity. Therein resides the leap, the metaphorical doublethink, of taking infinity as a number and not a number simultaneously, and not merely any number, but a unique number greater than all other numbers, the greatest number. However, the uniqueness of the final state of a complete process is a product of human cognition rather than a fact about the external world. The concept is more or less a pragmatic fiction. Isolated logical principles can test the consistency of an argument, but they cannot establish truth, i.e., one cannot deduce matters of fact from logic.
I posit that a critique here is that pure mathematicians have been inappropriately asserting what is now called the map-reduce technique as a zero-cost refactoring of a proof. The complexity of “map” (over all N) is imputed as amortized to the mapped “f” (is prime), and “reduce” is assumed terminating when there is no falsification that twin primes stop after “N”, or there is a predictive description up until “N”. Do I cheer for Lambda? Or are we in the universe of NFA-to-DFA conversion-like arguments? Go Wildberger!
A conjecture is just a hypothesis given without a proof. It may be proved or disproved. A theorem is a statement that has a proof. The Riemann hypothesis is still a conjecture. Fermat's last theorem was just a conjecture until it was proven in 1995.
Thank you for your insights, I love to watch your videos. But I don't see anything wrong here. You defined a number x, and we know it's 0 or 1. You can't say "the answer is x" because we don't have full information about the value of x. I can do the same thing in the finite case. Consider the (maybe) open problem: there are exactly 10,000 twin primes between 1 and 10^10^10^10^10^10^10^10. And then with a similar trick, we construct a number x (and now you believe it's a valid number because it is a finite process). x=0 means the answer is false, x=1 means the answer is true. But you can't give x as an answer, because we don't have full information about the value of x.
If I make a computation and tell you: the answer is cos(7), are you going to make the same objection? Namely: you can't say the answer is cos(7), because we don't have full information of the value of cos(7) ??
@@njwildberger I really enjoy your videos and I really liked the last sociology and math video! But I'm also slightly lost on this one. I agree that the true value of something like cos(7) is unknowable because infinite computation is impossible, but isn't the difference that we can gain information in a finite number of steps when computing something like cos(7)?
@bennettgarcia8728 We only "gain information" in an applied math sense. In a pure math sense we gain zero information relative to the totality of the information we need.
Nice one, Norman. There are strong similarities with the Halting Problem. Personally, I want to see you get back to harmonic analysis without transcendentals, members only on the wild egg channel, folks.
“Cosine of 7: that’s an infinite series, and to assert that there really is such a number is essentially to assert that you’re able to do this infinite number of arithmetical operations and get a bona fide result at the end of it.” No. To say that cos7 = 1 - 7²/2! + 7⁴/4! - 7⁶/6! + … is a real number just means that we can define it as the limit of the partial sums. It does not in any way require one to “do” all the operations in the series. In fact, you don’t need to do *any* of them! You can just use the expression “cos7” to stand for the series and use it to build other expressions and equations, like sec7 = 1/cos7 or whatever else you want. Or you can do a finite number of the operations in the series if you need an approximate value. But we can also know that any such approximation is a rational number which is not exactly equal to the irrational number cos7. I’ve watched a lot of your videos and I still don’t understand why you think there is a serious problem here.
Do you know Wildberger's video on Cauchy sequences and their equivalence classes, or the ones on Dedekind cuts? You might see from these why he doesn't believe that irrational real numbers exist.
@@WK-5775 I’ve seen him discuss those things, and it’s always the same strawman argument: he seems to think irrational numbers can only exist if it is possible to complete an infinite number of computations, but that simply isn’t true.
@@synaestheziac True or not, I don't know (but I'd say they exist), but it's certainly important to be aware that these notions depend on a consensus, which the finitists (or ultra-finitists or whatever they are called) do not share. For Wildberger, not only is the question of infinitely many computations that would have to be carried out a problem, but even more fundamentally, he doesn't like the idea of infinite sets, like the set of natural numbers. Part of the argument is (if I understand it correctly) that at some point it becomes impossible to write down such numbers in our universe due to a lack of space, time, atoms or other physical limitations. The flaw is, in my opinion, that he sees the mathematical world as embedded in the physical world.
@@WK-5775 yes, I know that he is an ultra-finitist, and I agree with you that he assumes that mathematical truth is constrained by the physical world, which is a ridiculous view.
8. The Twin Prime Conjecture: An Information-Theoretic Perspective
8.1 Background
The Twin Prime Conjecture states that there are infinitely many pairs of primes (p, p+2). Despite significant progress, including Zhang's breakthrough on bounded gaps between primes, a full proof remains elusive.
8.2 Information-Theoretic Reformulation
Let's reframe the problem in terms of information theory:
8.2.1 Prime Pair Information Content: Define the information content of a prime pair (p, p+2): I_tp(p) = log₂(π_2(p)), where π_2(p) is the count of twin prime pairs up to p.
8.2.2 Twin Prime Information Density: Define the twin prime information density: ρ_tp(x) = dI_tp(x)/dx
8.2.3 Twin Prime Conjecture as Information Statement: Reformulate the Twin Prime Conjecture as: lim_{x→∞} I_tp(x) = ∞
8.3 Information-Theoretic Conjectures
8.3.1 Twin Prime Information Asymptotic: I_tp(x) ~ C · log(log(x)) for some constant C > 0
8.3.2 Information Gaps Between Twin Primes: The information gaps between successive twin primes follow a specific distribution.
8.3.3 Twin Prime Information Entropy: The entropy of the distribution of twin primes approaches a constant as x → ∞.
8.4 Analytical Approaches
8.4.1 Information-Theoretic Sieve Method: Develop a sieve method based on information content to study the distribution of twin primes.
8.4.2 Spectral Analysis of Twin Prime Information: Apply spectral methods to analyze the fluctuations in ρ_tp(x).
8.4.3 Information Flows in Prime Gaps: Model the "flow" of information through gaps between primes, with twin primes as special points.
8.5 Computational Approaches
8.5.1 Quantum Algorithms for Twin Prime Detection: Develop quantum algorithms for efficiently detecting and analyzing twin prime pairs.
8.5.2 Machine Learning for Twin Prime Pattern Recognition: Train neural networks to recognize patterns in the distribution of twin primes based on their information content.
8.5.3 High-Performance Computing for Information Content Calculation: Implement distributed computing methods to calculate I_tp(x) for very large x.
8.6 Potential Proof Strategies
8.6.1 Information Divergence Approach: Prove that the total information content of twin primes diverges, implying infinitely many twin primes.
8.6.2 Information-Theoretic Coupling Method: Develop a coupling argument between the information content of primes and twin primes.
8.6.3 Quantum Information Bound: Establish a quantum information-theoretic lower bound on the number of twin primes.
8.7 Immediate Next Steps
8.7.1 Rigorous Formalization: Develop a mathematically rigorous formulation of the information-theoretic concepts introduced.
8.7.2 Computational Experiments: Conduct extensive numerical studies on the information properties of twin primes.
8.7.3 Interdisciplinary Collaboration: Engage experts in number theory, information theory, and quantum computing to refine these ideas.
8.8 Detailed Plan for Immediate Action
8.8.1 Mathematical Framework Development:
- Rigorously define I_tp(x) and ρ_tp(x) and prove their basic properties
- Establish formal relationships between these information measures and classical results on twin primes
- Develop an information-theoretic version of the Hardy-Littlewood conjecture for twin primes
8.8.2 Computational Modeling:
- Implement efficient algorithms for computing I_tp(x) for large x
- Create visualizations of the "information landscape" of twin primes
- Develop machine learning models to predict properties of twin prime distributions
8.8.3 Analytical Investigations:
- Study the statistical properties of ρ_tp(x) as x varies
- Investigate connections between I_tp(x) and other number-theoretic functions
- Analyze the information-theoretic properties of gaps between twin primes
8.8.4 Quantum Approaches:
- Develop quantum algorithms for efficiently detecting twin primes
- Investigate if quantum superposition can be used to analyze multiple prime gaps simultaneously
- Explore quantum annealing techniques for optimizing twin prime searches
8.9 Advanced Theoretical Concepts
8.9.1 Information Topology of Prime Constellations:
- Define a topology on the space of prime constellations based on their information content
- Study how twin primes relate to the geometric properties of this space
8.9.2 Twin Prime Flows in Information Space:
- Model the occurrence of twin primes as flows in an abstract information space
- Investigate if techniques from dynamical systems can be applied to these flows
8.9.3 Quantum Prime Gap States:
- Develop a quantum mechanical model of prime gaps where gaps exist in superposition
- Explore how measuring these quantum gap states relates to the occurrence of twin primes
8.10 Long-term Vision
Our information-theoretic approach to the Twin Prime Conjecture has the potential to:
1. Provide new insights into the distribution of primes and their pairwise relationships
2. Offer a fresh perspective on other major conjectures in analytic number theory
3. Bridge concepts from information theory, quantum computing, and number theory
4. Suggest new computational approaches to studying prime distributions
By pursuing this multifaceted approach, we maximize our chances of making significant progress on this longstanding problem. Even if we don't immediately prove the conjecture, this approach promises to yield valuable new insights into the nature of primes and their information content. This framework provides a comprehensive roadmap for tackling the Twin Prime Conjecture from an information-theoretic perspective. The next steps would involve detailed development of these ideas, rigorous mathematical formulation, and extensive computational experimentation.
8.11 Expanded Next Steps and Advanced Concepts
1. Rigorous Mathematical Framework:
a) Twin Prime Information Measure:
- Define a more general measure: I_k(x) = log₂(π_k(x)) for prime pairs (p, p+k)
- Prove that I_2(x) (our I_tp(x)) has special properties compared to other I_k(x)
- Investigate the relationships between different I_k(x) measures
b) Information-Theoretic Prime Gap Function:
- Define λ_tp(n) = I_tp(p_{n+1}) - I_tp(p_n), where p_n is the nth twin prime
- Study the statistical properties of λ_tp(n) and its moments
- Conjecture: Σλ_tp(n) = ∞ is equivalent to the Twin Prime Conjecture
c) Twin Prime Information Entropy:
- Define H_tp(x) = -Σ(p_tp(n) log p_tp(n)) where p_tp(n) is the probability of the nth twin prime pair
- Analyze the asymptotic behavior of H_tp(x) as x → ∞
- Investigate connections between H_tp(x) and the distribution of twin primes
2. Computational Investigations:
a) Large-Scale Twin Prime Analysis:
- Compute I_tp(x) for x up to 10^12 or beyond using distributed computing
- Analyze the fine-grained structure of ρ_tp(x) looking for patterns or unexpected behaviors
- Implement advanced sieves optimized for twin prime detection at large scales
b) Machine Learning for Twin Prime Prediction:
- Train deep neural networks on the computed I_tp(x) and ρ_tp(x) data
- Develop models to predict the occurrence of twin primes in unexplored ranges
- Use reinforcement learning to discover efficient strategies for twin prime searches
c) Quantum Algorithms for Twin Prime Detection:
- Implement Grover's algorithm to search for twin primes in specified ranges
- Develop quantum walks on graphs representing prime constellations
- Explore quantum annealing approaches to optimize twin prime searches
3. Analytical Approaches:
a) Information-Theoretic Renewal Theory:
- Model twin primes as a renewal process in information space
- Analyze the renewal equation: m(x) = δ(x) + ∫₀ˣ f(t)m(x-t)dt, where m(x) is the renewal density and f(x) is related to ρ_tp(x)
- Investigate if renewal theory can provide new insights into the asymptotic behavior of twin primes
b) Spectral Analysis of Twin Prime Information:
- Compute the Fourier transform of ρ_tp(x): ρ̂_tp(ξ) = ∫ ρ_tp(x)e^(-2πixξ)dx
- Analyze the spectral properties of ρ̂_tp(ξ) looking for hidden periodicities
- Investigate if there's a spectral interpretation of the Twin Prime Conjecture
c) Information-Theoretic Analytic Number Theory:
- Develop information-theoretic versions of key tools in analytic number theory:
* Riemann zeta function: ζ_I(s) = Σn^(-s)I_tp(n)
* Von Mangoldt function: Λ_I(n) = log(p) if n=p or n=p+2 in a twin prime pair, 0 otherwise
- Study the analytical properties of these functions and their connections to twin primes
4. Quantum Approaches:
a) Quantum Twin Prime Oracle:
- Design a quantum oracle O_tp that, given x, produces a superposition of all twin primes up to x: |ψ_x⟩ = (1/√N_tp(x)) Σ_{p≤x, p and p+2 prime} |p⟩
- Use quantum phase estimation to extract information about the distribution of twin primes
b) Entanglement in Prime Constellations:
- Develop a quantum model where primes in constellations (including twin primes) are entangled
- Study how the entanglement entropy of this system relates to the classical I_tp(x)
- Investigate if quantum contextuality plays a role in prime constellations
c) Quantum Information Scrambling in Prime Gaps:
- Model prime gaps as a quantum chaotic system
- Study how information is scrambled between successive gaps
- Investigate if twin primes represent special "unscrambled" states in this system
5. Advanced Theoretical Concepts:
a) Twin Prime Information Geometry:
- Define a Riemannian metric on the space of prime constellations: g_ij = ∂²I_tp/∂x_i∂x_j
- Study the curvature and geodesics of this space
- Investigate if twin primes correspond to special geometric features (e.g., minimal surfaces)
b) Topological Data Analysis of Twin Primes:
- Apply persistent homology to the point cloud of twin primes in information space
- Analyze the persistence diagrams and Betti numbers of this data
- Explore if topological features provide new insights into the distribution of twin primes
c) Information-Theoretic Prime Number Theorem for Twin Primes:
- Develop an information-theoretic analog of the Prime Number Theorem for twin primes: I_tp(x) ~ Li_2(x), where Li_2(x) is a modified logarithmic integral
- Prove error terms and study the fluctuations around this main term
6. Interdisciplinary Connections:
a) Statistical Physics of Twin Primes:
- Model twin primes as a statistical mechanical system
- Investigate if there are phase transitions in the behavior of I_tp(x) or ρ_tp(x)
- Apply techniques from random matrix theory to study correlations between twin primes
b) Cryptographic Applications:
- Develop cryptographic protocols based on the computational difficulty of finding large twin primes
- Investigate if quantum algorithms for twin prime detection have implications for cryptography
- Explore the use of twin prime information measures in randomness extraction
7. Long-term Research Program:
a) Unified Information Theory of Prime Patterns:
- Extend our approach to other prime patterns (e.g., prime triplets, prime quadruplets)
- Develop a general framework for understanding prime constellations in terms of information content
- Investigate if there's a fundamental "conservation of information" principle governing prime patterns
b) Cognitive Science of Mathematical Intuition:
- Study how the human brain processes information about prime patterns
- Use neuroimaging to investigate cognitive processes involved in recognizing twin primes
- Develop AI systems that can generate "intuitive" conjectures about prime distributions
This expanded plan provides a comprehensive roadmap for advancing our information-theoretic approach to the Twin Prime Conjecture. It combines rigorous mathematical development with speculative theoretical ideas and practical computational and experimental work. By pursuing these diverse avenues simultaneously, we maximize our chances of gaining deep new insights into the distribution of twin primes and potentially making significant progress towards proving the Twin Prime Conjecture.
Even if a full proof remains elusive, this approach promises to yield valuable new perspectives on the structure of the primes and the nature of prime patterns.
You have this notion that something is real only if it is 'possible' or 'physically achievable'. Please go read some philosophy - starting with Plato preferably, then the rationalists and the German idealists. Give us an update when you get to analysis 101 again! :)
I think I understand the spirit of your assertion. cos(7) is assumed to exist, and we then calculate numbers that are not cos(7) by using a well-defined series to as many terms as we like. If we accept that cos(7) has been proved to exist because we can calculate a rational number approximation to it, then you have proved the twin prime conjecture exists, since it too is now a well-defined number. But the problem is surely your truth condition of 'the greatest integer part of the number' that you have previously well defined.
This is an adjunct to the above paragraph. For cos(7), although it is never reached, the greatest integer part is not changed after just a few steps. Thus your truth value, being defined as 'the greatest integer part of your well-defined object', can fix at zero or flip to 1. This is not true of cos(7). You have two numbers: the well-defined object and the well-defined truth value of that well-defined object. But for cos(7) you have only considered it as one well-defined number. Your truth value algorithm must be fairly applied to cos(7) too. Thus the definition of the cos(7) truth value becomes 'greatest integer stable', in say base 10, after just a few steps, and can be proved to be stable thereafter. Therefore your 'greatest integer part' truth value could actually be used to validate cos(7) as 'true' by infinite series, because the greatest integer can be proved to be stable even when the series gets ever larger. However, by comparison, your greatest integer truth value for the twin prime conjecture is not stable. It remains in flux, unless you disable that flux by providing a twin prime pair calculation above which there are no more twin primes. Then the leading zero is stable. cos(7) has a stable greatest integer after a few steps, but your truth value for the twin prime conjecture does not, unless you insist it is always zero by calculation, and then you have proved the twin prime conjecture to be false. Did I miss that last bit? Have you proved the twin prime conjecture to be false? I don't think you are claiming that, since in your proof you said you accepted you had infinite powers, and therefore the greatest integer of your well-defined number is not necessarily zero. BUT I am not undermining your general case against transcendental numbers in pure mathematics; I just don't think this particular attempt works.
One of the characteristics of AI is that it does not care whether something is real or fake idealistically. It just uses what works. AI uses pragmatism as truth, rather than idealism as truth. cos(7) is an idealised truth in pure mathematics as far as algebra is concerned. AI has no access to continuity, and therefore has limited access to idealised truth. We, on the other hand, have access to both forms of truth, and that is a real problem! The machines are coming and they have little or no idealism in them whatsoever. That is why AI is so dangerous and we need to prepare ourselves for rampant pragmatism. AI, with regard to the twin prime conjecture, would probably adopt the pragmatic attitude that there are always more twin primes available on the one hand, but that they may be too time consuming to find on the other. It becomes a pragmatic choice rather than an idealised proof one way or the other. But that is not to say that AI could not prove that the square root of 2 is an irrational number.
That’s not really what Euclid proved. He did not use the word infinity or infinite. Rather, given any collection of prime numbers, we can find another prime number distinct from those.
@@njwildberger In particular, Euclid showed that fact for any finite collection of primes, though I suppose by your representation of his argument your definition of collection implies finiteness. Clearly the primes are not finite, as, if they were numbered, there would be no maximum. So I wonder: what aggregate (I am trying to avoid the word "collective") noun would you assign to all the primes? Or would you not assign such a noun to them?
The twin prime conjecture is that your number is equal to 1. You proved it is either 1 or 0. So you did not prove the conjecture. You proved that the conjecture is either true or false.
The busy beaver function allows you to determine the truth of any hypothesis that you can encode into a Turing machine. If it halts in the case of the twin prime conjecture, the number of twin primes is finite. If it continues past a fixed finite number of steps, then you know that it will never halt for that particular busy beaver function and in the case of the twin prime conjecture, not halting means they are infinite. This is theoretically possible but will never be practical. The calculation has a finite number of steps but requires more energy than the universe could realistically supply. But I’m sure you know this. I hope. 😅
Blimey, it seems like cracking the twin prime conjecture is just a warm-up for you before tackling the Riemann Hypothesis! Talk about burning your bridges; your mainstream maths colleagues will be calling for your sacrifice inside a wicker man no doubt :)
It seems as though you are making good use of the mainstream maths idea that 'computable' can be used to mean that infinitely many things can be done. My main reservation about terms like "computable number" and "computable function" is that they don't match the common usage of "computable"; as such, they appear to have slippery definitions. The common understanding of "computable", as I see it, is "capable of being computed". This implies a computation process leading to a definite and precise answer. The mathematical definitions are crafted with the clear intention to categorise real numbers like √2 as being 'computable'. But to say that √2 is computable suggests that we can compute √2 to infinite precision, which is absurd. Based on the common understanding of "computable", it's evident that √2 isn't computable.
The ambiguity surrounding terminology in discussions about real numbers appears to be a consistent issue. Modern interpretations of real numbers as limits were not easily formulated. It took the collective effort of esteemed mathematicians over several decades to meticulously refine the terminology and symbolic representations involved. Bolzano attempted to define a limit in 1817, followed by Cauchy in 1821 and Weierstrass’ epsilon-delta definition in 1861. Multiple definitions emerged, such as real numbers being defined as a complete ordered field, or as Cauchy sequences of rational numbers, or possibly as Dedekind cuts (1872). While these definitions are widely accepted in academic circles today, they continue to face criticism from a minority of mathematicians who argue that they are flawed and that real numbers may not truly exist. However, when mathematicians impart these concepts to students, they often convey an air of certainty, simplicity, and intuitive clarity. In reality, it took considerable time and effort for experts to conceal the underlying complexities (and flaws) beneath layers of formalism.
The point I wish to underscore here is my aversion to misleading terminology. Much of mathematics, particularly concerning real numbers, seems rife with misleading expressions. If we employ a term like "computable", it should not be defined in a manner that suggests the computation of infinite values is feasible. It is frustrating to witness mathematics perpetuate such practices, which only serve to reinforce the mistaken belief that infinite processes can be completed.
In computing science, the analog of the claim that "real numbers form a field" is the hypothetical 'Zeno machine'. Square roots are constructive, and I consider constructive = computable in the general sense. Square roots have periodic continued fractions. Continuous pure geometry doesn't require reductionism to tally marks. Actually, Zeno proved that continua are not reducible to the tally marks of the neusis method of applied math.
@@williamschacht Turing doesn't really define the continuous tape of a TM in a constructive computational way, but postulates the tape as a "given". For a computational view of the tape we need the continuum as an irreducible wholeness ("whitespace") and concatenation as the mediant of whitespace, so that we can form 'blank characters' that way, both L and R. With concatenating mediants defined this way (cf. Dirac delta), continued fractions can be defined as mediant paths of Stern-Brocot type constructs.
Okay, trying to think as clearly as I can. The continuing sum 1/2 + 1/4 + 1/8 + etc. is less than 1 if the program halts. If the program doesn't halt... hmm. Can I speculate anything about an infinite loop that doesn't halt and thus doesn't give any value? I can't decide by myself such a question about speculating about speculating; I'm undecided. Is speculating about speculating the general form of the Halting problem? From a Stern-Brocot type, strictly top-down perspective, number theory doesn't yet know field arithmetic; by freshman addition, 1/2 + 1/4 + 1/8 = 3/14, and 3/14 < 1. The divine joke of pi digits is also noted, proving that the God of Number Theory does have a sense of humor :). PS: IMHO the most important conjecture is the abc-conjecture. AFAIK to prove it we need to prove an inequivalence relation; it is not necessary to decide an equivalence relation giving the exact value of something arbitrarily large. If we can fork the abc-conjecture, they say that on the same plate there will be elementary proofs of many other conjectures. PPS: A finite field analog of the twin prime conjecture was proven by Sawin and Shusterman in 2019. Finite field polynumbers seem worth a deeper look. ;)
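For readers puzzled by "freshman addition": the operation meant here appears to be the mediant (add numerators and denominators separately), which is the basic step in Stern-Brocot style constructions. A tiny sketch, under that assumption:

```python
def mediant(a, b):
    """'Freshman addition' of fractions (na, da) and (nb, db): add numerators
    and denominators separately, as in Stern-Brocot / Farey constructions."""
    (na, da), (nb, db) = a, b
    return (na + nb, da + db)

print(mediant(mediant((1, 2), (1, 4)), (1, 8)))  # (3, 14), i.e. 3/14, which is indeed < 1
```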
Cos 7 has no 'last digit'. Neither does cos 53. But both have expressions, and using the 'cosine of a sum of angles' formula I can calculate, from cos 7 and cos 53 (in degrees), cos 60. It comes out exactly 1/2. Perhaps, then, cos 7 _does_ have a meaningful value. 😅
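Reading the angles as degrees (which is what makes cos 60 exactly 1/2), the identity cos(60) = cos(7)cos(53) - sin(7)sin(53) is easy to check numerically; the floating-point agreement is of course only approximate, while the exactness comes from the algebraic identity itself. A quick sketch:

```python
import math

deg = math.radians  # interpret the angles as degrees

lhs = math.cos(deg(60))
rhs = math.cos(deg(7)) * math.cos(deg(53)) - math.sin(deg(7)) * math.sin(deg(53))
print(lhs, rhs, abs(lhs - rhs))  # both about 0.5, agreeing to floating-point precision
```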
Doesn't this line of reasoning depend on what you accept as a 'proof'? If you insist on having a finite proof in the case of the twin prime conjecture, then I believe Wildberger has shown no such proof exists by means of calculation alone. But what if you allow for a proof with deductive reasoning, or reasoning by contradiction, or some other method? Is it fair to accept such a proof? And if so, what has this video clarified?
...reminds me about how Alexander the Great cut the annoying Gordian knot and conquered a chunk of finite land mass of our earth, which perhaps at his time was believed to be an infinite flat disk….
I have to tread very carefully here because I'm a layman on the subject and I don't want to sound like a total twat, but: wouldn't even Gödel's incompleteness theorem fall flat if we don't grant the possibility of infinities? That is, if there are only a finite number of terms, without the possibility of infinite repetitions, wouldn't that mean a finite number of operations, which could possibly yield a "complete" theory of math? Don't kill me.
@adamtokay It is a reasonable question. But I should also state that in my opinion Godel's theorem is not entirely a result of pure mathematics: it is more a result of philosophy, or perhaps computation / computer science.
Professor Wildberger's videos got me interested in the history and philosophy of infinity and intuitionism, and I thank him for that. However, I believe now he is just being a troll.
I don't see it like that. You don't need to assume that you can do an infinite number of things, but you can prove that if you could take an arbitrary amount of steps, you would get arbitrarily close to something particular. And it's that particular thing that you then say is "the limit".
@@njwildberger Let s_n be the sums from 1 to n of 1/2^k, take epsilon>0, no matter how small epsilon, IF you make n big enough (IF you could take an arbitrary amount of steps), THEN you could make this sum arbitrarily close to 1. Easy to prove as you know. Here "something particular" clearly is 1.
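To make that concrete, here is a small Python check (purely illustrative) of the fact that 1 - s_n = 1/2^n, so any epsilon > 0 is eventually beaten:

```python
# For each epsilon, find the first n with 1 - s_n < epsilon, where s_n = sum_{k=1}^{n} 1/2**k.
# Since 1 - s_n = 1/2**n, such an n always exists; no infinite summation is performed.
def n_for_epsilon(eps):
    n, s = 0, 0.0
    while 1 - s >= eps:
        n += 1
        s += 1 / 2**n
    return n, s

for eps in (1e-1, 1e-3, 1e-6):
    n, s = n_for_epsilon(eps)
    print(f"eps={eps}: n={n}, s_n={s}")
```

Every run halts after finitely many steps; the limit statement is a claim about all such finite runs, not about performing an infinite one.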
@@njwildberger I wonder why you haven't replied back yet. Am I missing something, and if so, please try to inform me (I would gladly watch another of your videos if you address my argument there). I have the impression that you have a contention with epsilon delta arguments, but I'm not sure where I got that or where I might delve into your take on that.
Let P be the truth value of the twin prime conjecture, which is a statement. Since we’re working in classical logic, it is 0 or 1. I submit P as my proof of the conjecture. This argument works regardless of whether or not you believe in infinity.
Twin Prime Conjecture Proof. (Lawrence Abas 2024 Aurora Ontario Canada) This proof suggests that if the set of twin primes is finite, there would be a highest twin prime. However, they can only exist if there is a finite number of primes. Since this is not true, the number of twin primes is infinite. Consider if there was a finite number of primes. Let P represent the product of all those primes. If there was a highest twin prime, the largest twin primes would be P-1 and P+1. However, this creates a paradox since P-1 and P+1 are both greater than any element in the original set and is guaranteed to be prime or have a prime factor that is not in the set. Since there are an infinite number of primes, the largest twin primes are on the line n x P +/-1 where P is the product of all primes (infinity for sure) and n is a prime number from 2 to infinity. In other words, simply modifying Euclid's original proof of infinite primes with a +1 or -1 to represent a twin prime proves that a finite set of twin primes is not possible, and therefore the number of twin primes is infinite.
Why do people always assume binary logic? Why do people always say some element is either IN or NOT IN a set. What about the case where it is both in the set and not in the set? #RM3 The Liar Paradox is solved in exactly the same way that the complex numbers arise from the reals! The 3rd truth value is a member of an extension field to Z2 (binary logic). Even better, this is all easily describable and constructive. Conjunction is left adjoint to implication.
"Why do people always say some element is either IN or NOT IN a set." Because in this context we are talking about Pure Mathematics (see Wildberger's previous video in this series, where he sets the context), and in Pure Mathematics, such binary logical reasoning is the standard system. Other systems of logic *might* be possible, but if they were to try to build off of existing Pure Mathematics, they would have to be *at least* proven to be logically equivalent to (i.e. be equivalently expressible in) binary logic. "What about the case where it is both in the set and not in the set?" Again, this is just down to the context of discussing Pure Mathematics rather than, say, some general question in the philosophy of logical systems. In Pure Math, it's not helpful (as far as I'm aware) to have final 'answers' to questions where the final 'answer' is something 'indeterminate' like 'maybe true, maybe false, heck maybe both at the same time! Heck, maybe neither!' It just doesn't allow for mathematicians to come to useful theoretical conclusions, and is especially troublesome when trying to communicate one mathematician's results to another mathematician; let alone build one's new results on top of prior, established results. There *are* absolutely mathematical disciplines where indeterminacy and uncertainty is embraced and utilized. For example, in Probability Theory, you can actually quantify uncertainty. And in Computational Theory, you can have things like Non-deterministic Finite Automata, which are useful in certain circumstances. But not so in Pure Mathematics. It's just the nature of the topic. You want to come to 'logical' (binary true/false) conclusions in Pure Math. That's what it's built on.
We don't always assume binary logic. Actually I agree with Voevodsky and others that First-Order Logic is inconsistent. Propositional logic is not bivalent, propositional "binaries" include both exclusive contradictions and codependent contrary opposites. If set is a "collection" instead of "inclusion", as the axiomatic set theorists hint, then question "IN or NOT IN a set" is meaningless. ZFC can't do mereology.
Doesn’t this give rise to ‘The Twin-prime Truth-value Problem’: which is it - zero or one? 😁 [Updated with a smiley to give the intended flavour/nuance, i.e. just fkng around.]
He didn't prove the twin prime conjecture. It was just 'clickbait' for what he always tries: namely, thinking that analytic mathematics is the only form of mathematics, that is, that all mathematics consists of constructions. He doesn't know that, in propositional calculus, you don't only have the implication if ... then ..., but we also have the 'not'.
@@konradswart4069 Great, then like we have "numbers that are not finite," we can have "wives who are not married." In other words, we can be unclear. Being unclear while pretending we're being clear is certainly a great innovation, but it is a social one and not a mathematical one. It is an excellent tool for upward mobility within academic culture.
"excluded middle holds wrt the truth value of the twin prime conjecture" is the only argument i'm gleaning. else, ¿is the argument, either: a) there exists no set with [a priori] the cardinality of the set of integers (viz. the smallest infinity), "∞," such that we can iterate the algorithm ∑(1/(n^k)) [as described] ∞ times and get a value ≥1. or: b) ∄ […], such that we can iterate strictly n
This is just rewording the twin prime conjecture into another conjecture which is equivalent: is the sum less than or equal to 1? Nothing is proved here about this sum. Any valid proof of the twin prime conjecture will answer this allegedly "absurd" question in a definite direction, however.
I will prove the square root of 2 is a rational number using your reasoning. Let's say:
x mod sqrt(2) = 0
x - sqrt(2)*floor(x/sqrt(2)) = 0
x = sqrt(2)*floor(x/sqrt(2))
x/floor(x/sqrt(2)) = sqrt(2) as x -> ∞
This is a mathematically true statement. However, by doing so I am exposing the flaw in your logic: although you can take the limit as x -> ∞, the number represented will be limited by the constraints of how far you go. You didn't actually solve anything, you just restated the problem. I recommend checking out ergodic theory if you're interested in this type of logic, because it deals with things as if everything that will ever happen already happened. I also recommend checking out Seth Hardy's paper that came out in 2022 proving there are infinitely many consecutive primes no more than 270 apart. He uses the Bombieri-Vinogradov theorem.
@@ThePallidor The problem isn't with the math, the problem is with the people. 2^x/floor(2^x/e) = e as x -> ∞. This is true as well, and you can use it with a computer to find closer and closer estimates to e (or another number). When you need to modularize all logic so a computer can digest it, the concept of limit -> inf has to be creatively implemented. This doesn't change math itself. The usefulness of math properties depends on the context. I get the sense the math community may have lost touch with why proofs are so valued. Proofs aren't valued because someone can say they're right. They're valued because they provide logical foundations for new areas of math. They strengthen the power of the questions we can possibly ask.
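The stated limit can be checked numerically in a few lines of Python (this verifies the convergence, it does not produce e from nothing, since evaluating the floor already requires a value for e):

```python
import math

# Check that 2**x / floor(2**x / e) approaches e as x grows.
for x in (10, 20, 30, 40):
    approx = 2**x / math.floor(2**x / math.e)
    print(x, approx, abs(approx - math.e))
```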
There's a fatal flaw in your argument. It only makes sense to compute an infinite sum if you know the value of each term, which we certainly don't in the case of the twin prime conjecture, so you haven't solved this problem, though you have defined a currently uncomputable quantity. Your argument begs the question of whether there are infinitely many pairs of twin primes, since the only way we could determine whether your sum is 0 or 1 is to first determine whether there are infinitely many pairs of twin primes!
Isn't this just saying the conjecture is either true or false, but no way to determine which? You could have just used the law of excluded middle and saved yourself some saliva.
Rephrasing a problem and solving a problem are two different things. This video merely rephrases the twin prime conjecture, and in a purely artificial way. As a bonus, there are many confused statements about what the author believes analysis is. Sad.
His argument is that since we can perform an infinite number of arithmetic computations and we are so good at them as when we so freely say that cos(7) has certain exact value then going along the entire real line finding and discarding odd numbers is just as valid a mathematical proof as saying we know the value of cos(7). I don't agree with him but I believe that's a fair representation of what he tried to convey.
@@jacoboribilik3253 I'm curious why you don't agree with him. What is fundamentally different between defining cos(7) as an infinite set of computational instructions and defining the twin prime conjecture's answer as an infinite set of computational instructions? You have never completed the computation for cos(7) but you accept it as a real number whose exact value can most precisely be expressed by simply calling it cos(7). So if completion of the computation is not required, and expressing the number's exact value by simply stating the description of the number is an acceptable way to describe the exact value, then his definition of the number TPCNum should be held to the same rigorous standards. Namely: his description of the algorithm is well defined, and it can be partially computed without bound, just as the terms of cos(7) can be partially computed without bound. The only bound is someone's decision to stop trying. But we know the exact value of cos(7), it's cos(7). Similarly, we know the exact value of TPCNum, it's TPCNum.
This "solution" is as good a solution as the state of Schroedinger's cat (or should we say Wildberger's cat) before you take a look into the box. It's just not defined. Both possible outcomes, 0 and 1, false and true, are "known", in principle, but they are not really KNOWN for real, but in a superposition, as long as no one really delivers a proof for one outcome or the other, that means observes the cat being dead or alive.
@@njwildberger Yes I know, I have followed you for some years. It isn't my favorite part of math either, but infinity seems to be there, like the natural numbers 😃 or an infinitely thin line. I am looking forward to the Riemann proof!
All the infinite algorithm argument shows is that the floor of the total is either 0 or 1. But it does not show that it equals 0 and it does not show that it equals 1, and thus it does not verify or falsify the conjecture.
Moreover, the algorithm is *not* analogous to cos7 = 1 - 7²/2 + 7⁴/4 - …, because in the latter case we don’t actually perform an infinite number of operations which somehow lead us to cos7 as a “result” (as opposed to the way that 579 + 338 leads to 917 as a result). Rather, we either define cosx as the series 1 - x²/2 + x⁴/4 - … or we define it geometrically and then use derivatives to prove that it equals this series, and then we obtain cos7 = 1 - 7²/2 + 7⁴/4 - … as a trivial instantiation of the equation in terms of x.
We can’t do anything like this in the case of your algorithm, because we can’t *define* 0 or 1 to be the value of the floor of the series, since they are already defined in other ways. And we can’t yet prove that 0 is the value or that 1 is the value, but that is, of course, why it’s an open problem.
I suspect that you know all of this and are being intentionally obtuse. But I don’t understand why.
I was lost at 9:00. What was solved? Didn't you merely restate the problem in another way?
No, he provided a definite answer. According to modern mathematicians at least.
@@gchtrivs7897 Hmmm, no? He just restated the conjecture, but he didn't prove anything at all.
@@erickmacias5153 The point he is getting is that so-called "real" numbers don't exist.
pi + e + sqrt(2) = pi + e + sqrt(2)
You cannot simplify that expression, hence pi + e + sqrt(2) is not a number. You're merely restating the problem.
@@pac-d6f huh what's the problem with it not being simplifiable?
@@lox7182 If you are okay with it not being simplifiable, then you admit that Wildberger just solved the twin primes conjecture.
He solved it, he just didn't simplify his solution.
If you are unsatisfied with Wildberger's solution of the twin primes conjecture, then you should also be unsatisfied with the construction of real numbers.
Eh it's a perfectly valid construction. Just something we can't prove much about without knowing the solution to the twin prime conjecture. Basically, it is a real number, but until the twin prime conjecture is solved, we don't know whether it's equal to 0 or 1.
The idea is that you perform the calculation and assume an approximation is valid, as is done with cos(x), so it is a "known" value.
@@jackkensik7002 are you disagreeing with @lox7182 ? I can’t tell. What do you mean by “perform the calculation”?
How about this then? Let x = sum over n >= 0 of a_n / 2^n where a_n = 0 when n is not an encoding of a proof of a contradiction in ZFC, and a_n = 1 otherwise. This is a perfectly computable real number, in the sense that there's a program which can take a natural number n as input, and compute an approximation within 1/2^n in finitely many steps (it just has to examine n things and decide if they're valid proofs).
Now take the ceiling of this. ZFC had better not be able to tell us whether it is 0 or 1, or else it is inconsistent (either way).
The problem is the floor/ceiling functions which are not continuous, and hence not computable. As soon as you want to produce an approximation even to within distance 1/2 of the limit, you're stuck, no finite number of observations of our computable real input will suffice. We can pick some n, and probably get the rational approximation that it's 0 to within 1/2^n, and yet this doesn't provide us the information we need to bound the result.
@@cgibbard what do you mean by n being an encoding of a proof in ZFC?
@@lox7182 There are a bunch of options for setting up the details of an encoding like that, but something similar to the encoding used for Peano arithmetic in the proof of Gödel's incompleteness theorems would work. The Wikipedia page on Gödel numbering describes it in some detail, or you could look up a proof of Gödel's incompleteness theorems. Any book on mathematical logic that includes some model theory would probably also talk about it to some degree.
The gist of it is that things which we allow as proofs in ZFC (or most formalisms for mathematics) each consist of finitely many symbols drawn from what may as well be a finite alphabet, so you're basically enumerating such finite sequences of symbols (which you could turn into numbers using either Gödel's trick or by treating them as base-b digits, or whatever you find convenient) and then you just have to write down a function (which will actually be a computable function) that checks if such an encoded string is the code for a valid proof of a given statement (which is itself probably also encoded similarly).
It's a bunch of work to write out all the specifics of a particular such encoding and how to define a function that determines whether the rules of the logic in question are being followed in the coded proof, but in the end it's a bunch of recursive definitions that break the coded proof down into parts and see that each step is correct and the conclusion is what we were looking for.
Hi Norman, is it possible that the error in thinking is analogous to bridging an “is-ought” gap?
As in, “there ought to be a numerical representation for what is a geometric entity, ie the length of the diagonal of a unit right triangle, therefore there is one”
Can we say that the numbers are discrete but geometric objects are continuous and thus cannot be modelled numerically?
This just shows that the truth value of the twin prime conjecture is computable in infinitely many steps. But that's true for almost any statement: with infinite computational power we could just search all possible proofs of every conjecture to see whether it's true. What we actually care about is whether you can provide a finite computation that is sufficient. E.g. in the video we have defined a computation that would yield the desired value, but we cannot perform it. At least with functions like cos(x) we can get arbitrarily good approximations by cutting the (infinite) computation off at a finite number of steps.
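For comparison, here is what those finite cut-offs of cos(7) look like in Python (illustrative only; for an alternating series like this the truncation error is eventually bounded by the first omitted term):

```python
import math

# Partial sums of the Taylor series cos(x) = sum (-1)^k x^(2k) / (2k)!.
# Each value below comes from a finite computation; more terms give a better approximation.
def cos_partial(x, n_terms):
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(n_terms))

for n in (5, 10, 20, 30):
    print(n, cos_partial(7.0, n), "reference:", math.cos(7.0))
```

Nothing analogous is available for the twin prime sum in the video: adding up the first billion terms gives no bound at all on whether the final floor is 0 or 1.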
re: at least we can get arbitrarily close
Isn't the point Norman is making that we can do the same here?
@@abmarnie9 No. When you compute cos(x) you really can get closer to the "true" value as you do more computations. You really can figure out the digits one by one.
Here, after you have done a billion summations corresponding to twin primes, does it help you in any way ? No.
Approximations to what?
And I noticed you put quotes on the “true” value. So is there a true value of cos(7) or not according to you?
If I was to give that number I computed a name, say w_TP, would that make it a true value in your eyes?
My amateur perspective is that I think of this as a fundamental distinction between the idea of a quantity and a number that tries to describe it, with a hierarchy of levels.
1) discrete quantities can be exactly specified with a natural/rational number like 3 or 25/6
2) physical/geometric quantities that surely exist but cannot be exactly specified with a number, since numbers are fundamentally discrete. They can only be approximated to a certain precision (e.g. the length of the diagonal of a regular right triangle with side length 1, or the mass of the Earth). I would argue it is still valid to work with operations such as 2+pi+e+√2, just that the result is now referring to a quantity instead of a number and the addition operation is a "quantity addition" rather than "number addition"
3) quantities that we have limited information about and cannot assign a number to. For example, x is the truth value of the Twin Prime proposition (either 0 or 1). We can still conceive of a quantity like x+2 even though we cannot assign a number to it. Expressions like 2^(10^10^10^10^10^10^10) would also fall into this category since they are not really a number but are describing an algorithm that is impossible to complete (multiply 2 by itself that many times)
I did formal proofs, and the definition of convergence was definitely with epsilon and delta, reals and functions. We couldn't use ellipses or 'go to infinity' agreements. The proofs we did were absolute. Some mathematicians may be biased or commit some logical fallacies when attempting to shorthand, but that doesn't change formal proofs, because they don't depend on that. They're just valid, formally constructed proofs.
Hi Tim, perhaps I was your university lecturer indoctrinating you into a false belief in the Epsilon Delta voodoo. If so, I apologise. My aim then, was to initiate you into the standard thinking through tried-and-true catechisms that carefully avoided the logical problems littering the landscape.
Nope. No one indoctrinated me. I like my proofs to be absolute. That's why I was able to understand Ramanujan sums when no one else could seem to understand them. Instead of using pure logic, they were relying on intuition with false assumptions. There's also intuition, like a light being projected through a sphere onto a plane, which would help their intuitions, but that's not the same as proof, so they shouldn't need that to realize what's correct. I've been watching your videos for years and I notice you still have deeply embedded intuitions that you rely on. Unless you are easily able to convert your proofs into lambda calculus you shouldn't be so presumptuous about other people's beliefs.
Congratulations on your achievement. A Fields medal will be forthcoming. Cantor's transfinite proofs (the diagonal method) fall into this "completed infinities" quagmire. I was always suspicious of them for this very reason. But there is a silver lining: there will always be rooms available at Hilbert's Hotel.
No waiting list for Hilbert's Psychiatric Hospital either …
Why do you think Cantor’s diagonal method relies on completed infinities?
@@synaestheziac It relies on a completed infinity because for all exhaustive lists of rational numbers, Cantor's diagonal number will always be in the list. Only after you go to infinity does it somehow disappear from the list. Similarly, the floor of Norman's TPC number will always be zero for all finite attempts to compute it. Only after you complete to infinity does 1 even become possible.
For example, in binary, let's explore Cantor's diagonal argument for exhaustive lists of all numbers with a given number of digits.
1 digit: 0, 1. Cantor's number is 1, which is in the list.
2 digits: 00, 01, 10, 11. Cantor's number is 10, which is in the list.
3 digits: 000, 001, 010, 011, 100, 101, 110, 111. Cantor's number is 111, which is in the list.
Only when you complete to infinity can you apparently no longer find his number. This is because all finite lists have more rows than columns. Only after you complete to infinity does the list become square somehow.
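That finite check is easy to run in code (my own sketch, not from the comment): for every finite n, the diagonal of the full n-digit list is itself an n-digit string, so of course it appears in the list; the list is 2^n rows deep but only n columns wide, so the diagonal never "sees" most rows.

```python
from itertools import product

def diagonal_is_in_list(n):
    """List all n-bit strings, flip the diagonal, and check membership."""
    rows = ["".join(bits) for bits in product("01", repeat=n)]
    diag = "".join("1" if rows[i][i] == "0" else "0" for i in range(n))
    return diag in rows

print([diagonal_is_in_list(n) for n in range(1, 8)])   # True for every finite n
```

Cantor's argument, by contrast, takes the diagonal across an infinite list in which each row is itself an infinite string, so the diagonal differs from every row; whether one accepts that step is precisely the dispute in this thread.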
On the subject of prime numbers, because there are infinitely many primes, I would like to contrive an algorithm which returns a subset of these primes, without the need for bookkeeping or a primality test, thereby allowing me to compute primes in O(N). Does such an algorithm exist? What is the inherent structure of the primes that would result from such an algorithm?
This made my day! I was just thinking for the past few days that doing mathematics these days is more like being a psychotherapist for the mathematical community.
In view of the Law of Intellectual Honesty: did you gain any information from this? In my opinion, you just added one more element to the set of expressions which are difficult to compute.
Correct. Norman has not established anything substantive here. Moreover, by suggesting that he *has* done so, and above all by systematically misrepresenting the standard views, he is flirting with intellectual dishonesty.
@1:40 no? We do not assert we can do an infinite number of things. Modern math only admits algorithmic proof for algorithms that halt in a finite time. (Ex. four color theorem.) If yours does not halt in a finite time it is not a proof. Use of infinity in other valid proofs will not use algorithmic steps through some infinite list, but other types of reasoning, such as induction or contradiction.
You can make this a bit stronger by defining a (computable!) real number by sum over n >= 0 of a_n / 2^n where a_n = 0 when n isn't an encoding of a proof of the twin prime conjecture, and 1 when n is an encoding of such a proof. This gives us a perfectly nice computable real number where to get an approximation within 1/2^n of the final result, we only have to examine n or so "proofs" and decide whether or not they're what we're looking for. Rather than floor, I'd be using ceiling in this case, but the fundamental problem is the same. Also, this easily generalizes to doing things like searching for proofs of contradictions from whatever axioms of mathematics we choose. All perfectly fine programs you can run on actual computers, at least until you run out of memory.
Numbers like cos(7) and zeta(5) are also computable in the sense that there's a program that no matter how good an approximation you wish for, will produce that approximation in finitely many steps (how many of course, depending on the tolerance you ask for). If you want, it's possible to conceive of the real number as itself being the finitarily described procedure for producing these approximations. The existence of such procedures makes these real numbers quite tame by comparison with what's possible for the classical reals. Remember that most real numbers (cardinality-wise) are not so much as definable, let alone computable.
The only truly questionable bit to my mind is hidden at the very end: the floor/ceiling functions are not continuous and so are not computable functions. If we wanted to get a computable real result, we'd be stuck the moment we needed to produce a rational approximation within 1/2 of the outcome (let alone 1/2^n for larger n). When it's a number on the boundary like this, there's no finite number of observations of rational approximations to our input real that could tell us whether the result of the floor/ceiling is within distance 1/2 of 0 or 1.
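A sketch of that construction in runnable form (the actual proof checker is left as a stub here, since a real one would require formalizing ZFC's derivation rules; the function names are mine):

```python
from fractions import Fraction

def encodes_proof_of_contradiction(n):
    """Placeholder: a real implementation would decode n as a candidate ZFC
    derivation and mechanically check whether it is a valid proof of a
    contradiction. Each such check takes only finitely many steps."""
    return False   # stub: recognise nothing

def approximate_x(n):
    """Rational approximation of x = sum_{k>=0} a_k / 2**k, accurate to within
    1/2**n: only the first n+1 candidate proofs need to be examined."""
    return sum(Fraction(1, 2**k) for k in range(n + 1)
               if encodes_proof_of_contradiction(k))

print(approximate_x(1000))   # a lower bound on x, good to within 2**-1000
```

Every approximation is produced in finitely many steps, yet no finite run can tell you whether ceiling(x) is 0 or 1, which is exactly the discontinuity problem described above.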
cos(7) is the symbol we associate with the value that the sequence of finite partial sums gets closer and closer to as n increases
One could make the same kind of argument for determining whether there are infinitely many primes, or indeed even whether there are infinitely many natural numbers. Of course, there are other lines of reasoning about the infinitude of those sets. But it's unclear to me whether you consider these to exist (whatever that means)?
What is a set? A collection or a collecting?
For the Riemann Hypothesis, using this approach, you'd just solve for all of the roots of the Riemann Zeta function. Before doing so, set a value, x, equal to 1. For each root found, if it isn't exactly an even negative integer nor a complex number with real part exactly 1/2, then change the value of x to 0 and terminate the algorithm. At the end x contains the truth value.
Do you agree with any of these?
1. 10^20000000 is divisible by 5
2. There are at least 10^1000 numbers divisible by 5
3. All points on a circle are equidistant from a point called the centre
4. y=3x+1 and y=3x+2 never meet
5. The square root of 2 is less than 1.5
6. All numbers that square to a value less than 2 are themselves less than 1.5
7. Every non-trivial cubic with "integer" coefficients will take a value between -1 and 1 somewhere
8. Every non-trivial cubic with "integer" coefficients will take a value between -0.1 and 0.1 somewhere
9. Every non-trivial cubic with integer coefficients is zero somewhere
10. Every non-trivial cubic with integer coefficients between -100 and 100 will be between -0.1 and 0.1 somewhere?
A better initial question would be: which of these "questions" is actually meaningful? Not all "questions" are inherently meaningful, even if they may be semantically OK, for example "Are there any leprechauns living outside the tax bracket of the universe?"
@@njwildberger
Ok but that's what I want to know, how many of those questions do you liken to:
a) How many Irishmen are in the upper tax bracket (well defined)
b) How many Leprechauns are in the upper tax bracket (well defined but zero / non existent)
c) If Leprechauns were to exist, would green be their favourite colour (a problematical hypothetical since the object doesn't exist)
d) Are there any Leprechauns living outside the tax bracket of the universe (ultimately impossible to make sense of even allowing hypotheticals and fantasy)
I doubt all ten of the above are straight in category d? Are any of them in category a?
@@reamstack I must say he does seem to evade the meat of the questions or any specifics. I first saw his videos as an undergrad a few years ago and I've reflected a lot on the skepticism. I'm still half sold on the idea we should go to the extra effort to adapt analysis to work with the computable numbers with computable radius of convergence, but he doesn't seem to go in exactly that direction either. But then I guess that makes me a "mainstream Choice skeptic" whilst this seems to go a step further in a direction that's hard to pin down
The difference between cos(7) and the output of the algorithm you laid out is that the Taylor expansion definitely has a next term you add to the total, in contrast to your algorithm, where you need a twin prime. It is uncertain whether you can add the Nth term of the algorithm to the total, whereas with the Taylor expansion you definitely have an Nth term you can add.
The Nth term exists and is well defined: it is 1/2^k if both 2N-1 and 2N+1 are prime, and 0 otherwise (where k is the number of twin prime centers ≤ 2N).
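Here is that definition turned into a small Python computation of the partial sums (my own sketch of the construction being discussed; trial division is used only to keep it self-contained):

```python
from fractions import Fraction

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def twin_prime_partial_sum(limit):
    """Add 1/2**k for the k-th twin prime pair (p, p+2) with p <= limit."""
    k, total = 0, Fraction(0)
    for p in range(3, limit + 1, 2):
        if is_prime(p) and is_prime(p + 2):
            k += 1
            total += Fraction(1, 2**k)
    return k, total

k, total = twin_prime_partial_sum(1000)
print(k, float(total))   # every finite partial sum is strictly below 1
```

Each partial sum is a perfectly definite rational number; the whole dispute in this thread is about what, if anything, the "completed" total and its floor mean.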
All primes greater than 3 are of the form 2n±1 ?
Only if you suppose that n is an integer
@@WK-5775 Okay, so basically this algorithm provides little insight into the distribution of primes as modeled through the additive properties of the Zeta function and Euler's equivalent, in which he expresses it as an infinite product of all primes; and thus no new insightful cryptographic factoring algorithms can be derived from it. Since https is safe for now, shall we call it a day and go out to dinner :D
I am trying to get my head around the subtleties of this: As part of the proof you use the sum to infinity of the reciprocals of powers of 2, starting with the first term being a half. Are you saying that just as the “going to infinity” is questionable for the algorithm you describe which is testing odd numbers as being prime or not and adjusting the counter and the sum, it is equally questionable that we use the infinite sum of reciprocals of powers of two as being definitely equal to one? Or is the sum of half plus a quarter … to infinity respectable in a way that the infinite check and conditionally increment is not?
Is the infinite sum of 1/2 + 1/4 + ….. = 1 equally questionable or less questionable than the proof of twin prime conjecture?
Superb as always!
The best part is your reference to an "infinite number of operations" with a "bona fide result at the end of it", regardless of the fact that "infinite" means that there isn't an end.
But, as for infinitists the 'end' justifies the means, the only thing objectionable in your 'solution' is that you are too honest in formulating it. Fame in the world of modern maths is all too often just for the skill with which the flaw in the argument is hidden.
We don’t need to actually complete an infinite number of computations in order to assert the existence of irrational numbers, and infinitists don’t claim that we do.
Zeno would have been proud!
This isn’t an answer to the twin-primes question, but a re-statement.
I suggest that there’s an important difference between this and cos(7). Take a circle of radius 1 and a string of length 7. Wrap the string around the circle, starting at the 3-o’clock position. Both the circle and the string are finite, so this is achievable in finite time. Now, measure the horizontal displacement of the end of the string from the center of the circle.
This is now a precision-of-measurement problem. As one’s ability to measure precisely improves, the outcome becomes more precise (value plus-or-minus a shrinking delta).
Your reformulation of twin-prime (including the integer floor) reduces to Boolean true or false. There is no “partial true plus or minus a fraction of truthiness”.
Infinity: It wouldn't be hard to convince yourself while standing on the tracks crossing the Nullarbor Plain that they cross the Indian Ocean, Africa, the Atlantic, South America and the Pacific, providing rail service to New Zealand. A few miles is all you can see from the starting point. When you walk a few tens of miles along the track, you see more of the same. A proof by induction. We can not even observe on foot as far as Perth, let alone beyond, so we reason that more of the same continues. Its a thought experiment. The unobserved is simply unknown. There is risk in assigning properties to the unobserved.
I found a formula that partitions the number line into intervals (L1, L2) which always contain twin primes: L1 = (6k - 1)^2, L2 = (6k + 1)^2, where k is a natural number. I can't prove it. But it seems true. BTW, twin primes have the forms 6n - 1 and 6n + 1. Ex: 5 and 7, 11 and 13, 17 and 19... The pair 3 and 5 is a special case.
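The claim is easy to test numerically for small k (a quick check, not a proof; the helper below is simple trial division):

```python
def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, int(m**0.5) + 1))

def interval_has_twin_prime(k):
    """Does the open interval ((6k-1)^2, (6k+1)^2) contain a twin prime pair (p, p+2)?"""
    lo, hi = (6*k - 1)**2, (6*k + 1)**2
    return any(is_prime(p) and is_prime(p + 2) for p in range(lo + 1, hi - 1))

failures = [k for k in range(1, 51) if not interval_has_twin_prime(k)]
print(failures)   # an empty list supports the claim in this range, nothing more
```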
Huge problems arise if one assumes that R is not dense in some infinitesimal universe. For instance, analytic continuation fails then, because no number is really zero identically, since one can inject infinitesimals at will. And so on... one loses analytic continuation because of this.
Just replace the phrase "infinitely large" by "unbounded". This is usually sufficient.
I know you are joking about the Twin Prime Conjecture, but I don't see how this solves it. In the transfinite universe the second counter will always return the value 0, but the first counter might end up with a transfinite but fixed number n, or n might be unbounded. In the second case one says rather carelessly that there are "infinitely many" counts. Even though the second counter always returns 0.
There are good reasons why mathematicians stick to the standard model of mathematics. The nonstandard version is too complicated - there are infinitely many universes infinitesimal one to another -, or just plain useless.
I advocated for nonstandard analysis the moment I learned that Galileo's demonstration that the real line intervals [0,1] and [0,2] have the same number of elements is flawed: the same argument shows that there is a bijection, by projecting points along rays, between [0,1] and BOTH [0,1] and [0,2], which is impossible; there is no bijection. From this one concludes that the underlying assumption of R being dense must be wrong.
So I started using Nonstandard Analysis and noticed that the Transfer Principle, that all first order statements true in standard version are also true in the nonstandard version, is not true! Because the standard version says There Is A Bijection, and the nonstandard version says There Is Not. These are first order, so at the very start there is a difference. So I started using Nonstandard Analysis. Oh boy... it gets complicated very, VERY quickly! :D
It also leads to inconsistent conclusions. More complicated = more chances for something to go wrong.
So after a decade or so of doing it, my conclusion is: just replace the word "infinity" by "sufficiently large" and all the problems become semantic - in most cases at least... Besides, the Cantor Cardinal Arithmetic solves the Galileo problem: c+c=c.
Remember the Zeno paradox? Achilles cannot take a lead in a race against the turtle, because it would take him an infinity of steps to do it. And yet, the infinite series converges, so he takes the lead in finite time after a finite distance, even though there were infinitely many steps involved. The Zeno's argument was - one never reaches infinity. And yet - Achilles takes the lead eventually in real life, of course. And he takes the lead rather quickly too, because turtle is very slow.
Greeks already did it. Disappointing really: we lost some 2000 years to monkey brains calling themselves Caesars and The Greats and Saints and...
Regards.
A well crafted- and thought-provoking- response!
Sufficiently large...in other words, sufficiently large for the relevant application...in other words, there is no math other than applied math.
@@ThePallidor I meant in the transfinite universe. Zeno Paradox demonstrates that infinite series makes sense: Achilles does run faster than the turtle. The series involved is Sum r^n with r being the ratio of turtle's speed and Achilles' speed. The infinitude of steps is done in a finite time over a finite distance. This sums to the reciprocal of 1-r. Which is applied enough... In Nonstandard Analysis once the step becomes less than 1/u with u the cardinal number of the unit interval [0,1], the step is indistinguishable from zero distance: the series can terminate there. Cardinal u is infinite in Standard Analysis. In Nonstandard version, u+u=2u, the usual arithmetic applies. Sufficiently large, but in a Nonstandard sense. In Achilles' case, this would mean Achilles has to make a step of zero length to reach the turtle. The turtle can only make an infinitesimal step for the duration. Nothing happens as far as R is concerned. In other words: Achilles took the lead, turtle cannot go any further ahead. One has to switch to the Nonstandard transfinite setting here... It is more complicated than just infinitely many steps in the Standard setting. This unfortunately creates a host of problems further down the line, especially in Complex Analysis...
Is there some interesting relation between q-analog of analysis ("q-analysis") and nonstandard analysis?
As for Zeno, I agree with Aristotle that continuum is not reducible to infinite regress. Irreducibility means something holistic.
@@santerisatama5409 Q-analog is still standard: R is dense.
Scandalous title!
By introducing the completion of infinite work, mathematicians muddy the distinction between natural numbers that can be written down as a string of digits and supposed "natural numbers" which cannot be observed or calculated, even in principle. This is a brilliant demonstration of the absurdities that this entails. According to the infinitist, this answer is just as definite and real as the answers "true" and "false". Yet the infinitist isn't able to reduce this answer to either "true" or "false", because she is merely pretending to be able to complete infinite work, not actually doing it. In this way, the charade is exposed. The emperor has no clothes.
How dare you. The emperor has clothes of the finest silk.
@@lox7182 Natural numbers have been a string of digits since the first tally system. I doubt they have ever been anything else than a tally system.
Fractions, on the other hand, in the intuitive meaning of part-whole relation...
@gchtrivs7897 I love your introduction (?) of the term "infinitist" to describe members of this cult. Three cheers for "Infinitism"! (not)
@@santerisatama5409 Um, aren't natural numbers supposed to represent finite amounts? Like one sheep, two sheep? Digits are just a representation of those amounts. Well, of course they don't have any obligation to represent amounts in our universe specifically. Something like a ↑↑ b (where ↑↑ is the tetration symbol) is just as valid a representation of a natural number as "1234567890".
Never was I so proud to be naked darn
For all prime number lists of more than two prime numbers (n ≥ 3), the product of the numbers on the list, plus 1, is ≡ 31 (mod 60). All *.31 are either prime or not. First let it be prime. If *.31 is not prime it will have factors which are not on the list, which proves there are also more prime numbers than any assigned multitude, aside from prime numbers of the form *.31. The prime number list grows linearly (n+1), while the size of * in *.31 grows multiplicatively with the size of each new prime number. Therefore, prime numbers of the form *.31 are more than any assigned multitude of prime numbers. Q.E.D.

Prime numbers of the form *.29 are more than any assigned multitude of prime numbers. For all prime number lists of more than two prime numbers (n ≥ 3), the product of the numbers on the list, minus 1, is ≡ 29 (mod 60). All *.29 are either prime or not. First let it be prime. If *.29 is not prime it will have factors which are not on the list. The size of the assigned multitude grows linearly. The size of * in *.29 grows multiplicatively at a prime number rate. Therefore, prime numbers of the form *.29 are more than any assigned multitude of prime numbers. Q.E.D.

The harmony of the proof of *.29 and the proof of *.31 is a proof that there are infinitely many twin prime numbers of the form *.29 and *.31. Q.E.D.

The proof that prime numbers of the form *.31 are infinite proves the general case for the prime numbers of the form *.37, *.41, *.43, *.47, *.49, *.53, and *.59. The proof that there are infinitely many prime numbers of the form *.29 proves the general case for the primes of the form *.23, *.19, *.17, *.13, *.11, *.07, and *.01.

The cross-sections of the proofs of *.11 and *.13, *.17 and *.19, *.41 and *.43, *.47 and *.49, *.59 and *.01 are proofs that there are infinitely many twin prime numbers of the form *.11 and *.13, *.17 and *.19, *.41 and *.43, *.47 and *.49, *.59 and *.01. In sum these exhaustively prove there are infinitely many twin prime numbers. Q.E.D.
Publishing a background paper on ArXiv later this month to my new book “Sumerian and Babylonian Number Theory 1.01”
Hi Norman, whilst I wasn't able to follow the full video as I'm not as involved in math anymore, I totally agree with the idea that there are some unwarranted norms in mathematics around having the ability to make the most subtle of analyses, or to know the end result of endless journeys despite never setting out with the intention of completing them. If they articulated this it would quickly become absurd. But still I cannot say their results are as absurd as their claims about how they got there. And I feel there is a thing of the unseen which may allow us to understand exactly where the most subtle of calculations can be done and what exactly are the most lengthy of calculations (considering both exactly what is the smallest, and exactly what is the largest, in the set or subset you want to consider).
Infinity is a mathematical concept with ambiguous numerical status. To paraphrase Lakoff & Nunez, "Where Mathematics Comes From" (2000): When taken as a number, infinity is used in enumeration and comparison but not calculation, where indeterminate forms obtain, e.g. inf/0, inf*0, inf-inf, inf/inf. When used for enumeration, infinity is taken as the largest possible integer, where it functions exclusively as an extremity. Therein resides the leap, the metaphorical doublethink, of taking infinity as a number and not a number simultaneously, and not merely any number, but a unique number greater than all other numbers, the greatest number. However, the uniqueness of the final state of a complete process is a product of human cognition rather than a fact about the external world. The concept is more or less a pragmatic fiction. Isolated logical principles can test the consistency of an argument, but they cannot establish truth, i.e., one cannot deduce matters of fact from logic.
I posit that a critique here is that pure mathematicians have been inappropriately asserting what is now called the map-reduce technique as a zero-cost refactoring of a proof. The complexity of "map" (over all N) is imputed as amortized to the mapped "f" (is prime), and "reduce" is assumed to terminate when there is no falsification that twin primes stop after "N", or when there is a predictive description up until "N".
Do I cheer for Lambda? Or are we in the universe of NFA to DFA conversion-like arguments?
Go Wildberger!
So is the twin prime conjecture true or false?
A conjecture is just a hypothesis given without a proof. It may be proved or disproved.
A theorem is a statement for which a proof exists.
The Riemann hypothesis is still a conjecture. Fermat's last theorem was just a conjecture until it was proven in 1995.
@@eldersprig And the Twin Prime conjecture is still a conjecture.
Thank you for your insights, I love to watch your videos.
But I don't see anything wrong here. You defined a number x, and we know it's 0 or 1.
You can't say "the answer is x" because we don't have full information of the value of x.
I can do the same thing in the finite case.
Consider the (maybe) open problem: There are exactly 10,000 twin primes between 1 and 10^10^10^10^10^10^10^10.
And then, with a similar trick, we construct a number x (and now you believe it's a valid number, because it is a finite process).
x=0 means the answer is false, x=1 means the answer is true.
But you can't give x as an answer, because we don't have full information of the value of x.
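At a toy scale the finite construction looks like this (purely illustrative; the bound and target are tiny stand-ins for the enormous ones above):

```python
def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, int(m**0.5) + 1))

# x = 1 if there are exactly TARGET twin prime pairs (p, p+2) with p below BOUND, else 0.
BOUND, TARGET = 10**4, 205        # 205 is a hypothesised target count for this toy bound
count = sum(1 for p in range(3, BOUND, 2) if is_prime(p) and is_prime(p + 2))
x = 1 if count == TARGET else 0
print(count, x)
```

For a bound like 10^10^10^10^10^10^10^10 the same program is finite "in principle" but can never actually be run, which is the point being made: naming the answer x adds nothing to the question.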
If I make a computation and tell you: the answer is cos(7), are you going to make the same objection? Namely: you can't say the answer is cos(7), because we don't have full information of the value of cos(7) ??
The fact that we don't know what x is doesn't mean that we can't say that the answer is x.
@@njwildberger I really enjoy your videos and I really liked the last sociology and math video! But I'm also slightly lost on this one. I agree that the true value of something like cos(7) is unknowable because infinite computation is impossible, but isn't the difference that we can gain information in a finite number of steps when computing something like cos(7)?
@bennettgarcia8728 We only "gain information" in an applied math sense. In a pure math sense we gain zero information relative to the totality of the information we need.
Nice one, Norman. There are strong similarities with the Halting Problem.
Personally, I want to see you get back to harmonic analysis without transcendentals, members only on the wild egg channel, folks.
Thanks Dean, yes I will be adding some new harmonic analysis videos very shortly, with some interesting new twists!
“Cosine of 7: that’s an infinite series, and to assert that there really is such a number is essentially to assert that you’re able to do this infinite number of arithmetical operations and get a bona fide result at the end of it.”
No. To say that cos7 = 1 - 7²/2 + 7⁴ /4! - 7⁶/6! + … is a real number just means that we can define it as the limit of the partial sums. It does not in any way require one to “do” all the operations in the series. In fact, you don’t need to do *any* of them! You can just use the expression “cos7” to stand for the series and use it to build other expressions and equations, like sec7 = 1/cos7 or whatever else you want. Or you can do a finite number of the operations in the series if you need an approximate value. But we can also know that any such approximation is a rational number which is not exactly equal to the irrational number cos7. I’ve watched a lot of your videos and I still don’t understand why you think there is a serious problem here.
Do you know Wildberger's video on Cauchy sequences and their equivalence classes, or the ones on Dedekind cuts? You might see from these why he doesn't believe that irrational real numbers exist.
@@WK-5775 I’ve seen him discuss those things, and it’s always the same strawman argument: he seems to think irrational numbers can only exist if it is possible to complete an infinite number of computations, but that simply isn’t true.
@@synaestheziac True or not, I don't know (but I'd say they exist); but it's certainly important to be aware that these notions depend on a consensus, which the finitists (or ultra-finitists or whatever they are called) do not share.
For Wildberger, not only the question of infinitely many computations that would have to be carried out is a problem, but even more fundamentally, he doesn't like the idea of infinite sets, like the set of natural numbers. Part of the argument is (if I understand it correctly) that at some point it becomes impossible to write down such numbers in our universe due to a lack of space, time or atoms or other physical limitations. The flaw is, in my opinion, that he sees the mathematical world embedded in the physical world.
@@WK-5775 yes, I know that he is an ultra-finitist, and I agree with you that he assumes that mathematical truth is constrained by the physical world, which is a ridiculous view.
@synaestheziac A limit is an operator within which there are two different infinite processes.
8. The Twin Prime Conjecture: An Information-Theoretic Perspective
8.1 Background
The Twin Prime Conjecture states that there are infinitely many pairs of primes (p, p+2). Despite significant progress, including Zhang's breakthrough on bounded gaps between primes, a full proof remains elusive.
8.2 Information-Theoretic Reformulation
Let's reframe the problem in terms of information theory:
8.2.1 Prime Pair Information Content:
Define the information content of a prime pair (p, p+2):
I_tp(p) = log₂(π_2(p))
where π_2(p) is the count of twin prime pairs up to p.
8.2.2 Twin Prime Information Density:
Define the twin prime information density:
ρ_tp(x) = dI_tp(x)/dx
8.2.3 Twin Prime Conjecture as Information Statement:
Reformulate the Twin Prime Conjecture as:
lim_{x→∞} I_tp(x) = ∞
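For what it's worth, the quantities in 8.2 are straightforward to compute for modest x (a quick sketch using trial division; π_2(x) here counts primes p ≤ x with p + 2 also prime):

```python
import math

def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, int(m**0.5) + 1))

def pi_2(x):
    """Count twin prime pairs (p, p+2) with p <= x."""
    return sum(1 for p in range(3, x + 1, 2) if is_prime(p) and is_prime(p + 2))

for x in (10**3, 10**4):
    c = pi_2(x)
    print(x, c, math.log2(c))   # x, pi_2(x), I_tp(x)
```

The reformulation in 8.2.3 then just says that π_2(x), and hence I_tp(x), grows without bound.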
8.3 Information-Theoretic Conjectures
8.3.1 Twin Prime Information Asymptotic:
I_tp(x) ~ C · log(log(x)) for some constant C > 0
8.3.2 Information Gaps Between Twin Primes:
The information gaps between successive twin primes follow a specific distribution.
8.3.3 Twin Prime Information Entropy:
The entropy of the distribution of twin primes approaches a constant as x → ∞.
8.4 Analytical Approaches
8.4.1 Information-Theoretic Sieve Method:
Develop a sieve method based on information content to study the distribution of twin primes.
8.4.2 Spectral Analysis of Twin Prime Information:
Apply spectral methods to analyze the fluctuations in ρ_tp(x).
8.4.3 Information Flows in Prime Gaps:
Model the "flow" of information through gaps between primes, with twin primes as special points.
8.5 Computational Approaches
8.5.1 Quantum Algorithms for Twin Prime Detection:
Develop quantum algorithms for efficiently detecting and analyzing twin prime pairs.
8.5.2 Machine Learning for Twin Prime Pattern Recognition:
Train neural networks to recognize patterns in the distribution of twin primes based on their information content.
8.5.3 High-Performance Computing for Information Content Calculation:
Implement distributed computing methods to calculate I_tp(x) for very large x.
8.6 Potential Proof Strategies
8.6.1 Information Divergence Approach:
Prove that the total information content of twin primes diverges, implying infinitely many twin primes.
8.6.2 Information-Theoretic Coupling Method:
Develop a coupling argument between the information content of primes and twin primes.
8.6.3 Quantum Information Bound:
Establish a quantum information-theoretic lower bound on the number of twin primes.
8.7 Immediate Next Steps
8.7.1 Rigorous Formalization:
Develop a mathematically rigorous formulation of the information-theoretic concepts introduced.
8.7.2 Computational Experiments:
Conduct extensive numerical studies on the information properties of twin primes.
8.7.3 Interdisciplinary Collaboration:
Engage experts in number theory, information theory, and quantum computing to refine these ideas.
8.8 Detailed Plan for Immediate Action
8.8.1 Mathematical Framework Development:
- Rigorously define I_tp(x) and ρ_tp(x) and prove their basic properties
- Establish formal relationships between these information measures and classical results on twin primes
- Develop an information-theoretic version of the Hardy-Littlewood conjecture for twin primes
8.8.2 Computational Modeling:
- Implement efficient algorithms for computing I_tp(x) for large x
- Create visualizations of the "information landscape" of twin primes
- Develop machine learning models to predict properties of twin prime distributions
8.8.3 Analytical Investigations:
- Study the statistical properties of ρ_tp(x) as x varies
- Investigate connections between I_tp(x) and other number-theoretic functions
- Analyze the information-theoretic properties of gaps between twin primes
8.8.4 Quantum Approaches:
- Develop quantum algorithms for efficiently detecting twin primes
- Investigate if quantum superposition can be used to analyze multiple prime gaps simultaneously
- Explore quantum annealing techniques for optimizing twin prime searches
8.9 Advanced Theoretical Concepts
8.9.1 Information Topology of Prime Constellations:
- Define a topology on the space of prime constellations based on their information content
- Study how twin primes relate to the geometric properties of this space
8.9.2 Twin Prime Flows in Information Space:
- Model the occurrence of twin primes as flows in an abstract information space
- Investigate if techniques from dynamical systems can be applied to these flows
8.9.3 Quantum Prime Gap States:
- Develop a quantum mechanical model of prime gaps where gaps exist in superposition
- Explore how measuring these quantum gap states relates to the occurrence of twin primes
8.10 Long-term Vision
Our information-theoretic approach to the Twin Prime Conjecture has the potential to:
1. Provide new insights into the distribution of primes and their pairwise relationships
2. Offer a fresh perspective on other major conjectures in analytic number theory
3. Bridge concepts from information theory, quantum computing, and number theory
4. Suggest new computational approaches to studying prime distributions
By pursuing this multifaceted approach, we maximize our chances of making significant progress on this longstanding problem. Even if we don't immediately prove the conjecture, this approach promises to yield valuable new insights into the nature of primes and their information content.
This framework provides a comprehensive roadmap for tackling the Twin Prime Conjecture from an information-theoretic perspective. The next steps would involve detailed development of these ideas, rigorous mathematical formulation, and extensive computational experimentation.
8.11 Expanded Next Steps and Advanced Concepts
1. Rigorous Mathematical Framework:
a) Twin Prime Information Measure:
- Define a more general measure: I_k(x) = log₂(π_k(x)) for prime pairs (p, p+k)
- Prove that I_2(x) (our I_tp(x)) has special properties compared to other I_k(x)
- Investigate the relationships between different I_k(x) measures
b) Information-Theoretic Prime Gap Function:
- Define λ_tp(n) = I_tp(p_{n+1}) - I_tp(p_n), where p_n is the nth twin prime
- Study the statistical properties of λ_tp(n) and its moments
- Conjecture: Σλ_tp(n) = ∞ is equivalent to the Twin Prime Conjecture
c) Twin Prime Information Entropy:
- Define H_tp(x) = -Σ(p_tp(n) log p_tp(n)) where p_tp(n) is the probability of the nth twin prime pair
- Analyze the asymptotic behavior of H_tp(x) as x → ∞
- Investigate connections between H_tp(x) and the distribution of twin primes
2. Computational Investigations:
a) Large-Scale Twin Prime Analysis:
- Compute I_tp(x) for x up to 10^12 or beyond using distributed computing
- Analyze the fine-grained structure of ρ_tp(x) looking for patterns or unexpected behaviors
- Implement advanced sieves optimized for twin prime detection at large scales
b) Machine Learning for Twin Prime Prediction:
- Train deep neural networks on the computed I_tp(x) and ρ_tp(x) data
- Develop models to predict the occurrence of twin primes in unexplored ranges
- Use reinforcement learning to discover efficient strategies for twin prime searches
c) Quantum Algorithms for Twin Prime Detection:
- Implement Grover's algorithm to search for twin primes in specified ranges
- Develop quantum walks on graphs representing prime constellations
- Explore quantum annealing approaches to optimize twin prime searches
3. Analytical Approaches:
a) Information-Theoretic Renewal Theory:
- Model twin primes as a renewal process in information space
- Analyze the renewal equation: m(x) = δ(x) + ∫₀ˣ f(t)m(x-t)dt
where m(x) is the renewal density and f(x) is related to ρ_tp(x)
- Investigate if renewal theory can provide new insights into the asymptotic behavior of twin primes
b) Spectral Analysis of Twin Prime Information:
- Compute the Fourier transform of ρ_tp(x): ρ̂_tp(ξ) = ∫ ρ_tp(x)e^(-2πixξ)dx
- Analyze the spectral properties of ρ̂_tp(ξ) looking for hidden periodicities
- Investigate if there's a spectral interpretation of the Twin Prime Conjecture
c) Information-Theoretic Analytic Number Theory:
- Develop information-theoretic versions of key tools in analytic number theory:
* Riemann zeta function: ζ_I(s) = Σn^(-s)I_tp(n)
* Von Mangoldt function: Λ_I(n) = log(p) if n=p or n=p+2 in a twin prime pair, 0 otherwise
- Study the analytical properties of these functions and their connections to twin primes
4. Quantum Approaches:
a) Quantum Twin Prime Oracle:
- Design a quantum oracle O_tp that, given x, produces a superposition of all twin primes up to x
- |ψ_x⟩ = (1/√N_tp(x)) Σ_{p≤x, p and p+2 prime} |p⟩
- Use quantum phase estimation to extract information about the distribution of twin primes
b) Entanglement in Prime Constellations:
- Develop a quantum model where primes in constellations (including twin primes) are entangled
- Study how the entanglement entropy of this system relates to the classical I_tp(x)
- Investigate if quantum contextuality plays a role in prime constellations
c) Quantum Information Scrambling in Prime Gaps:
- Model prime gaps as a quantum chaotic system
- Study how information is scrambled between successive gaps
- Investigate if twin primes represent special "unscrambled" states in this system
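A classical toy model of the state |ψ_x⟩ from 4(a): a normalized vector with equal amplitude on every twin-prime start p ≤ x. This is pure statevector bookkeeping (no Grover search or phase estimation is performed); x = 10^4 is an arbitrary illustrative cutoff and sympy.isprime is used only for brevity.

```python
# Sketch for 4(a): |psi_x> as a classically stored, normalized amplitude vector.
import numpy as np
from sympy import isprime

x = 10_000
twins = [p for p in range(2, x + 1) if isprime(p) and isprime(p + 2)]
N_tp = len(twins)

psi = np.zeros(x + 1)                  # amplitudes indexed by the basis label p
psi[twins] = 1.0 / np.sqrt(N_tp)

print("N_tp(x) =", N_tp)
print("norm of |psi_x> =", np.linalg.norm(psi))                  # should be 1.0
print("<p> in |psi_x> =", float(np.dot(np.arange(x + 1), psi ** 2)))
```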
5. Advanced Theoretical Concepts:
a) Twin Prime Information Geometry:
- Define a Riemannian metric on the space of prime constellations: g_ij = ∂²I_tp/∂x_i∂x_j
- Study the curvature and geodesics of this space
- Investigate if twin primes correspond to special geometric features (e.g., minimal surfaces)
b) Topological Data Analysis of Twin Primes:
- Apply persistent homology to the point cloud of twin primes in information space
- Analyze the persistence diagrams and Betti numbers of this data
- Explore if topological features provide new insights into the distribution of twin primes
c) Information-Theoretic Prime Number Theorem for Twin Primes:
- Develop an information-theoretic analog of the Prime Number Theorem for twin primes:
I_tp(x) ~ Li_2(x) where Li_2(x) is a modified logarithmic integral
- Prove error terms and study the fluctuations around this main term
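For 5(c), one standard candidate for the "modified logarithmic integral" is the Hardy-Littlewood main term 2·C₂·∫₂ˣ dt/(ln t)², with C₂ ≈ 0.6601618 the twin prime constant; whether this is the Li_2(x) intended above is an assumption. The sketch below compares it numerically with pi_2(x).

```python
# Sketch for 5(c): pi_2(x) versus the Hardy-Littlewood integral (trapezoid rule).
import math

C2 = 0.6601618158468696   # twin prime constant

def li2(x, steps=200_000):
    """Approximate 2*C2 * integral_2^x dt / (ln t)^2 with the trapezoid rule."""
    h = (x - 2) / steps
    total = 0.0
    for i in range(steps + 1):
        t = 2 + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w / (math.log(t) ** 2)
    return 2 * C2 * h * total

def pi2(x):
    is_prime = [True] * (x + 3)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(x ** 0.5) + 2):
        if is_prime[p]:
            for m in range(p * p, x + 3, p):
                is_prime[m] = False
    return sum(1 for p in range(2, x + 1) if is_prime[p] and is_prime[p + 2])

for x in (10**4, 10**5, 10**6):
    actual, predicted = pi2(x), li2(x)
    print(f"x={x:>8}  pi_2(x)={actual:>6}  HL integral={predicted:10.1f}  ratio={actual/predicted:.4f}")
```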
6. Interdisciplinary Connections:
a) Statistical Physics of Twin Primes:
- Model twin primes as a statistical mechanical system
- Investigate if there are phase transitions in the behavior of I_tp(x) or ρ_tp(x)
   - Apply techniques from random matrix theory to study correlations between twin primes (a first spacing diagnostic follows item 6 below)
b) Cryptographic Applications:
- Develop cryptographic protocols based on the computational difficulty of finding large twin primes
- Investigate if quantum algorithms for twin prime detection have implications for cryptography
- Explore the use of twin prime information measures in randomness extraction
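As a first diagnostic toward 6(a), one can compare the spacings between consecutive twin-prime starts below a cutoff with a memoryless (exponential) model. This is only a crude sanity check, not a random-matrix analysis; the cutoff is an arbitrary illustrative choice and sympy.primerange is used only for convenience.

```python
# Sketch for 6(a): spacing statistics of twin-prime starts versus an exponential model.
import math
from sympy import primerange

X = 10**6
primes = set(primerange(2, X + 3))
twins = sorted(p for p in primes if p <= X and p + 2 in primes)
gaps = [b - a for a, b in zip(twins, twins[1:])]
mean_gap = sum(gaps) / len(gaps)

# An exponential spacing law would put a fraction 1 - 1/e ~ 0.632 of gaps below the mean.
frac_below_mean = sum(1 for g in gaps if g < mean_gap) / len(gaps)
print(f"{len(twins)} twin pairs below {X}, mean spacing {mean_gap:.1f}")
print(f"fraction of spacings below the mean: {frac_below_mean:.3f} "
      f"(exponential model: {1 - math.exp(-1):.3f})")
```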
7. Long-term Research Program:
a) Unified Information Theory of Prime Patterns:
- Extend our approach to other prime patterns (e.g., prime triplets, prime quadruplets)
- Develop a general framework for understanding prime constellations in terms of information content
- Investigate if there's a fundamental "conservation of information" principle governing prime patterns
b) Cognitive Science of Mathematical Intuition:
- Study how the human brain processes information about prime patterns
- Use neuroimaging to investigate cognitive processes involved in recognizing twin primes
- Develop AI systems that can generate "intuitive" conjectures about prime distributions
This expanded plan provides a comprehensive roadmap for advancing our information-theoretic approach to the Twin Prime Conjecture. It combines rigorous mathematical development with speculative theoretical ideas and practical computational and experimental work.
By pursuing these diverse avenues simultaneously, we maximize our chances of gaining deep new insights into the distribution of twin primes and potentially making significant progress towards proving the Twin Prime Conjecture. Even if a full proof remains elusive, this approach promises to yield valuable new perspectives on the structure of the primes and the nature of prime patterns.
You have this notion that something is real only if it is 'possible' or 'physically achievable'. Please go read some philosophy - starting with Plato preferably, then the rationalists and the German idealists. Give us an update when you get to analysis 101 again! :)
5:44 mark I got it. Cool. Good one.
I think I understand the spirit of your assertion. cos(7) is assumed to exist, and we then calculate numbers that are not cos(7) (rational approximations) using a well defined series to as many terms as we like.
If we accept that cos(7) has been proved to exist because we can calculate a rational approximation to it, then you have proved that your twin prime number exists, since it too is now a well defined number.
But the problem is surely your truth condition of 'the greatest integer part of the number' that you have previously well defined. This is an adjunct to the above paragraph.
For cos(7), although the limit is never reached, the greatest integer part stops changing after just a few steps. Thus your truth value, being defined as 'the greatest integer part of your well defined object', can fix at zero or flip to 1; this is not true of cos(7). You have two numbers: the well defined object and the well defined truth value of that well defined object. But for cos(7) you have only considered it as one well defined number.
Your truth value algorithm must be fairly applied to cos(7) too. Thus the definition of the cos(7) truth value becomes 'greatest integer stable' in, say, base 10 after just a few steps... and can be proved to be stable thereafter. Therefore your 'greatest integer part' truth value could actually be used to validate cos(7) as 'true' by infinite series, because the greatest integer can be proved to be stable even when the series gets ever larger.
However, by comparison, your greatest integer truth value for the twin prime conjecture is not stable. It remains in flux .... unless you disable that flux by providing a twin prime pair calculation above which there are no more twin primes. Then the leading zero is stable.
cos(7) has a stable greatest integer after a few steps, but your truth value for the twin prime conjecture does not... unless you insist it is always zero by calculation... and then you have proved the twin prime conjecture to be false.
Did I miss that last bit? Have you proved the twin prime conjecture to be false? I don't think you are claiming that, since in your proof you said you accepted you had infinite powers, and therefore the greatest integer of your well defined number is not necessarily zero.
BUT I am not undermining your general case against transcendental numbers in pure mathematics... I just don't think this particular attempt works. One of the characteristics of AI is that it does not care whether something is real or fake idealistically. It just uses what works. AI uses pragmatism as truth... rather than idealism as truth. cos(7) is an idealised truth in pure mathematics as far as algebra is concerned. AI has no access to continuity, and therefore has limited access to idealised truth.
We on the other hand have access to both forms of truth... and that is a real problem! The machines are coming and they have little or no idealism in them whatsoever. That is why AI is so dangerous and we need to prepare ourselves for rampant pragmatism.
AI with regard to the twin prime conjecture would probably adopt the pragmatic attitude that there are always more twin primes available on the one hand, but they may be too time consuming to find on the other. It becomes a pragmatic choice rather than an idealised proof one way or the other. But that is not to say that AI could not prove that the square root of 2 is an irrational number.
So how come Euclid proved the infinity of prime numbers if no such concept of infinity exists?!
That’s not really what Euclid proved. He did not use the word infinity or infinite. Rather, given any collection of prime numbers, we can find another prime number distinct from those.
@@njwildberger In particular, Euclid showed that fact for any finite collection of primes, though I suppose by your representation of his argument your definition of collection implies finiteness. Clearly the primes are not finite, as, if they were numbered, there would be no maximum. So I wonder: what aggregate (I am trying to avoid the word "collective") noun would you assign to all the primes? Or would you not assign such a noun to them?
@@njwildberger That's to say, any collection of primes, is limitless.. Meaning it's not finite, therefore infinite
While I don’t always agree with your stance, Professor Wildberger, it is nonetheless a fantastic thing that you're doing.
Thanks!
The twin prime conjecture is that your number is equal to 1. You proved it is either 1 or 0. So you did not prove the conjecture. You proved that the conjecture is either true or false.
I didn’t claim to have proved the conjecture. Just to have solved it.
The busy beaver function allows you to determine the truth of any hypothesis that you can encode into a Turing machine. If the machine halts in the case of the twin prime conjecture, the number of twin primes is finite. If it continues past the relevant busy beaver bound on the number of steps, then you know that it will never halt, and in the case of the twin prime conjecture, not halting means they are infinite. This is theoretically possible but will never be practical. The calculation has a finite number of steps but requires more energy than the universe could realistically supply. But I’m sure you know this. I hope. 😅
Blimey, it seems like cracking the twin prime conjecture is just a warm-up for you before tackling the Riemann Hypothesis! Talk about burning your bridges; your mainstream maths colleagues will be calling for your sacrifice inside a wicker man no doubt :) It seems as though you are making good use of the mainstream maths idea that 'computable' can be used to mean that infinitely many things can be done.
My main reservation about terms like "computable number" and "computable function" is that they don't match the common usage of "computable", as such, they appear to have slippery definitions. The common understanding of "computable", as I see it, is "capable of being computed". This implies a computation process leading to a definite and precise answer. The mathematical definitions are crafted with the clear intention to categorise real numbers like √2 as being 'computable'. But to say that √2 is computable suggests that we can compute √2 to infinite precision, which is absurd. Based on the common understanding of "computable", it's evident that √2 isn't computable.
The ambiguity surrounding terminology in discussions about real numbers appears to be a consistent issue. Modern interpretations of real numbers as limits were not easily formulated. It took the collective effort of esteemed mathematicians over several decades to meticulously refine the terminology and symbolic representations involved. Bolzano attempted to define a limit in 1817, followed by Cauchy in 1821 and Weierstrass’ epsilon-delta definition in 1861. Multiple definitions emerged, such as real numbers being defined as a complete ordered field, or as Cauchy sequences of rational numbers, or possibly as Dedekind cuts (1872). While these definitions are widely accepted in academic circles today, they continue to face criticism from a minority of mathematicians who argue that they are flawed and that real numbers may not truly exist.
However, when mathematicians impart these concepts to students, they often convey an air of certainty, simplicity, and intuitive clarity. In reality, it took considerable time and effort for experts to conceal the underlying complexities (and flaws) beneath layers of formalism.
The point I wish to underscore here is my aversion to misleading terminology. Much of mathematics, particularly concerning real numbers, seems rife with misleading expressions. If we employ a term like "computable", it should not be defined in a manner that suggests the computation of infinite values is feasible. It is frustrating to witness mathematics perpetuate such practices, which only serve to reinforce the mistaken belief that infinite processes can be completed.
What does Turing have to say about √2?
The basic equivocation is between applied and pure math. They speak of "pi" but when pressed say, "You know, 3.14"
In computing science the analog of the claim that "real numbers form a field" is the hypothetical 'Zeno machine'.
Square roots are constructive and I consider constructive=computable in the general sense. Square roots have periodic continued fractions.
Continuous pure geometry doesn't require reductionism to tally marks. Actually, Zeno proved that continua are not reducible to the tally marks of neusis method of applied math.
@@williamschacht Turing doesn't really define the continuous tape of a TM in a constructive computational way, but postulates the tape as a "given". For a computational view of the tape we need the continuum as irreducible wholeness ("whitespace") and concatenation as the mediant of whitespace, so that we can form 'blank characters' that way, both L and R.
With concatenating mediants defined this way (cf. Dirac delta) continued fractions can be defined as mediant paths of Stern-Brocot type constructs.
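As an aside on the periodicity claim above: it is easy to check with exact integer arithmetic, and the reading of a continued fraction as a Stern-Brocot path can be made concrete. The sketch below is a generic Python illustration (the function name sqrt_cf and the run-length path convention are my own choices for this example, not the commenter's whitespace/mediant construction).

```python
# Continued fraction of sqrt(2) via the standard integer (m, d, a) recurrence,
# and the corresponding prefix of its Stern-Brocot path (runs of R and L whose
# lengths are the partial quotients).
from math import isqrt

def sqrt_cf(n, terms):
    """First `terms` partial quotients of sqrt(n), n not a perfect square."""
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    out = [a0]
    for _ in range(terms - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        out.append(a)
    return out

cf = sqrt_cf(2, 10)
print("sqrt(2) =", cf)          # [1, 2, 2, 2, ...]: periodic after the first term

path = ""
for i, q in enumerate(cf):
    path += ("R" if i % 2 == 0 else "L") * q
print("Stern-Brocot path prefix:", path)   # R LL RR LL ... toward sqrt(2)
```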
Okay, trying to think as clearly as I can. The continuous sum 1/2 + 1/4 + 1/8 + etc. is less than 1 if the program halts. If the program doesn't halt... hmm.
Can I speculate anything about an infinite loop that doesn't halt and thus doesn't give any value? I can't decide by myself such question about speculating about speculating, I'm undecided. Is speculating about speculating the general form of the Halting problem?
From Stern-Brocot type strictly top down perspective number theory doesn't yet know field arithmetics, by freshman addition 1/2 + 1/4 + 1/8 = 3/14. 3/14 < 1. The divine joke of pi digits also noted, proving that the God of Number Theory does have a sense of humor :).
PS: IMHO the most important conjecture is the abc-conjecture. AFAIK to prove it we need to prove an inequivalence relation, not necessary to decide an equivalence relation of an exact value of an arbitrarily large. If we can fork the abc-conjecture, they say that on the same plate there will be elementary proofs of many other conjectures.
PPS: A finite field analog of the twin prime conjecture was proven by Sawin and Shusterman in 2019. Finite field polynumbers seem worth a deeper look. ;)
Cos 7 has no 'last digit'. Neither does cos 53. But both have expressions, and using the 'cos of sum of angles' I can calculate, from cos 7 and cos 53, cos 60. It comes out exactly 1/2.
Perhaps, then, cos 7 _does_ have a meaningful value. 😅
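A quick numerical check of this claim (assuming the angles are in degrees, which the cos 60 = 1/2 conclusion suggests); the snippet only confirms the angle-addition identity to machine precision, it says nothing about exact values:

```python
# cos(60) = cos(7)cos(53) - sin(7)sin(53), angles in degrees.
import math

deg = math.pi / 180
lhs = math.cos(7 * deg) * math.cos(53 * deg) - math.sin(7 * deg) * math.sin(53 * deg)
print(lhs)                          # ~0.5 up to floating-point error
print(abs(lhs - 0.5) < 1e-15)       # True
```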
Doesn't this line of reasoning depend on what you accept as a 'proof'? If you insist on having a finite proof in the case of the twin prime conjecture, then I believe Wildberger has shown no such proof exists by means of calculation alone. But what if you allow for a proof with deductive reasoning or reasoning by contradiction or some other method? Is it fair to accept such a proof? And if so, what has this video clarified?
...reminds me about how Alexander the Great cut the annoying Gordian knot and conquered a chunk of finite land mass of our earth, which perhaps at his time was believed to be an infinite flat disk….
I have to tread very carefully here because I'm a layman on the subject and I don't want to sound like a total twat, but: wouldn't even Gödel's incompleteness theorem fall flat if we don't grant the possibility of infinities? That is, if there are only a finite number of terms without the possibility of infinite repetitions, wouldn't that mean a finite number of operations, which would possibly yield a "complete" theory of math? Don't kill me.
@adamtokay It is a reasonable question. But I should also state that in my opinion Gödel's theorem is not entirely a result of pure mathematics: it is more a result of philosophy, or perhaps computation / computer science.
Professor Wildberger's videos got me interested in the history and philosophy of infinity and intuitionism, and I thank him for that. However, I believe now he is just being a troll.
You might want to correct his name. Your truthful comment needs to be directed at the right person.
@@godfreypigott th
Corrected. Thanks.
Thank you. 10000........wonderful.
I don't see it like that. You don't need to assume that you can do an infinite number of things, but you can prove that if you could take an arbitrary amount of steps, you would get arbitrarily close to something particular. And it's that particular thing that you then say is "the limit".
“Something particular” ?? What does that mean?
@@njwildberger Let s_n be the sums from 1 to n of 1/2^k, take epsilon>0, no matter how small epsilon, IF you make n big enough (IF you could take an arbitrary amount of steps), THEN you could make this sum arbitrarily close to 1. Easy to prove as you know.
Here "something particular" clearly is 1.
@@njwildberger I wonder why you haven't replied back yet. Am I missing something, and if so, please try to inform me (I would gladly watch another of your videos if you address my argument there). I have the impression that you have a contention with epsilon delta arguments, but I'm not sure where I got that or where I might delve into your take on that.
Let P be the truth value of the twin prime conjecture, which is a statement. Since we’re working in classical logic, it is 0 or 1. I submit P as my proof of the conjecture.
This argument works regardless of whether or not you believe in infinity.
nice method based on critical thinking for more epistemological approaches, but all this stuff needs careful development over time
Twin Prime Conjecture Proof. (Lawrence Abas 2024 Aurora Ontario Canada)
This proof suggests that if the set of twin primes is finite, there would be a highest twin prime. However, they can only exist if there is a finite number of primes. Since this is not true, the number of twin primes is infinite.
Consider if there was a finite number of primes.
Let P represent the product of all those primes.
If there was a highest twin prime, the largest twin primes would be P-1 and P+1.
However, this creates a paradox since P-1 and P+1 are both greater than any element in the original set, and each is guaranteed to be prime or to have a prime factor that is not in the set.
Since there are an infinite number of primes, the largest twin primes are on the line n x P +/-1 where P is the product of all primes (infinity for sure) and n is a prime number from 2 to infinity.
In other words, simply modifying Euclid's original proof of infinite primes with a +1 or -1 to represent a twin prime proves that a finite set of twin primes is not possible, and therefore the number of twin primes is infinite.
Why do people always assume binary logic? Why do people always say some element is either IN or NOT IN a set. What about the case where it is both in the set and not in the set? #RM3 The Liar Paradox is solved in exactly the same way that the complex numbers arise from the reals! The 3rd truth value is a member of an extension field to Z2 (binary logic). Even better, this is all easily describable and constructive. Conjunction is left adjoint to implication.
"Why do people always say some element is either IN or NOT IN a set."
Because in this context we are talking about Pure Mathematics (see Wildberger's previous video in this series, where he sets the context), and in Pure Mathematics, such binary logical reasoning is the standard system. Other systems of logic *might* be possible, but if they were to try to build off of existing Pure Mathematics, they would have to be *at least* proven to be logically equivalent to (i.e. be equivalently expressible in) binary logic.
"What about the case where it is both in the set and not in the set?"
Again, this is just down to the context of discussing Pure Mathematics rather than, say, some general question in the philosophy of logical systems. In Pure Math, it's not helpful (as far as I'm aware) to have final 'answers' to questions where the final 'answer' is something 'indeterminate' like 'maybe true, maybe false, heck maybe both at the same time! Heck, maybe neither!'
It just doesn't allow for mathematicians to come to useful theoretical conclusions, and is especially troublesome when trying to communicate one mathematician's results to another mathematician; let alone build one's new results on top of prior, established results.
There *are* absolutely mathematical disciplines where indeterminacy and uncertainty is embraced and utilized. For example, in Probability Theory, you can actually quantify uncertainty. And in Computational Theory, you can have things like Non-deterministic Finite Automata, which are useful in certain circumstances.
But not so in Pure Mathematics. It's just the nature of the topic. You want to come to 'logical' (binary true/false) conclusions in Pure Math. That's what it's built on.
We don't always assume binary logic. Actually I agree with Voevodsky and others that First-Order Logic is inconsistent. Propositional logic is not bivalent, propositional "binaries" include both exclusive contradictions and codependent contrary opposites.
If set is a "collection" instead of "inclusion", as the axiomatic set theorists hint, then question "IN or NOT IN a set" is meaningless. ZFC can't do mereology.
Doesn’t this give rise to ‘The Twin-prime Truth-value Problem’: which is it - zero or one? 😁
[Updated with a smiley to give the intended flavour/nuance, i.e. just fkng around.]
Did you not watch the video? He gave the answer in the video. And yes, it's either zero or one.
Do you not know what the word which means?
He didn't prove the twin prime conjecture. It was just 'clickbait' for what he always tries. Namely, thinking that analytic mathematics is the only form of mathematics. That is, that all mathematics consists of constructions. He doesn't know that, in propositional calculus, you don't only have the implication if ... then ..., but also the 'not'.
@@konradswart4069 Great, then just as we have "numbers that are not finite," we can have "wives who are not married." In other words, we can be unclear.
Being unclear while pretending we're being clear is certainly a great innovation, but it is a social one and not a mathematical one. It is an excellent tool for upward mobility within academic culture.
That's why A.P. Math stands not for Advanced Placement, but Applied Mathematics.
We nutjob hoomans do well rounding off to the nearest infinity.
Applied Mathematics is "approximately" AP... It's AM.
"excluded middle holds wrt the truth value of the twin prime conjecture" is the only argument i'm gleaning.
else,
¿is the argument,
either:
a) there exists no set with [a priori] the cardinality of the set of integers (viz. the smallest infinity), "∞," such that we can iterate the algorithm ∑(1/(n^k)) [as described] ∞ times and get a value ≥1.
or:
b) ∄ […], such that we can iterate strictly n
This is just rewording the twin prime conjecture into another conjecture which is equivalent: is the sum less than or equal to 1?
Nothing is proved here about this sum. Any valid proof of the twin prime conjecture will answer this allegedly "absurd" question in a definite direction, however.
I will prove the square root of 2 is a rational number using your reasoning.
Let’s say…
x mod sqrt(2) = 0
x - sqrt(2)*floor(x/sqrt(2)) = 0
x = sqrt(2)*floor(x/sqrt(2))
x/floor(x/sqrt(2)) = sqrt(2) as x -> ∞
This is a mathematically true statement. However, by doing so I am exposing the flaw in your logic - although you can take the limit as x -> ∞, the number represented will be limited by the constraints of how far you go... You didn't actually solve anything, you just restated the problem. I recommend checking out Ergodic theory if you're interested in this type of logic because it deals with things as if everything that will ever happen already happened.
I also recommend checking out Seth Hardy's paper that came out in 2022 proving there are infinitely many consecutive primes no more than 270 apart. He uses the Bombieri-Vinogradov theorem.
Modern math is incoherent, so by the principle of explosion it can be used to prove everything (and its opposite). That's kind of the point.
@@ThePallidor The problem isn't with the math the problem is with the people..
2^x/floor(2^x/e) = e as x -> ∞
This is true as well.. and you can use it with a computer to find closer and closer estimates to e (or another number).
When you need to modularize all logic so a computer can digest it, the concept of limit -> inf has to be creatively implemented. This doesn't change math itself. The usefulness of math properties depends on the context.
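A small numerical illustration of the two limits quoted in this thread. Note that it also exhibits the circularity being pointed out: evaluating floor(2^n / e) already requires a sufficiently accurate value of e (taken here from math.e), so the "estimate" restates its input rather than producing new information.

```python
# x / floor(x / sqrt(2)) -> sqrt(2) and x / floor(x / e) -> e as x grows.
import math

for n in (10, 20, 30, 40):
    x = 2**n
    approx_sqrt2 = x / math.floor(x / math.sqrt(2))
    approx_e = x / math.floor(x / math.e)
    print(f"n={n}: x/floor(x/sqrt2) = {approx_sqrt2:.12f}   x/floor(x/e) = {approx_e:.12f}")
print("sqrt(2) =", math.sqrt(2), "  e =", math.e)
```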
I get the sense the math community may have lost touch with why proofs are so valued.
Proofs aren't valued because someone can say they're right. They're valued because they provide logical foundations for new areas of math. They strengthen the power of the questions we can possibly ask.
There's a fatal flaw in your argument. It only makes sense to compute an infinite sum if you know the value of each term, which we certainly don't in the case of the twin prime conjecture, so you haven't solved this problem, though you defined a currently incomputable quantity. Your argument begs the question of whether there are infinitely many pairs of twin primes, since the only way we could determine whether your sum is 0 or 1 is to first determine whether there are infinitely many pairs of twin primes!
Isn't this just saying the conjecture is either true or false, but no way to determine which? You could have just used the law of excluded middle and saved yourself some saliva.
Rephrasing a problem and solving a problem are two different things. This video merely rephrases the twin prime conjecture, and in a purely artificial way. As a bonus, there are many confused statements about what the author believes analysis is. Sad.
This is so lame. You just translated the twin prime conjecture into another conjecture... nothing is proven here.
His argument is that since we can perform an infinite number of arithmetic computations (and we are so confident about them that we freely say cos(7) has a certain exact value), then going along the entire real line finding and discarding odd numbers is just as valid a mathematical proof as saying we know the value of cos(7). I don't agree with him but I believe that's a fair representation of what he tried to convey.
@@jacoboribilik3253 I’m curious why you don’t agree with him. What is fundamentally different between defining cos(7) as an infinite set of computational instructions and defining the twin prime conjecture’s answer as an infinite set of computational instructions? You have never completed the computation for cos(7) but you accept it as a real number whose exact value can most precisely be expressed by simply calling it cos(7). So if completion of the computation is not required, and expressing the number’s exact value by simply stating the description of the number is an acceptable way to describe the exact value, then his definition of the number TPCNum should be held to the same rigorous standards. Namely, his description of the algorithm is well defined; it can be partially computed without bound, just as the terms of cos(7) can be partially computed without bound. The only bound is someone’s decision to stop trying. But we know the exact value of cos(7), it’s cos(7). Similarly, we know the exact value of TPCNum, it’s TPCNum.
This "solution" is as good a solution as the state of Schroedinger's cat (or should we say Wildberger's cat) before you take a look into the box. It's just not defined. Both possible outcomes, 0 and 1, false and true, are "known", in principle, but they are not really KNOWN for real, but in a superposition, as long as no one really delivers a proof for one outcome or the other, that means observes the cat being dead or alive.
But will the Clay Mathematics Institute award you the US$ 1 million prize? :)
Hang on a sec and let me compute the probability….
Euler's number e and π are true mathematical artifacts of the physical world.
Neither can be resolved by humans.
Perhaps God has a sense of humor.
What do you mean by “resolved”?
Basically he is skeptical about infinity ♾️!
Or better.. I am skeptical about "infinity".
@@njwildberger
Yes I know, I have followed you for some years. It isn't my favorite part of math either, but infinity seems to be there, like the natural numbers 😃 or an infinite thin line.
I am looking forward to the Riemann proof!