@@juanmargalef Agreed! Likewise D(-1)=0 follows by a similar argument. On closer inspection, your formula for D(N) can be rewritten as D(N) = N(k1/P1+•••+kn/Pn), where N=P1^k1•••Pn^kn. This automatically includes N=1, since k1=•••=kn=0 in this case.
@@tomkerruish2982 good point! Nice way to write a more general formula. And the negatives also follow from the definition: D(-N)=D(-1*N)=-1*D(N)+ND(-1) =-D(N) (0=D(1)=D((-1)(-1))=-2D(-1))
I guess some interesting maps to consider: D(1/x) = [D(1)x - D(x)]/x^2 = -D(x)/x^2, which looks like the power rule again but applies for all numbers, not just primes. In fact... D(x^k) = D(x^(k-1))x + x^(k-1)D(x); repeat this enough and you get D(x^k) = k x^(k-1) D(x) for all numbers, not just primes.
What's interesting to me is that the Leibniz rule implies the other rules on the board, except for evaluating D(p). If you set m = 0, you get D(0) = n*D(0). If you set m = n = 1, you get D(1) = 2*D(1). If you set m = n = -1, you get D(1) = -2*D(-1). Of course, this just means D(-1) = D(0) = D(1) = 0. With that, setting m = -1 gets you D(-n) = D(-1)*n - D(n) = -D(n). So, anything satisfying the Leibniz rule must also have these properties. Neat!
Excellent finding. I thought that assuming D(0) = 0 was necessary, with D(1) = 0 and D(-1) = 0 then following, but I did come up with the idea of plugging some edge cases into the Leibniz rule. So out of the 4 assumptions that Michael gave, only two are necessary: Leibniz and primes.
If you assume a function satisfies both the Leibniz rule and linearity [ D(x+y) = D(x)+D(y) ], is that enough rules to uniquely specify the derivative function (the normal/usual one)?
You can also derive the quotient rule from it If you set m = 1/n, you get D(1/n) = -D(n) / n^2 If you set m = a and n = 1/b, you get D(a/b) = (b * D(a) - a * D(b)) / b^2
Some properties we can deduce: * D(p^p) = p^p * For all m,n > 1, D(m*n) > 2. Corollary: there is no antiderivative for the number 2. * For all k > 1, D(4*k) > 4*k. Are there more "increasing" patterns?
@@angelmendez-rivera351 That's not maths, though, is it? Imagine if maths lectures went like that. "Today, students, we will be looking at the proof of the Fundamental Theorem of Arithmetic. First, suppose that the Fundamental Theorem of Arithmetic were false. If that were the case, then mathematicians would not be going around saying that it was true. So we have a contradiction. We must reject our supposition and conclude that the Fundamental Theorem of Arithmetic is true. This concludes our proof of the Fundamental Theorem of Arithmetic. Q.E.D."
@@angelmendez-rivera351 The definition of the function is not itself a theorem. However, before trying to investigate how a function behaves, you first need to check whether it is well defined. That is all that the OP is saying.
Mistake at 10:10. Should have been a D(n) instead of a D(m). But the end result is correct, because in the next line you write the correct thing despite the error.
Isn't stating the fourth property redundant? D(-n)=-D(n) follows easily from the product rule: if you set n and m equal to -1 you get D(-1*-1)=D(-1)*-1+D(-1)*-1, which simplifies to D(1)=-2D(-1), and as D(1)=0 and -2 has a multiplicative inverse, this means D(-1)=0. Then using the product rule with m set to -1 you get D(-n)=D(n)*(-1)+D(-1)*n=-D(n). Edit: explicitly stating the quotient rule is also redundant. D(mn)=D(m)n+D(n)m; set n=1/m and we get 0=D(m)/m+D(1/m)m, which implies -D(m)/m^2=D(1/m). So D(n/m)=D(n)/m+D(1/m)n by the product rule, and using the previous observation on the value of D(1/m) we get D(n/m)=D(n)/m-nD(m)/m^2 = ( D(n)m-nD(m) )/m^2
1:45, from the definition alone it isn't immediately clear that D is well-defined. One could decompose 12=3*4 and calculate D(3*4); will Leibniz always yield the same result irrespective of factorisation of the number? Would be a nice exercise to prove it (haha; homework!).
This function is basically doing the derivative stuff on the prime factorization of a number as if the primes were the variables. And the quotient rule is simply the product rule applied to the prime factorization of a rational number, which is the same as the prime factorization of an integer but allowing negative exponents. And since every rational number has exactly one prime factorization (because common factors in the numerator and denominator always cancel out), it is not surprising it works. But it is very interesting. I wonder what the applications might be. Maybe the Riemann hypothesis? And what can we say about the antiderivative (other than that there is more than one)? :D

And btw the straightforward formula for the arithmetic derivative of q with prime factorization q = s * PRODUCT[over n]OF(p_n ^ a_n), where s is one of {-1,0,1}, i.e. a "sign-or-nullity", p_n is the nth prime and a_n is an integer, is D(q) = q * SUM[over n]OF(a_n/p_n), since using rule 4 (for negative q, and rule 1 for q=0), then rule 3 (with the generalized power derivative rule), then simplifying, and then factoring out abs(q) and returning its sign, we get D(q) = D(s*PRODUCT(p_n^a_n)) = s * D(PRODUCT(p_n^a_n)) = s * SUM(a_n * abs(q)/(p_n^a_n) * p_n^(a_n-1)) = s * SUM(a_n * abs(q) / p_n) = q * SUM(a_n/p_n).

This suggests the sum of prime reciprocals D(q)/q (which we could call the derivative quotient or something) is the interesting part, since (1) it somehow turns multiplication into summation, so it is somewhat like an arithmetic logarithm, (2) it is a rational number that preserves the integer-ness of integer q, and (3) it allows a new factorization of every rational number into a sequence of derivative quotients, and it seems to have other interesting properties. And I also believe the function would be well defined for all numbers that can be written as prime factorizations with rational exponents. What are they called, algebraic numbers?
How do we know that D(n) is well defined? I don't think it is obvious that every way of breaking off factors of 100! and plugging them into the D function leads to the same final result after all that algebra.
This can be proved in a manner similar to the rational case shown in the video, using two arbitrary products that form a number, then showing that both of them are equal at the end. Let ab = xy, all natural numbers, a and b not zero. Show that D(ab) = D(xy). D(ab) = aD(b) + bD(a) Note that: a = xy/b b = xy/a = a D(xy/a) + b D(xy/b) = a [ aD(xy) - xyD(a) ] / a^2 + b[ bD(xy) - xyD(b) ] / b^2 = D(xy) - xy/a D(a) + D(xy) - xy/b D(b) = 2 D(xy) - (xy/a D(a) + xy/b D(b)) = 2 D(xy) - (b D(a) + a D(b)) = 2 D(xy) - D(ab) So D(ab) = 2 D(xy) - D(ab), hence 2 D(ab) = 2 D(xy) and D(ab) = D(xy). Therefore, the system is well defined, QED. Note: It is fine to use the "quotient rule" in this proof as it is derivable from the Leibniz product rule, which I leave as an exercise to the reader. :P
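A quick numeric cross-check of the same point is easy to run. This is a sketch, not a proof; it assumes sympy is available, and the helper D below is just the closed-form formula D(n) = sum of a_i * n / p_i over the prime factorization, which then acts as a witness that every factor splitting agrees:

    # Sanity check (not a proof): the closed-form D satisfies the Leibniz rule
    # for every way of splitting n into two factors m * k.
    from sympy import factorint  # assumes sympy is installed

    def D(n):
        # D(n) = sum over prime powers p^a dividing n of a * n / p
        return sum(a * n // p for p, a in factorint(n).items()) if n > 1 else 0

    for n in range(2, 1001):
        for m in range(2, n):
            if n % m == 0:
                k = n // m
                assert D(n) == D(m) * k + m * D(k), (n, m, k)
    print("every factor splitting up to 1000 gives the same value")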
I've seen a few comments claiming that certain parts of the definition are "redundant" because if you assume that this arithmetic derivative is defined in a certain place, you can derive what the definition must be there. The thing is, that's not how definitions work. According to that logic, the definition of a negative exponent is also redundant, and so is the definition of zero factorial. However, that's not to say that the definition given in the video doesn't have redundancies. Here's how I'd define it to get rid of the redundancies: D(0) = D(-1) = 0 D(p) = 1 when p is a prime D(1/n) = -D(n) / n^2 D(mn) = D(m)n + mD(n)
He doesn't give the extended definition of the function with domain and codomain of rationals until the end. So replacing D(-n)=-D(n) with what you have is more concise but only makes sense if introducing the idea starting with the extended version, which he doesn't do. So I think any redundancies make sense as part of the video. Edit: I was being a little dumb and now see more clearly the changes you made. It's pretty clever. You can get D(1) through D(-1*-1) and the product rule. So you only need the D(1/n) rule when extending to the rationals. So even for the integer-defined function you managed to shave off a rule. I agree with you also that definitions can (and should, if it helps intuition and readability) have redundancies, but it's a nice exercise to spot them and remove them.
@@angelmendez-rivera351 you CAN define the factorial in a way that 0! is obvious, but it's not always defined in that way. My first introduction, and I believe many people's first introduction, was just by the relation n! = n(n-1)! over natural numbers with the special case 1! = 1, where you stop recursing. It's also usually brought up in the context of calculating permutations where each n represents objects. There are other ways to define the same behavior in a more general domain, like through the gamma function which applies to all real numbers, but that is a far less common introduction. So there is no one canonical definition, it depends on context and necessity, and because of that most people get introduced to the factorial function in its simplest form, which does not define 0! except as a special case.
@@angelmendez-rivera351 When the factorial function is first taught, isn't n! generally defined to be the product of all natural numbers between 1 and n, inclusive? That *does* need a special case for when n is 0. You could also define n! for integers n>=0 to be equal to Γ(n+1), but that essentially just arbitrarily restricts the domain of the gamma function, in addition to potentially being a circular definition, depending on how you define e. Is there some other non-piecewise definition of n! I'm missing?
@@angelmendez-rivera351 "The" definition of n! is not well-defined per se. Often we see n! = n(n-1)(n-2) ... (3)(2)(1) as our first definition, which can also be said verbally "product of the first n positive numbers" or like someone below "product of all positive numbers less than or equal to n". With that (first) definition, it is not clear what 0! is, since 0 is not a positive number. You could say it a bit differently like someone said below, or try to say something like "it's an empty product" which is "vacuously equal to 1". Though many people would not find that intuitive or satisfying IMO. If you use a recursive definition then yes, it's more clear how to derive that algebraically. If you use the gamma function to interpret the factorial, then it's because the integral from 0 to infinity of e^(-x) dx is equal to 1. Lastly, and in my opinion the most pedagogical choice for the random human being: if somebody asked me on the street why 0! is equal to 1, then I would tell them for an integer n greater than or equal to 0, that n! is the number of ways to sort n people into an ordered line. In that view, 0! = 1 is because there is exactly one way to sort 0 people into an ordered line, that is to do nothing and sort nobody, b/c you don't have anyone to sort :)
@@angelmendez-rivera351 _”The product of all the positive integers less than or equal to 0 is equal to 1. Therefore, 0! = 1” Great! So (-1)! = 1 too, and (-2)! = 1 and... Oh wait, you said it only works for non-negative integers. Why? If it's so obvious that 0! = 1 follows from the definition, why stop there? The problem is that you are reasoning backwards. Your definition makes sense for non-negative integers because we have the convention 0! = 1. Your definition must be artificially restricted to only the non-negative integers, otherwise we have x! = 1 (for x≤1) and x! = floor(x)! (for x>1). The definition of the factorial given by other commenters above doesn't need to be artificially restricted. We just define it sequentially: 1 = 1!, 1*2 = 2!; 1*2*3 = 3!, etc. This definition automatically yields factorials for positive integers only, without any messiness. I can't find a single mathematics resource online which does not define the factorial for positive n first, then list 0! as a special case which is defined *separately* as being equal to 1. Why are these resources bothering to do that, if your definition is so watertight as not to need this? Can you direct me to a source which defines the factorial as you described? This is not a rhetorical question by the way. If you got your definition from somewhere else, I would really like to see it.
If a derivative of a continuous function expresses its slope at each point, what does a derivative of an integer (or rational) number express? Is there a geometric application for this derivative?
It is known that the arithmetic derivative on rationals is not continuous, see "Arithmetic Subderivatives: Discontinuity and Continuity", Journal of Integer Sequences, Vol. 22 (2019), Article 19.7.4
Arithmetic derivatives are quite mysterious. All I know is that Buium uses the Fermat arithmetic derivative, i.e. D(n)=(n^p-n)/p for a prime p, in his study of arithmetic differential equations. Pretty interesting stuff, but quite difficult to understand.
I know what the meaning of the derivative of a number is. Regard the number as an N-dimensional spatial object. The derivative of it is an N-1 dimensional object, calculated by adding up the sizes of its faces one dimension lower. By default: D(p) = 1 and D(0) = D(1) = 0. Now suppose D(6) = D(3*2) = 5. Then 6 is the area of the rectangle (2,3) and 5 is the sum of its two distinct side lengths. Now suppose D(30) = D(2*3*5) = 31. Then 30 is the volume of the 'cube' (2,3,5) and 31 is the total area of its distinct sides (2,3), (2,5) and (3,5). Lastly, suppose D(210) = D(2*3*5*7) = 247. In this last example 210 is the 4D-volume of (2,3,5,7) and 247 is the total volume of the 'cubes' (2,3,5), (2,3,7), (2,5,7) and (3,5,7). I think you can extend this idea to all other examples.
4^4=2^8, D(4^4)=8*2^7=2^{10} =/= 4^4 n^n will be a fixed point when n is prime, not when n is another prime power, and I haven't thought about when n is more complicated.
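For anyone who wants to poke at this numerically, here is a small brute-force sketch (the D helper below is my own closed-form evaluation, not anything from the video); in this range the only positive fixed points that show up are 4, 27 and 3125, i.e. p^p:

    # Brute-force search for integer fixed points D(n) = n below a bound.
    def D(n):
        total, m, p = 0, n, 2
        while p * p <= m:
            while m % p == 0:      # each occurrence of p contributes n/p
                total += n // p
                m //= p
            p += 1
        if m > 1:                  # leftover prime factor
            total += n // m
        return total

    print([n for n in range(2, 100000) if D(n) == n])   # [4, 27, 3125]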
Hang on .. (06:00) .. before we *extend* this alleged function D : Z --> Z to anything, we should check that it is well-defined. There are many ways to factor a number as m * n. It's not immediately obvious that you get the same result no matter how you calculate, say, D(60). In fact it's not even clear from the original bullet points what "step 1" is supposed to be.
This kind of reminds me of prime factorization. Crypto leans heavily on really big prime numbers for public/private key exchange, or what is called asymmetric cryptography, where the security rests on how hard it is to factor their product.
So then the reverse operation would be an arithmetic integral. Is there a way to calculate this integral, and what would its properties be? Like a one-to-one correspondence between the number and its integral? Also wondering if one can discover some intuition for these concepts... Like the derivative is the gradient of the primes used in the number and the integral is some sort of area under those primes...
Since the derivative is defined for a dense field (the rationals), it's an interesting question whether or not it's continuous. Since it is not linear under addition I would guess not
@@tcoren1 some (most?) converging sequences of rational numbers converge to an irrational number, but that's fine. We have two questions to ask, basically. Given a sequence of rational numbers an, does the sequence converging (being a Cauchy sequence) imply that the sequence D(an) also converges? If the answer to the first question is yes, we can ask a second one: if the sequence converges to a rational number a, does that imply that the sequence D(an) converges to D(a)?
@@tcoren1 very interesting 🤔 I also wonder one other thing about this function. D(n) = sum(n/p_i) = n*sum(1/p_i) for any natural number n where n = product(p_i) and all p_i are primes. I wonder if a similar meaning can be assigned to D(m/n) for m/n being any rational number. As I write this I notice that D(n) is a little similar to the Euler totient function. Maybe there's something there too
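If anyone wants to eyeball that loose similarity, here's a tiny exploratory sketch (assuming sympy is installed; it just prints D(n), the prime-reciprocal sum, and Euler's totient side by side, and claims nothing deeper):

    # Print D(n), the prime-reciprocal sum, and phi(n) side by side.
    from fractions import Fraction
    from sympy import factorint, totient

    def D(n):
        return sum(a * n // p for p, a in factorint(n).items()) if n > 1 else 0

    for n in [12, 30, 60, 210]:
        recip = sum(Fraction(a, p) for p, a in factorint(n).items())
        print(n, D(n), recip, totient(n))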
The extension to the set of rationals is good, because it then lends itself to the idea of an "arithmetic antiderivative", such that if we have a number m^n, the arithmetic antiderivative D-1(m^n) = (m^(n+1))/(n+1). Note that before, for the map of integers to integers, this inverse function would not work, as n+1 might not be a divisor of m^(n+1), and thus would lead to a rational result rather than an integer; but with the extension to the rationals proven with the quotient rule being well-defined, we can just set the domain and range of the antiderivative as going to and from the rationals.
Iirc I read that -x ln(x) also satisfies something like the Leibniz rule, and that this can be related to information-theoretic entropy, and maybe there could be a connection here? Also, instead of defining each D(p) to be 1, it seems like the definitions all still work if you leave D(p) as unspecified constants c_p.

Another idea: if you take a non-standard model of the integers, let p be some particular non-standard prime, and suppose that you have D send all other primes to 0 but have D(p)=1, then you could like, uh... well, it would really look like differentiation then, and also, I think it might(?) be additive (when only adding a standard number of numbers together). Because then for any standard n, D(n) would be 0. Hm... actually, no, I guess it probably wouldn't be additive? Because p+1 wouldn't be divisible by p, so if all other primes are sent to 0, then D(p+1) would be 0, rather than D(p) + D(1) = 1 + 0.

But what if we only say that D sends standard primes to 0? Then some of the prime factors of p+1 may be non-standard primes, and perhaps this would be enough to make D(p+1)=1? Depending on what D sends these non-standard primes to. Well, D(p+1) = 2*D((p+1)/2) + D(2)*((p+1)/2)... that doesn't help. Ok, but say we just say that D(2)=D(3)=D(5)=0, and then we look for some standard prime p greater than these such that p+1 has factors other than 2, 3, and 5, and then see what D has to send other things to in order to make D(p+1)=1.

Ok, take p=41. Then p+1=42=2*3*7. So D(42) = 2 D(21) + 21 D(2) = 2 (3 D(7) + 7 D(3)) + 21 D(2) = 21 D(2) + 14 D(3) + 6 D(7) = 0 + 6 D(7), so if D(7) is chosen to be 1/6, then that works... What about D(p+2) though? Well, in this case 43 is also prime, so D(43)=1 would be the result. And D(p+3) = D(44) = D(4*11) = 4 D(11) + 11 D(4), and so D(11)=1/4.

I suspect that soon problems would occur, but for at least a few steps it can work out, and I would expect that if you pick a big enough prime, there is no fixed number limit (not depending on the size of the prime) where it will always fail within that many steps. So maybe there *are* functions like this where, when taking in a nonstandard natural number, it acts additively when only adding a standard number of terms together? ... but maybe it couldn't be defined internally, because otherwise one could use induction to show that 1+1+...+1 (with p copies of 1) would go to zero? Or at least, one couldn't show internally that it is additive for pairs... Ok, well, what about a weaker result of "D(p + n)=1 whenever n is standard"? (Which can't even be expressed internally, as we can't speak of which numbers are standard within the language of the nonstandard model.)
Can this be extended to the algebraic numbers? Square roots should be fairly straightforward, but what about other numbers. What is the arithmetic derivative of i?
It might be interesting to connect the existence of a full derivative (with Leibniz *and* the additive property) to transcendental elements in the ring/field.
Here's my best attempt at √2: 2 = √(2)•√(2) D(2) = D(√2)√2 + D(√2)√2 1 = D(√2) • (√2+√2) D(√2) = 1/(2√2) I believe the same argument can be made for most irrational numbers, and for any prime, D(√(prime)) = 1/(2√(prime))
@@anshumanagrawal346 You were nonspecific. It can be extended (uniquely) to some irrationals. Possibly one could use the axiom of choice to extend it to all of them, but I know of no proof either way.
I was checking here and you can derive that d(1) = d(0) = 0 from the product rule as a consistency requirement by checking d(0 × a) and d(1 × a). You can also derive the quotient rule for rationals by examining d(a × 1/a) and requiring that it is equal to d(1). This will give you d(1/k) = -1/k² d(k). From there you can easily derive the expression for d(a/b). You can also find that d(-n) must be equal to -d(n) by considering d(-1 × -1) = d(1) and finding that d(-1) must be zero. Then you can just use d(-n) = d(-1 × n). Also, it seems like you can define an infinite family of those functions by requiring the product rule and defining what value the function will have for each prime number. All the other rules follow from that.
Some fixed points: D(p^n)=p^n -> np^(n-1)=p^n, which means p^p is a fixed point for all primes p. D((pq)^n) = p^nD(q^n)+q^nD(p^n) = n(p^nq^(n-1)+q^np^(n-1)) = (p+q)n(pq)^(n-1); if we set this to be equal to (pq)^n we get n(p+q)=pq, i.e. n=pq/(p+q). In a similar way we could get arbitrary fixed points from any number raised to a power, though generalizing that much becomes difficult to do from the phone ;p
Makes me wonder if this is connected to anything or if it just works. If so it wouldn't hurt to know a bit more. (It might hurt depending on how much of a rabbit hole it is I guess...)
Some input on the video: the domain and range were defined to be the integers, but a rational number with nonzero fractional part doesn't fall in the integers, so it would be incorrect to have the value of D defined there. You need to change the domain and range of the function to all rational numbers rather than the integers.
So rules 1-4: D(0) and D(1) are both 0; D(p)=1 for primes p; D(mn) = mD(n)+nD(m); and, with respect to n, D(n) is an odd function. But where does rule 4 (the "odd function" rule) come into play?
To derive the quotient rule, once you have the power rule, write a/b as a*(b^-1) and use the product rule and power rule to simplify it out. Once you end up with (b*d(a)-a*d(b))/b^2 then do two things. Plug in c/1 and be sure you get d(c) so it's consistent with the integer version (it will be). Then work out d(ca/cb) to be sure it simplifies to d(a/b).
You cannot strictly do this until you have done some work to show that negative powers are well behaved under the power rule. If I am not mistaken we only proved that for positive powers
@@trueriver1950 Yeah, it is a point you have to take a "leap of faith" and just try it. When you derive the formula and be sure d(c/1) = d(c) and d(ca/cb) = d(a/b) it at least gives you a warm feeling that it can be worked with.
This has got to be one of the best videos in a while! I absolutely loved it! Can't wait to see more bizarre generalizations and their extensions/applications!!
Is it possible to extend to the real numbers like that? Suppose that a sequence of rational numbers an converges to a real number r; then D(r) = lim(D(an)) exists. Claim 1: D(e) >
What I think would be worth investigating is how this behaves in neighborhoods around integer values. For example take D(2)=1. If we investigate D(2+1/p) for primes p we get D(2+1/p) = D((2p+1)/p) = (D(2p+1)p-2p-1)/p^2 = D(2p+1)/p - 2/p - 1/p^2. So looking at the limit as p goes to infinity, the 2/p and 1/p^2 terms vanish, leaving D(2p+1)/p. Based on my calculations this seems to approach 0 from below, so it would seem that there is some really interesting behavior happening around the integers. Would be curious if there are any values near 2 which give a derivative near 1.
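Here is a small exploratory sketch of that limit (the helper functions are my own, the rational case is handled with the quotient rule from the video, and sympy is assumed for factoring and for picking primes):

    # Evaluate D(2 + 1/p) = D((2p+1)/p) for growing primes p.
    from fractions import Fraction
    from sympy import factorint, prime

    def Dint(n):
        return sum(a * n // p for p, a in factorint(n).items()) if n > 1 else 0

    def D(q):
        a, b = q.numerator, q.denominator
        return Fraction(Dint(a) * b - a * Dint(b), b * b)   # quotient rule

    for k in [5, 25, 100, 500]:
        p = prime(k)                       # the k-th prime
        print(p, float(D(Fraction(2 * p + 1, p))))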
D(1) = 0 is actually a consequence of the Leibniz rule, it shouldn't be in the definition. In fact, you don't even need D(0) = 0, because it also comes from Leibniz and the fact that D(0) is finite, since it's an integer by definition.
You present 4 rules of the arithmetic derivative, but actually, 1 and 4 follow from 2 and 3. The first one is easy: D(1·1) = D(1) = 2·D(1), so D(1) is equal to zero. Plug in 0 instead of 1 and you'll get the same for D(0). For the fourth rule, we have that D(-1·(-1)) = -2D(-1) = D(1) = 0, so D(-1) = 0. With this, we can prove that D(-n) = D(-1·n) = D(-1)·n + D(n)·(-1) = -D(n) The rational expansion also follows from 2 and 3: D(1) = D((1/n)·n) = D(1/n)·n + D(n)·(1/n) = 0 ----> D(1/n) = -D(n)/n^2 D(m/n) = D(m·(1/n)) = D(m)·(1/n) + D(1/n)·m = (D(m)n - D(n)m)/n^2 I'll add without proof that the power rule can be extended to rational exponents, with similar proofs to the ones above; it can also be expanded to the complex world (not all of it), and it's possible to show that this function is not continuous.
Amusingly, I'm going through Apostol's "Introduction to Analytic Number Theory", and there is another derivative given there. (It isn't called an arithmetic one, and it's for functions not numbers, but it seems a natural name, since it's defined for "arithmetic functions:" functions f : N -> C (or a subset thereof).) In this case the derivative defined is much simpler: f'(n) = f(n) log(n).
Small mistake in your final proof. Second line, second term in the numerator should be -m(D(a)n+aD(n)) not -m(D(a)n+aD( M )). You fixed it in the following line. I just wanted to point it out for those who may get confused at that line.
D(n^(1/k)) is probably well-defined for nonnegative integer n and positive integer k. Consider a unique factorization of n = prod p_i^a_i; we have n * sum(a_i/p_i) = D(n) = D(n^(1/k)*n^(1/k)*...) [k times] = k*n^(1-1/k)*D(n^(1/k)), and so D(n^(1/k)) = n^(1/k)/k * sum(a_i/p_i) Special cases below: Consider 1 = D(p) = D(sqrt(p)*sqrt(p)) = 2*sqrt(p)*D(sqrt(p)) -> D(sqrt(p)) = 1/(2*sqrt(p)) Consider a composite number p*q where p and q are prime: p + q = p*D(q) + q*D(p) = D(pq) = D(sqrt(pq)*sqrt(pq)) = 2*sqrt(pq)*D(sqrt(pq)) From this we have D(sqrt(pq)) = (sqrt(p/q) + sqrt(q/p))/2 Consider a composite number p*q*r where p, q, r are prime D(sqrt(pqr)) = D(sqrt(pq)*sqrt(r)) = sqrt(pq)*D(sqrt(r))+sqrt(r)*D(sqrt(pq)) = sqrt(pq)/(2*sqrt(r)) + sqrt(r)*(sqrt(p/q) + sqrt(q/p))/2 = (sqrt(pq/r)+sqrt(qr/p)+sqrt(rp/q))/2 pq+qr+rp = D(pqr) = 2sqrt(pqr)D(sqrt(pqr))
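A quick float sanity check of the square-root cases above (a sketch only; it just confirms the displayed values are consistent with D(p) = 1 and D(pq) = p + q, using primes I picked myself):

    # Check the derived values D(sqrt(p)) and D(sqrt(pq)) numerically.
    from math import sqrt, isclose

    p, q = 3, 7
    d_sqrt_p = 1 / (2 * sqrt(p))
    assert isclose(2 * sqrt(p) * d_sqrt_p, 1)               # D(p) = 1
    d_sqrt_pq = (sqrt(p / q) + sqrt(q / p)) / 2
    assert isclose(2 * sqrt(p * q) * d_sqrt_pq, p + q)      # D(pq) = p + q
    print("square-root formulas are consistent")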
Last slide, line before last, there's a one-line typo where you wrote D(m) but clearly meant to write D(n). Just in case anyone else sees it and was wondering about it.
It would have been good to show that the original definition for integers is well defined, i.e. splitting 12 into 2x6 and 3x4 gives the same answer.
I think it comes from the fundamental theorem of arithmetic,and the fact that D(p)=1 for all primes p.
@@yoav613 Yes it can be proved. I just think that not even mentioning the issue is wrong.
@@normanstevens4924 i agree
good catch.
I picked it up after he did it on the rationals but it would have been nice.
This is very cool! I just wonder where this can be used. And how can this derivative of a number be interpreted?
me too. i'm intrigued!
To add to my comment, this is the first time when I disagreed with Michael when he said "... and that is a good place to stop". I rather thought it to be a good place to start!
Math things don't need to be applied to real life
I may be wrong, but I think generally it's the sum of all the combinations of its prime factors with length one less than its own prime factor decomposition. E.g.: D(abcde) = abde+acde+adbc+aebc+bcde, with a,b,c,d,e all primes
@@The_Aleph_Null I don't think Surfin is looking for _practical real-world_ applications for this idea, but rather how this idea can be related to other bits of math. Can we extend this definition any further? Can we rigorously define an antiderivative definition? Are there hidden links between this idea and other concepts? These are all good questions that don't need considerations from the "real world" to be worth asking.
Any number of the form p^p, where p is prime, is fixed for D, because D(p^p) = p*p^(p-1) = p^p (in particular, 4 = 2^2 is fixed)
Wonder if there are more fixed points tho...
-p^p and 0. But those are all.
So if a positive integer n is divisible by p^p, then its arithmetic derivative is always greater than or equal to n, correct?
And also any even number n=2k must have arithmetic derivative greater than or equal to k+2 ?
@@knivesoutcatchdamouse2137 Yes, both correct.
p^p is the only case. We can do a proof by contradiction.
Assume there is a stationary point x with a prime factorisation of p1^n1 * p2^n2 *... * pm^nm.
Then x = D(x) = [n1*p1^(n1-1) * p2^n2 *... * pm^nm] + [p1^n1 * n2*p2^(n2-1) *... * pm^nm] +... + [p1^n1 * p2^n2 *... * nm*pm^(nm-1)].
Taking the simple case when m=2 we get p1^n1 * p2^n2 = x = D(x) = n1*p1^(n1-1) * p2^n2 + p1^n1 * n2*p2^(n2-1). If we divide the far left and far right by p1^(n1-1)*p2^(n2-1),
we get p1*p2 = n1*p2 + n2*p1. Here, p1 and p2 are different primes and n1 and n2 are integers > 0.
If we solve for n1 we get n1 = p1/p2 * (p2-n2). Since p1 and p2 are different primes, they share no prime factors, and similarly (p2-n2) and p2 share no prime factors (the only way they could is if p2 divided n2, but then p2-n2 <= 0 and n1 <= 0, which is impossible since n1 > 0). So we cannot cancel out the p2 in the denominator. Hence n1 cannot be an integer. This violates our assumption that n1 is an integer for x to have a valid prime factorisation. Hence, there is no stationary point that can be represented as a product of powers of 2 different primes. A similar argument follows for n primes. Just solve for n1, factor out p1 in the numerator and you'll find that the denominator shares no factors with the numerator. The p1 won't share factors with the product in the denominator and the bracket will be less than the product in the denominator (the maths is just a mission to type out).
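A small brute-force sketch supporting the two-prime case (the search is finite and proves nothing on its own, but it agrees with the argument above; it assumes sympy for the prime list, and the bounds are my own choice):

    # No x = p^a * q^b with distinct primes p, q and a, b >= 1 has D(x) = x
    # in this (small) search range.
    from sympy import primerange

    primes = list(primerange(2, 50))
    hits = []
    for i, p in enumerate(primes):
        for q in primes[i + 1:]:
            for a in range(1, 10):
                for b in range(1, 10):
                    x = p**a * q**b
                    if a * x // p + b * x // q == x:   # D(x) for x = p^a * q^b
                        hits.append((p, a, q, b))
    print(hits)   # expect []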
@@chrisjduplessis 0 also because 0 isn't prime but D(0) =0 by definition
This is probably one of the most interesting things I've seen for calculus outside of a real analysis class. So wild for a derivative to be expressed in prime factorization over the integers; I've never seen this and I love it. Keep up the great content!
Yes, very interesting concept. I did not come across this when doing my math studies, not even during my time as a grad student.
It feels like it should have important consequences. Perhaps transforming number theory into something more calculus like?
@@TheIllerX I wish I saw something like this in my number theory class. I feel like learning calc III theory with this on the back-end would help a lot of students.
It is really cool! This sort of idea shows up in some pretty advanced mathematics. For example, in one of my topics courses in commutative algebra, we saw derivations, and this is a special kind of derivation, which are essentially bringing the concept of derivatives into algebraic settings. There is a Wikipedia article called "Derivation (differential algebra)" describing the main concept in algebra, though what is happening in this video isn't *exactly* the same thing as a derivation.
I do wonder, however, if this sort of "arithmetic derivative" can be used to show anything interesting? I will be teaching an elementary number theory class next Spring, and it would be neat to bring this up if anything comes out of it.
Assuming D(1) = 0 is unnecessary:
D(1) = D(1*1) = D(1)1 + 1D(1) = 2D(1)
therefore D(1) = 0.
as to why that must be the case
pick any n != 0, then D(n) = D(n*1) = D(n)1 + D(1)n
and we have nD(1) = 0
therefore D(1) must be 0.
When you break it all down, you can find a direct calculation for the arithmetic derivative of a number based on its prime factors:
Say we have a number and its prime factorisation:
n = p₁^a₁ * p₂^a₂ * ... * pₙ^aₙ
Then:
D(n) = a₁(n/p₁) + a₂(n/p₂) + ... + aₙ(n/pₙ)
I believe this also holds for the extension into the rational numbers, if you let the aᵢ's be negative.
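That closed form is also the easiest way to put the whole thing in code. A minimal sketch (assuming sympy for factoring; the negative-exponent convention for denominators is exactly the one described above):

    from fractions import Fraction
    from sympy import factorint

    def D(q):
        # arithmetic derivative of a rational via D(q) = q * sum(a_i / p_i)
        q = Fraction(q)
        if q == 0:
            return Fraction(0)
        exps = dict(factorint(abs(q.numerator)))
        for p, a in factorint(q.denominator).items():
            exps[p] = exps.get(p, 0) - a   # denominator primes get negative exponents
        return q * sum(Fraction(a, p) for p, a in exps.items())

    print(D(12), D(60), D(Fraction(1, 4)), D(Fraction(2, 15)))   # 16 92 -1/4 -1/225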
You are using the letter n both to name your number and to name how many prime factors it has.
@@jeanefpraxiadis1128 oh no my quickly rattled off youtube comment has some technical inaccuracies that didn't stop you from understanding whatever will i do
You can even extend it to rational powers of rational numbers. Also, you always need to specify 0'=0.
let's extend this to complex numbers!
Yes. Examples in my comment.
I've never heard of this derivative before. This is interesting. I should be able to make a recursive algorithm that calculates this in SageMath with Python.
Fixed points for integers are any prime to its own power. An interesting fixed point is the number x = 2 * 3 ^ (3/4) * 5 ^ (5/8) * 7 ^ (7/16) * 11 ^ (11/32) * 13 ^ (13/64) * ... which converges to x = 291.893574100944... and the arithmetic derivative of x is x. I use lower case d. Define ld(n) = d(n) / n. ld(x) = 1/2 + 1/4 + 1/8 + 1/16 + ... = 1, so d(x) = x * 1 or x.

ld(n) is a gem and a huge tool. If you have the prime factorization of a number n = p1^m1 * p2^m2 * p3^m3 (put as many primes and multiplicities in here as you wish), then d(n) = n * (m1/p1 + m2/p2 + m3/p3 and so on).

Alternate ways of defining d(n) are to define the prime case as a function of p, and a wide variety of functions work just fine. If you define d(p), p prime, as d(p) = p instead of d(p) = 1, then d(n) is just n times the number of prime factors it has counting multiplicities, and the power rule still holds. Try d(p) = 1/p, d(p) = 1/p^2, and d(p) = e^((i*2*pi)/p) as alternate definitions for fun.

Using ld(n) for the d(p) = 1 standard definition, you can deduce that the lower bound for d(n) for composite n is 2*sqrt(n) and the upper bound is ((1/2) * n * log2(n)). Having a power of 2 makes ld(n) as large as possible in relation to values of n "nearby", since the multiplicity is over 2, which makes ld(n) larger than when the multiplicities are over other, larger primes. Having the square of a prime puts the 2 as far "to the right" as possible, giving the lowest ld(n) for a composite value.

It's neat to see someone else playing with this. Note that ld(n) is the logarithm for this type of calculus: ld(ab) = ld(a) + ld(b) and ld(a^b) = b * ld(a).

Now have some fun: Consider I(n) (capital I) as the inverse relation for d(n) and E(n) as the inverse relation for ld(n). Some integers have no I(n) value and some have multiple ones. When you go to rationals, it gets interesting, with there being many values. Denominators must be of the form having all prime factors to the power of a multiplicity that is a multiple of that prime factor for d of a rational (in lowest terms) to be an integer.

Oh, and write your rules like d(n^3) = 3n^2 d(n) and it jibes with regular calculus. And let y be a function and consider d(y)/d(x); d(y)/d(x) will work, prime or not.
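The claimed bounds are easy to spot-check numerically. A sketch (the d helper below is just my own closed-form evaluation; the lower bound is tested in exact integer arithmetic as 4n <= d(n)^2):

    from math import log2

    def d(n):
        out, m, p = 0, n, 2
        while p * p <= m:
            while m % p == 0:
                out += n // p
                m //= p
            p += 1
        if m > 1:
            out += n // m
        return out

    for n in range(4, 20001):
        dn = d(n)
        if dn != 1:                              # d(n) == 1 exactly when n is prime
            assert 4 * n <= dn * dn, n           # 2*sqrt(n) <= d(n) for composite n
            assert dn <= n * log2(n) / 2, n      # d(n) <= (1/2) n log2(n)
    print("both bounds hold for composite n up to 20000")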
wait but can a number like x even be a fixed point? A number like that isn't rational and to extend it to arbitrary reals you'd need it to be a continuous function or something. As far as I'm aware it isn't continuous (but I could be wrong about that). If it is continuous I kind of want to see a plot.
This was very fun. The next step is to look for an interpretation and then a use.
Thank you, professor.
I agree. Interesting, but I'm missing the same.
It would be interesting to understand what motivated the definition of the arithmetic derivative.
I was wondering if you could extend this idea to the complex plane (a + bi) with a and b whole numbers and using Gaussian primes. No surprise: you can. Interesting stuff...
I just love learning this kind of stuff. I learn more here than I did 50 yrs ago in high school. I had the worst teachers. I have been learning math for 20 years from YouTube and books, and now you. A blackboard is always the best. Have a good day.
Just from the definition of a derivation, the power rule as you've written can be generalized to d(a^n) = n a^{n - 1} d(a). These sorts of derivations are usually dependent on the values of their bases, so taking choices other than d(p) = 1 seems interesting.
Moreover, I think you could allow this value to vary over the prime. i.e. d(2), d(3), d(5),... are allowed to vary independently. I wonder if clever choices of d(p_j) leads to any neat number theory facts
D(a)mn, I never realized the arithmetic derivative could be extended to the rationals too! Very cool!
truly amazing pun. you earned my like
Underrated pun lol
Michael, I found this notion of an "arithmetic derivative" quite interesting as I seek uses for it. Perhaps it can be used in conjunction with prime theory and/or Paul Erdos' notion of "highly composite numbers". Below I summarize my findings and suggest one application.
Generalizing and Applying the derivatives of prime products:
Prod(n) = p1 p2 ... pn, where pj is a prime.
Let m' := D(m) be the first derivative of integer m.
Then:
Prod(1)' = 1
Prod(2)' = p1 + p2
Prod(3)' = p1p2 + p2p3 + p3p1
Prod(4)' = p1p2p3 + p1p2p4 + p1p3p4 + p2p3p4.
Note the second derivative of each Prod(n):
Prod(1)'' = 0
Prod(2)'' = 2
Prod(3)'' = 2(p1 + p2 + p3)
Prod(4)'' = 3! (p1p2 + p1p3 + p3p4 + p2p3 + p2p4 + p4p1).
And, the third derivative of Prod(4), denoted as Prod(4)^(3), is:
Prod(4)^(3) := Prod(4)''' = 3! (p1 + p2 + p3 + p4).
As an application, this would correspond to this fourth degree polynomial with p1, p2, p3, p4 as prime roots:
x^4 - (Prod(4)'''/3!) x^3 - (Prod(4)''/2!) x^2 + (Prod(4)'/1!) x^1 - Prod(4) /(0!) = 0.
The above generalizes to:
Prod(n) = p1 p2 ... pn,
Prod(n)^(1) = Sum of { all products of n-1 primes}
Prod(n)^(2) = Sum of { all products of n-2 primes }
: : :
Prod(n)^(k) = Sum of { all products of n-k primes }
: : :
Prod(n)^(n-2) = Sum of { all products of 2 primes }
Prod(n)^(n-1) = Sum of { all single primes }
Prod(n)^(n) = Sum of { n ones } = n
If you find these integer-valued derivatives interesting, you may want to check out p-derivations (the Wikipedia article is a fair place to start). For a fixed prime p, the unique p-derivation Z -> Z is the map delta(x)=(x-x^p)/p. This satisfies non-linear analogues of the addition and product rules of the usual derivative.
Super, ultra, mega interesting! Thanks for this wonderful clip! 🙂
11:25 Yo what’s up?
There's a non-recursive formula too: if r = p₁^(k₁) · p₂^(k₂) · p₃^(k₃) ··· > 1 then D(r) = r · ∑(kᵢ/pᵢ). This even works for rational numbers (in which case some kᵢ are negative).
And this proves that the Arithmetic Derivative exists! (What we got in the video was not a definition but a set of desired properties).
Nice video! One of the coolest things in math is inventing a new thing and coming up with a bunch of rules with it 😍
For 2 primes p, q there's also D(pq) = p+q bc D(pq) = D(p)q+pD(q) =q + p
Another interesting way to approaching the quotient rule is to first start by further generalizing the power rule to non-primes. Suppose you have a constant k with prime factors a and b. Using the existing rules for products and powers of primes:
D(k^n) = D(a^n*b^n)
= a^n*D(b^n) + b^nD(a^n)
= n[(a^n)(b^(n-1)) + (b^n)(a^(n-1))]
= n[a(a^(n-1))(b^(n-1)) + b(a^(n-1))(b^(n-1))]
= n(a + b)(a^(n-1))(b^(n-1))
Since D(k) = a+b (recall a and b are prime, so D(a)=D(b)=1), we can say that D(k^n) = n(k^(n-1))D(k)
This works as a general form of the power rule because if you choose a prime number for k rather than a product of two primes like in my example, D(k) simply equals 1 and you're back to the p^n equation in the video. Now we can finally get back to the quotient rule. For the rational number (m/n):
D(m/n) = D(m*n^-1)
= (n^-1)D(m) + mD(n^-1) [Here's where that new formula comes in handy!]
= (n^-1)D(m) + m(-1)(n^-2)D(n)
= (n^-2)[nD(m) - mD(n)]
= [nD(m) - mD(n)]/n^2
It's a little more convoluted to get to the final form of the equation than what was shown in the video, but personally this was a LOT more satisfying to work through. Maybe I'm biased because I spent the past half hour or so working this out...
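For what it's worth, both results are easy to spot-check (a sketch, not a proof; the closed-form D below and the test values are my own, and sympy is assumed for factoring):

    from fractions import Fraction
    from sympy import factorint

    def D(q):
        q = Fraction(q)
        if q == 0:
            return Fraction(0)
        e = dict(factorint(abs(q.numerator)))
        for p, a in factorint(q.denominator).items():
            e[p] = e.get(p, 0) - a
        return q * sum(Fraction(a, p) for p, a in e.items())

    k, n = 6, 4                                   # k = 2*3, a product of two primes
    assert D(k**n) == n * k**(n - 1) * D(k)       # generalized power rule
    a, b = 10, 21
    assert D(Fraction(a, b)) == (b * D(a) - a * D(b)) / (b * b)   # quotient rule
    print("generalized power rule and quotient rule check out")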
Another great video by Dr.Penn.
I really like these peeks into other lesser known portions of math. But as a suggestion it would be nice to tie things up at the end with either places that these operations arise, such as fields of study, or areas of further research aside from Google for these results.
Both the first and last line of the initial description are redundant, just the two in between are sufficient.
Definitions of D(1) and D(0) are completely redundant given the product rule: letting m=n=0 or 1 forces D(mn)=0 in both cases.
Conversely, the definition of D(-n)=-D(n) is redundant if you define D(-1)=0. Applying the rule for D(mn) letting m=-1 shows these are equivalent. But defining D(-1) is itself also redundant: setting m=n=-1 and applying the product rule gives D(1)=-2D(-1) but since D(1)=0, D(-1) must also be 0.
The definition of the quotient rule is also redundant. If you presume D to exist for rational m/n, the product rule gives:
D(m)=D(m/n * n) = m/n*D(n) + D(m/n)*n. Solving this for D(m/n) gives the quotient rule.
I agree with most points, but if we calculate D(1×1) it becomes 1×D(1) + 1×D(1) in which case it is not known whether D(1) equals 0 or not, as 1 is neither a prime nor a composite number (It has its own category).
@@barbodnaderi6170 D(1)=D(1*1) = D(1)*1+ 1*D(1) = 2*D(1). The only solution for D(1)=2*D(1) is D(1)=0. I'm not sure what your point was about knowing whether 1 is prime or composite, but I see no way for the product rule to be satisfied by nonzero D(1). The product rule doesn't care whether m,n, or mn are composite.
I really like the arithmetic derivative. I learned about it a few months ago from Wikipedia, the great source of all useless information. I can't figure out why anyone would care about this function, but I care about it anyway.
I like that 0 and 1 are the only constants here. I don't know much about differential algebra, but I think N, Z, or Q equipped with this function as a derivation is a differential ring. And Q equipped with it is a differential field. So in that sense, every number has a primitive, which is just another number. Some numbers have unique primitives (e.g. 5 only has primitive 6), while others have several primitives (e.g. 32 has the primitives 16 and 28), and 1 has infinitely many primitives (every prime). It also has fixed points at p^p with p prime, like 4 = 2^2, 27 = 3^3, 3125 = 5^5, etc., which come straight from the power rule.
BTW, some people have probably already mentioned this, but the first, fourth, and fifth parts of the definition are unnecessary. All we need to uniquely define D over all rationals are the Leibniz rule and the fact D(p) = 1 for all prime p. The other three facts immediately follow from the Leibniz rule alone. We could create other well-defined differential fields in the same way by changing the rule D(p) = 1 to something else. For instance, if pₙ is the nth prime, then we could define D(pₙ₊₁) = pₙ for all n ≥ 1 and D(2) = 1. This also is well-defined. For instance, D(12) = 3 * D(4) + 4 * D(3) = 3 * [2 * D(2) + D(2) * 2] + 4 * 2 = 3 * (2 + 2) + 8 = 20, and also D(12) = D(6 * 2) = 6 * D(2) + 2 * D(6) = 6 + 2 * [3 * D(2) + 2 * D(3)] = 6 + 2 * (3 + 2 * 2) = 20. This just yields a different derivation on N, Z, or Q.
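Here's a rough Python sketch of that last point (sympy for the factoring and prevprime; the weight function w is my own name for the values assigned to the primes): any choice of values on the primes extends via the factorization formula, and the Leibniz rule still checks out.

# Sketch: a derivation determined by arbitrary values on the primes,
# here w(2) = 1 and w(p_{n+1}) = p_n as in the comment above.
from fractions import Fraction
from sympy import factorint, prevprime

def w(p):
    return 1 if p == 2 else int(prevprime(p))

def D(n):
    return n * sum(Fraction(k * w(p), p) for p, k in factorint(n).items())

print(D(12))   # 20, matching both computations above

# the Leibniz rule holds no matter how you split a number
for m in range(1, 60):
    for n in range(1, 60):
        assert D(m * n) == m * D(n) + n * D(m)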
Interesting, thanks Michael. Just two comments: the first and the last properties are consequences of the Leibniz rule. We can also define here an analogue of the "logarithmic derivative" from calculus: L(n) = D(n)/n, which satisfies L(mn) = L(m) + L(n) (a consequence of the Leibniz rule), to obtain an explicit formula for D(n) in terms of the values of D on the prime divisors of n (in particular this shows that D is well defined once you fix the values of D on the primes in an arbitrary way).
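That L is easy to watch in action; a tiny sketch (helper name mine, positive integers only):

# Sketch: L(n) = D(n)/n = sum of 1/p over the prime factors of n with multiplicity,
# so L turns products into sums: L(mn) = L(m) + L(n).
from fractions import Fraction

def L(n):
    total, d = Fraction(0), 2
    while d * d <= n:
        while n % d == 0:
            total += Fraction(1, d)
            n //= d
        d += 1
    if n > 1:
        total += Fraction(1, n)
    return total

for m in range(2, 50):
    for n in range(2, 50):
        assert L(m * n) == L(m) + L(n)

print(L(12), 12 * L(12))   # 4/3 and D(12) = 16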
It wouldn't be a Michael Penn video if there weren't a little typo on the way 10:10; sometimes I wonder if they're on purpose ehehehe. Great video
Personally, derivatives make intuitive sense when considering them as limits. Is there a way to understand the arithmetic derivative also in terms of limits?
Why would you ever do that. Does this not make intuitive sense to you?
@@farrankhawaja9856 well it's using the definition of a derivative that employs a limit to give good intuition into the logic of standard derivatives. I understand how to do these 'new' ones, but what the final number expresses is not intuitively obvious to me. So if you could help me, I'm all ears
@@PiEndsWith0 true, like i can now take the arithmetic derivative, but what does it mean to take the derivative of 12? from my current understanding of derivatives, how can a constant have a rate of change?
The only way to do that is to convert every whole number into a function: a product of (x+p) terms, one for each prime factor p (with 0 and 1 just mapping to themselves), so that the function at 0 is the number; then take the normal derivative and evaluate at 0.
@@romajimamulo to do what precisely?
It's not hard to prove that for any integer N we have D(N) = N(1/P1 + 1/P2 + ••• + 1/Pn), where the Pi are the prime factors of N counted with multiplicity, i.e. P1•••Pn = N
Don't forget 0.
@@tomkerruish2982 yes, 0 and 1 must indeed be excluded, and they are covered by the definitions D(0)=0 and D(1)=0 (although they actually follow from the Leibniz rule! D(1)=D(1*1)=D(1)1+1D(1)=2D(1) and D(0)=D(0*2)=0D(2)+2D(0)=2D(0))
@@juanmargalef Agreed! Likewise D(-1)=0 follows by a similar argument.
On closer inspection, your formula for D(N) can be rewritten as D(N) = N(k1/P1+•••+kn/Pn), where N=P1^k1•••Pn^kn. This automatically includes N=1, since k1=•••=kn=0 in this case.
@@tomkerruish2982 good point! Nice way to write a more general formula. And the negatives also follow from the definition: D(-N)=D(-1*N)=-1*D(N)+ND(-1) =-D(N) (0=D(1)=D((-1)(-1))=-2D(-1))
New to me too. Now I am waiting for the half arithmetic derivative from Dr. Peyam
I guess some interesting maps to consider: D(1/x) = [D(1)x - D(x)]/x^2 = -D(x)/x^2, which looks like the power rule again but applies to all numbers, not just primes. In fact, D(x^k) = D(x^(k-1))x + x^(k-1)D(x); repeat this enough and you get D(x^k) = k*x^(k-1)*D(x) for all numbers, not just primes.
this is so interesting, I don't know how abundant simple exotic things like this are, but I would love to see more
What's interesting to me is that the Leibniz rule implies the other rules on the board, except for evaluating D(p).
If you set m = 0, you get D(0) = n*D(0)
If you set m = n = 1, you get D(1) = 2*D(1)
If you set m = n = -1, you get D(1) = -2*D(-1)
Of course, this just means D(-1) = D(0) = D(1) = 0
With that, setting m = -1 gets you D(-n) = D(-1)*n - D(n) = -D(n)
So, anything satisfying the Leibniz rule must also have these properties. Neat!
Excellent finding! I thought that assuming D(0) = 0 was necessary, with D(1) = 0 and D(-1) = 0 then following, but I did come up with the idea of plugging some edge cases into the Leibniz rule.
So out of the 4 assumptions that Michael gave, only two are necessary: Leibniz and the values at primes.
If you assume a function satisfies both the Leibniz rule and linearity [ D(x+y) = D(x)+D(y) ], is that enough rules to uniquely specify the derivative function (the normal/usual one)?
You can also derive the quotient rule from it
If you set m = 1/n, you get D(1/n) = -D(n) / n^2
If you set m = a and n = 1/b, you get D(a/b) = (b * D(a) - a * D(b)) / b^2
Some properties we can deduce:
* D(p^p) = p ^ p
* For all m, n > 1, D(m*n) > 2. Corollary: the number 2 has no antiderivative.
* For all k > 1, D(4*k) > 4*k. Are there more "increasing" patterns?
In general for every p prime we have:
D((p^p) * m) = (p^p) * (m + D(m))
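A quick sketch to check those (my own helper D from the factorization formula): the only fixed points below 20000 are 4, 27 and 3125, and the D(p^p * m) identity holds as stated.

# Sketch: search for fixed points D(n) = n and spot-check D(p^p * m) = p^p * (m + D(m)).
def D(n):
    total, m, d = 0, n, 2
    while d * d <= m:
        while m % d == 0:
            total += n // d
            m //= d
        d += 1
    if m > 1:
        total += n // m
    return total

print([n for n in range(2, 20000) if D(n) == n])   # [4, 27, 3125]

for p in (2, 3, 5):
    for m in range(1, 300):
        assert D(p**p * m) == p**p * (m + D(m))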
really fun stuff! great video as always!
A note on whether this (the original definition) is well-defined or not might be in order. But good stuff.
@@angelmendez-rivera351 That's not maths, though, is it? Imagine if maths lectures went like that.
"Today, students, we will be looking at the proof of the Fundamental Theorem of Arithmetic. First, suppose that the Fundamental Theorem of Arithmetic were false. If that were the case, then mathematicians would not be going around saying that it was true. So we have a contradiction. We must reject our supposition and conclude that the Fundamental Theorem of Arithmetic is true. This concludes our proof of the Fundamental Theorem of Arithmetic. Q.E.D."
@@angelmendez-rivera351 The definition of the function is not itself a theorem. However, before trying to investigate how a function behaves, you first need to check whether it is well defined. That is all that the OP is saying.
Mistake at 10:10. Should have been a D(n) instead of a D(m). But the end result is correct, because in the next line you write the correct thing despite the error.
isn't stating the fourth property redundant? D(-n) = -D(n) follows easily from the product rule: if you set n and m equal to -1 you get D(-1*-1) = D(-1)*-1 + D(-1)*-1, which simplifies to D(1) = -2D(-1), and as D(1) = 0 and -2 has a multiplicative inverse, this means D(-1) = 0.
Then using the product rule with m set to -1 you get D(-n) = D(n)*-1 + D(-1)n = -D(n)
edit: explicitly stating the quotient rule is also redundant. D(mn) = D(m)n + D(n)m; set n = 1/m and we get 0 = D(m)/m + D(1/m)m, which implies -D(m)/m^2 = D(1/m). So D(n/m) = D(n)/m + D(1/m)n by the product rule, and using the previous observation on the value of D(1/m) we get
D(n/m)=D(n)/m-nD(m)/m^2 = ( D(n)m-nD(m) )/m^2
Yes, you could. I think in a math textbook you could omit stating property 4, but it's useful in a video nonetheless.
Sometimes being redundant is better than expecting people to instantly deduce all that without even a prompt to try to prove it.
-1 can be considered a prime number so perhaps this just makes it clearer how to deal with negatives.
Yes, property 1 is also redundant from product rule. You only need properties 2 and 3.
@@IanXMiller I agree that -2, -3, etc. can be considered prime numbers, but -1 is a unit so typically it is not considered a prime element.
1:45, from the definition alone it isn't immediately clear that D is well-defined. One could decompose 12=3*4 and calculate D(3*4); will Leibniz always yield the same result irrespective of factorisation of the number? Would be a nice exercise to prove it (haha; homework!).
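I tried the homework numerically; here's a sketch that computes D(n) along every possible way of splitting off a factor and checks that all the splittings agree (helper names mine, small n only):

# Sketch: the set of values you can get for D(n) by applying the rules in any order;
# well-definedness shows up as this set always having exactly one element.
from functools import lru_cache

@lru_cache(maxsize=None)
def D_all_ways(n):
    if n in (0, 1):
        return frozenset({0})
    if all(n % d for d in range(2, int(n**0.5) + 1)):   # n is prime
        return frozenset({1})
    values = set()
    for a in range(2, n):
        if n % a == 0:
            b = n // a
            values |= {a * db + b * da for da in D_all_ways(a) for db in D_all_ways(b)}
    return frozenset(values)

for n in range(2, 200):
    assert len(D_all_ways(n)) == 1

print(D_all_ways(12), D_all_ways(60))   # frozenset({16}) frozenset({92})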
Stuff like this is why I love number theory.
This function is basically doing the derivative stuff on the prime factorization of a number as if the primes were the variables. And the quotient rule is simply the product rule applied to the prime factorization of a rational number, which is the same as the prime factorization of an integer but allowing negative exponents. And since every rational number has exactly one such prime factorization (any common primes in the numerator and denominator cancel out), it is not surprising it works. But it is very interesting. I wonder what might be the applications. Maybe the Riemann hypothesis? And what can we say about the antiderivative (other than that there is more than one)? :D
And btw the straightforward formula for arithmetic derivative of q with prime factorization
q = s * PRODUCT[over n]OF(p_n ^ a_n)
where s is one of {-1, 0, 1}, i.e. a "sign-or-nullity", p_n is the nth prime and a_n is an integer, is
D(q) = q * SUM[over n]OF(a_n/p_n)
since using rule 4 (for negative q, and rule 1 for q=0), then rule 3 (with the generalized power derivative rule), then simplifying, and then factoring out abs(q) returning it its sign, we get
D(q) = D(s*PRODUCT(p_n^a_n)) = s * D(PRODUCT(p_n^a_n)) = s * SUM(a_n * abs(q)/(p_n^a_n) * p_n^(a_n-1)) = s * SUM(a_n * abs(q) / p_n) = q * SUM(a_n/p_n)
which suggests the sum of prime reciprocals D(q)/q (which we could call the derivative quotient or something) is the interesting part, since (1) it somehow turns multiplication into summation, so it is somewhat like an arithmetic logarithm, (2) it is a rational number that preserves the integer-ness of integer q, and (3) it allows a new factorization of every rational number into a sequence of derivative quotients, and seems to have other interesting properties.
And I also believe the function would be well defined for all numbers that can be written as prime factorizations with rational exponents. What are they called, algebraic numbers?
Not sure how useful this is early in the video, but I love it! 😁😁
How do we know that D(n) is well defined? I don't think it is obvious that every way of breaking off factors of 100! and plugging them into the D function leads to the same final result after all that algebra.
This can be proved in a manner alike to the rational case shown in the video, using two arbitrary products that form a number, then show that both of them are equal at the end.
Let ab = xy, all natural numbers, a and b not zero.
Show that D(ab) = D(xy).
D(ab)
= aD(b) + bD(a)
Note that:
a = xy/b
b = xy/a
= a D(xy/a) + b D(xy/b)
= a [ aD(xy) - xyD(a) ] / a^2 + b[ bD(xy) - xyD(b) ] / b^2
= D(xy) - xy/a D(a) + D(xy) - xy/b D(b)
= 2 D(xy) - (xy/a D(a) + xy/b D(b))
= 2 D(xy) - (b D(a) + a D(b))
= 2 D(xy) - D(ab)
So D(ab) = 2 D(xy) - D(ab), which rearranges to 2 D(ab) = 2 D(xy), hence
D(ab) = D(xy)
Therefore, the system is well defined, QED.
Note: It is fine to use the "quotient rule" in this proof as it is derivable from the Leibniz product rule, which I leave as an exercise to the reader. :P
@@imacds leaving the proof as an exercise to the reader is kind of what I was complaining about
Great video
I've seen a few comments claiming that certain parts of the definition are "redundant" because if you assume that this arithmetic derivative is defined in certain place, you can derive what the definition must be there. The thing is, that's not how definitions work. According to that logic, the definition of a negative exponent is also redundant, and so is the definition of zero factorial. However, that's not to say that the definition given in the video doesn't have redundancies. Here's how I'd define it to get rid of the redundancies:
D(0) = D(-1) = 0
D(p) = 1 when p is a prime
D(1/n) = -D(n) / n^2
D(mn) = D(m)n + mD(n)
He doesn't define the extended version of the function, with domain and codomain the rationals, until the end. So replacing D(-n) = -D(n) with what you have is more concise, but it only makes sense if you introduce the idea starting with the extended version, which he doesn't do. So I think any redundancies make sense as part of the video.
Edit: I’m a little dumb and see more clearly the changes you made. It’s pretty clever. You can get D(1) through D(-1*-1) and the product rule. So you only need the D(1/n) rule when extending to the rationals. So even for the integer-defined function you managed to shave off a rule. I agree with you also that definitions can (and should if it helps intuition and readability) have redundancies, but it’s a nice exercise to spot them and remove them.
@@angelmendez-rivera351 You CAN define the factorial in a way that makes 0! obvious, but it's not always defined in that way. My first introduction, and I believe many people's first introduction, was just the relation n! = n(n-1)! over the natural numbers with the special case 1! = 1, where you stop recursing. It's also usually brought up in the context of calculating permutations, where each n represents objects. There are other ways to define the same behavior on a more general domain, like through the gamma function, which applies to all real numbers, but that is a far less common introduction. So there is no one canonical definition; it depends on context and necessity, and because of that most people get introduced to the factorial function in its simplest form, which does not define 0! except as a special case.
@@angelmendez-rivera351 When the factorial function is first taught, isn't n! generally defined to be the product of all natural numbers between 1 and n, inclusive? That *does* need a special case for when n is 0. You could also define n! for integers n>=0 to be equal to Γ(n+1), but that essentially just arbitrarily restricts the domain of the gamma function, in addition to potentially being a circular definition, depending on how you define e. Is there some other non-piecewise definition of n! I'm missing?
@@angelmendez-rivera351 "The" definition of n! is not well-defined per se. Often we see n! = n(n-1)(n-2) ... (3)(2)(1) as our first definition, which can also be said verbally "product of the first n positive numbers" or like someone below "product of all positive numbers less than or equal to n". With that (first) definition, it is not clear what 0! is, since 0 is not a positive number. You could say it a bit differently like someone said below, or try to say something like "it's an empty product" which is "vacuously equal to 1". Though many people would not find that intuitive or satisfying IMO.
If you use a recursive definition then yes, it's more clear how to derive that algebraically. If you use the gamma function to interpret the factorial, then it's because the integral from 0 to infinity of e^(-x) dx is equal to 1.
Lastly, and in my opinion the most pedagogical choice for the random human being: if somebody asked me on the street why 0! is equal to 1, then I would tell them for an integer n greater than or equal to 0, that n! is the number of ways to sort n people into an ordered line. In that view, 0! = 1 is because there is exactly one way to sort 0 people into an ordered line, that is to do nothing and sort nobody, b/c you don't have anyone to sort :)
@@angelmendez-rivera351 "The product of all the positive integers less than or equal to 0 is equal to 1. Therefore, 0! = 1."
Great! So (-1)! = 1 too, and (-2)! = 1 and... Oh wait, you said it only works for non-negative integers. Why? If it's so obvious that 0! = 1 follows from the definition, why stop there?
The problem is that you are reasoning backwards. Your definition makes sense for non-negative integers because we have the convention 0! = 1. Your definition must be artificially restricted to only the non-negative integers, otherwise we have x! = 1 (for x≤1) and x! = floor(x)! (for x>1). The definition of the factorial given by other commenters above doesn't need to be artificially restricted. We just define it sequentially: 1 = 1!, 1*2 = 2!; 1*2*3 = 3!, etc. This definition automatically yields factorials for positive integers only, without any messiness.
I can't find a single mathematics resource online which does not define the factorial for positive n first, then list 0! as a special case which is defined *separately* as being equal to 1. Why are these resources bothering to do that, if your definition is so watertight as not to need this? Can you direct me to a source which defines the factorial as you described? This is not a rhetorical question by the way. If you got your definition from somewhere else, I would really like to see it.
How did you find out about this? Any bibliography for this interesting topic?
If a derivative of a continuous function expresses its slope at each point, what does a derivative of an integer (or rational) number express? Is there a geometric application for this derivative?
Arithmetic derivative on rationals is not continuous, see J. Integer Seq. 22 (2019), no. 7, Art. 19.7.4.
It is known that the arithmetic derivative on rationals is not continuous, see "Arithmetic Subderivatives: Discontinuity and Continuity", Journal of Integer Sequences, Vol. 22 (2019), Article 19.7.4
Arithmetic derivatives are quite mysterious. All I know is that Buium uses the Fermat arithmetic derivative, i.e. D(n) = (n^p - n)/p for a prime p, in his study of arithmetic differential equations. Pretty interesting stuff, but quite difficult to understand.
I know what the meaning of the derivative of a number is. Regard the number as an N-dimensional spatial object. The derivative of it is an (N-1)-dimensional object, calculated by adding up the sizes of all its possibilities one dimension lower. By default: D(p) = 1 and D(0) = D(1) = 0. Now take D(6) = D(3*2) = 5. Then 6 is the area of the rectangle (2,3) and 5 is the combined length of its two distinct sides, 2 and 3. Now take D(30) = D(2*3*5) = 31. Then 30 is the volume of the 'cube' (2,3,5) and 31 is the total area of its distinct sides (2,3), (2,5) and (3,5). Lastly, take D(210) = D(2*3*5*7) = 247. In this last example 210 is the 4D-volume of (2,3,5,7) and 247 is the total volume of the 'cubes' (2,3,5), (2,3,7), (2,5,7) and (3,5,7). I think you can extend this idea to all other examples.
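That picture is easy to compute with; a tiny sketch (names mine): list the prime "edges" of n with multiplicity and add up the volumes you get by deleting one edge at a time.

# Sketch: D(n) as the total volume of the facets of the "box" with prime edge lengths
# (counted with multiplicity; for squarefree n these are exactly the distinct facets above).
from math import prod

def prime_edges(n):
    edges, d = [], 2
    while d * d <= n:
        while n % d == 0:
            edges.append(d)
            n //= d
        d += 1
    if n > 1:
        edges.append(n)
    return edges

def D(n):
    edges = prime_edges(n)
    return sum(prod(edges[:i] + edges[i + 1:]) for i in range(len(edges)))

print(D(6), D(30), D(210))   # 5 31 247, as in the comment above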
The fixed points seem to be of the form n^n, unless I missed something.
Also 0.
4^4 = 2^8, and D(4^4) = 8*2^7 = 2^10 ≠ 4^4
n^n will be a fixed point when n is prime, not when n is another prime power, and I haven't thought about when n is more complicated.
@@gregoryzelevinsky9837 Ah yes, you're very much correct!
Hang on .. (06:00) .. before we *extend* this alleged function D : Z --> Z to anything, we should check that it is well-defined. There are many ways to factor a number as m * n. It's not immediately obvious that you get the same result no matter how you calculate, say, D(60). In fact it's not even clear from the original bullet points what "step 1" is supposed to be.
This kind of reminds me of prime factorization. We lean on it a lot in crypto: public/private key exchange, i.e. asymmetric cryptography, uses really big prime numbers, and its security rests on how hard it is to factor their product.
So the reverse operation would be an arithmetic integral. Is there a way to calculate this integral, and what would its properties be? Like a one-to-one correspondence between the number and its integral? Also wondering if one can discover some intuition for these concepts... Like the derivative is the gradient of the primes used in the number and the integral is some sort of area under those primes...
You finally got to this topic! Worked on this topic for my undergrad.
Since the derivative is defined for a dense field (the rationals), it's an interesting question whether or not it's continuous.
Since it is not linear under addition I would guess not
I had the same question!
If an infinite product converges, then you can arithmetically differentiate. But will that converge?
Is the concept of continuity well defined for such a field? I didn't even know that
@@hoodedR some (most?) convergent sequences of rational numbers converge to an irrational number, but that's fine.
We have two questions to ask, basically.
Given a sequence of rational numbers a_n, does the sequence converging (being a Cauchy sequence) imply that the sequence D(a_n) also converges?
If the answer to the first question is yes, we can ask a second one: if the sequence converges to a rational number a, does that imply that the sequence D(a_n) converges to D(a)?
@@tcoren1 very interesting 🤔 I also wonder one other thing about this function. D(n) = sum(n/p_i) = n*sum(1/p_i) for any natural number n where n = product(p_i) and all p_i are primes. I wonder if a similar meaning can be assigned to D(m/n) for m/n being any rational number.
As I write this I notice that D(n) is a little similar to the Euler totient function. Maybe there's something there too
The extension to the set of rationals is good, because it then lends itself to the idea of an "arithmetic antiderivative", such that if we have a number m^n, the arithmetic antiderivative D-1(m^n) = (m^(n+1))/(n+1). Note that before, for the map of integers to integers, this inverse function would not work, as n+1 might not be a divisor of m^(n+1), and thus would lead to a rational result rather than an integer; but with the extension to the rationals proven with the quotient rule being well-defined, we can just set the domain and range of the antiderivative as going to and from the rationals.
Iirc I read that -x ln(x) also satisfies something like the Leibniz rule, and that this can be related to information-theoretic entropy,
and maybe there could be a connection here?
Also, if instead of defining each D(p) to be 1, it seems like the definitions all still work if you leave D(p) to be unspecified constants c_p .
Another idea: if you take a non-standard model of the integers, and let p be some particular non-standard prime, and suppose that you have D send all other primes to 0, but have D(p)=1, then you could like, uh... well, it would really look like differentiation then,
and also, I think it might(?) be additive (when only adding a standard number of numbers together)
Because like,
Then for any standard n, D(n) would be 0,
Hm... actually, no, I guess it probably wouldn’t be additive?
Because like, the factorization of p+1 would have,
Well, p+1 wouldn’t be divisible by p,
so if all other primes are sent to 0, the D(p+1) would be 0, rather than D(p) + D(1) = 1 + 0.
But, what if we only say that D sends standard primes to 0?
Then, some of the prime factors of p+1 may be non-standard primes,
and perhaps this would be enough to make D(p+1)=1 ?
Depending on what D sends these non-standard primes to.
Well, D(p+1)= 2*D((p+1)/2) + D(2)*((p+1)/2)
that doesn’t help..
Ok but say we just say that D(2)=D(3)=D(5)=0
and then we look for some standard prime p greater than these such that p+1 has factors other than 2,3, and 5,
and then see what D has to send other things to in order to make D(p+1)=1 .
Ok, take p=41,
Then p+1=42=2*3*7
So D(42)=2 D(21) + 21 D(2)
= 2 (3 D(7) + 7 D(3)) + 21 D(2)
= 21 D(2) + 14 D(3) + 6 D(7)
= 0 + 6 D(7)
so, if D(7) is chosen to be 1/6 , then that works...
What about D(p+2) though?
Well, in this case 43 is also prime, so D(43)=1 would be the result.
And D(p+3)=D(44)= D(4*11)= 4 D(11) + 11 D(4),
and so D(11)=1/4
I suspect that soon problems would occur,
but, for at least a few steps, it can work out,
and I would expect that if you pick a big enough prime, there is no fixed number limit (not depending on the size of the prime) where it will always fail within that many steps,
So, maybe there *are* functions like this where, when taking in a nonstandard natural number, it acts additively when only adding a standard number of terms together?
... but, maybe it couldn’t be defined internally, because otherwise one could use induction to show that 1+1+...+1 (with p copies of 1) would go to zero?
Or at least, couldn’t show internally that it is additive for pairs....
Ok, well, what about a weaker result of, “D( p + n)=1, whenever n is standard”?
(Which can’t even be expressed internally, as can’t speak of which numbers are standard within the language of the nonstandard model)
Can this be extended to the algebraic numbers? Square roots should be fairly straightforward, but what about other numbers. What is the arithmetic derivative of i?
It might be interesting to connect the existence of a full derivative (with Leibniz and the additive property) with transcendental elements in the ring/field.
Have you done videos on the fundamental theorem of arithmetic, algebra, or calculus?
What are the practical uses of arithmetic derivatives?
Wait, what about irrational numbers?
Here's my best attempt at √2:
2 = √(2)•√(2)
D(2) = D(√2)√2 + D(√2)√2
1 = D(√2) • (√2+√2)
D(√2) = 1/(2√2)
I believe the same argument can be made for most irrational numbers, and in general D(√p) = 1/(2√p) for any prime p
Not possible to extend it to irrationals
@@anshumanagrawal346 Not all of them, but it can be extended to roots of rational numbers.
@@tomkerruish2982 It can't be extended to the set of Real Numbers, that's what I said
@@anshumanagrawal346 You were nonspecific. It can be extended (uniquely) to some irrationals. Possibly one could use the axiom of choice to extend it to all of them, but I know of no proof either way.
Also pretty interesting are dual numbers and, similarly, hyper-dual numbers: numbers that follow the rules of the derivative for +, -, * and /
I was checking here, and you can derive that d(1) = d(0) = 0 from the product rule as a consistency requirement by checking d(0 × a) and d(1 × a).
You can also derive the quotient rule for rationals by examining d(a × 1/a) and requiring that it is equal to d(1). This will give you d(1/k) = -1/k² d(k). From there you can easily derive the expression for d(a/b).
You can also find that d(-n) must be equal to -d(n) by considering d(-1 × -1) = d(1) and finding that d(-1) must be zero. Then you can just use d(-n) = d(-1 × n).
Also, it seems like you can define an infinite family of those functions by requiring the product rule and defining what value the function will have for each prime number. All the other rules follow from that.
some fixed points:
D(p^n) = n*p^(n-1); setting this equal to p^n forces n = p, which means p^p is a fixed point for all primes p.
D((pq)^n) = p^nD(q^n)+q^nD(p^n) = n(p^nq^(n-1)+q^np^(n-1))=(p+q)n(pq)^(n-1)
if we set this to be equal to (pq)^n we get:
n(p+q) = pq, i.e. n = pq/(p+q)
in a similar way we could get arbitrary fixed points from any number raised to a power, though generalizing that much becomes difficult to do from the phone ;p
is this related to p-adic numbers?
It is an open question whether D is p-adically continuous on rationals, see J. Integer Seq. 23 (2020), no. 7, Art. 20.7.3.
I love you, Michael. Greetings from Brazil!!!
My favourite derivative is the extension to the derivative of a context free grammar. You can then parse text by taking a derivative.
I am not familiar with this. Can you provide a link?
Makes me wonder if this is connected to anything or if it just works. If so it wouldn't hurt to know a bit more. (It might hurt depending on how much of a rabbit hole it is I guess...)
You actually only need the 2nd and 3rd properties listed on the right. With those you can prove that d(0)=d(1)=0 and d(-n)=-d(n).
Some input on the video: the domain and range were defined to be the integers, but a rational number with nonzero fractional part doesn't fall in the integers, so it would be incorrect to have those values defined there. You need to change the domain and range of the function to all rational numbers rather than the integers.
While the Leibniz property on its own is quite powerful, the linearity of functional/polynomial derivatives really just cannot be beat.
Might be neat to see the arithmetic derivative applied to integer equations, along with some q-analogs thrown in just for flexibility.
this is actually pretty cool, i wonder if we can find the inverse of this function and also extend it to manifolds
I think this one gives us a lot of things for primes
The arithmetic integral is going to give many answers for the same number but imo you could just pick the lowest.
So rules 1-4:
D(0) and D(1) are both 0.
D(P)=1
D(mn)= mD(n)+nD(m)
WRT n, D(n) is an odd function.
But where does rule 4 (the "odd function" rule) come into play?
Cool definition of that arithmetic derivative. But does this have any further applications?
This is just plain fun. Very cool.
Is this derivative continuous when extended to Q? If so, it should be extendable to R, right?
Arithmetic derivative on rationals is not continuous, see J. Integer Seq. 22 (2019), no. 7, Art. 19.7.4.
To derive the quotient rule, once you have the power rule, write a/b as a*(b^-1) and use the product rule and power rule to simplify it out. One you end up with (b*d(a)-a*d(b))/b^2 then do two things. Plug in c/1 and be sure you get d(c) so it's consistent with the integer version (it will be). Then work out d(ca/cb) to be sure it simplifies to d(a/b).
You cannot strictly do this until you have done some work to show that negative powers are well behaved under the power rule. If I am not mistaken we only proved that for positive powers
@@trueriver1950 Yeah, it is a point you have to take a "leap of faith" and just try it. When you derive the formula and be sure d(c/1) = d(c) and d(ca/cb) = d(a/b) it at least gives you a warm feeling that it can be worked with.
10:10 should be D(n) instead of D(m) but then get corrected at the next step?
This has got to be one of the best videos in a while! I absolutely loved it! Can't wait to see more bizarre generalizations and their extensions/applications!!
Is it possible to extend to real numbers like that? Suppose that a_n, a sequence of rational numbers, converges to a real number r; then D(r) = lim(D(a_n)), if it exists. Claim 1: D(e) >
continue D(e)
I made a mistake: D(sqrt 2) = sqrt(2)/2 and D(e)
Can this be extended in any reasonable way to a function on the reals?
What I think would be worth investigating is how this behaves in neighborhoods around integer values
For example take D(2)=1
If we investigate
D(2+1/p) for primes p we get
D(2+1/p)=
D((2p+1)/p)=
(D(2p+1)p-2p-1)/p^2=
D(2p+1)/p-2/p-1/p^2
So looking at the limit as p goes to infinity the 2/p and 1/p^2 terms vanish leaving D(2p+1)/p
Based on my calculations this seems to approach 0 from below, so it would seem that there is some really interesting behavior happening around the integers. I'd be curious whether there are any values near 2 which give a derivative near 1.
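Here's a quick sketch to tabulate those values for the first few primes (helper D is mine); the sign and size of D(2 + 1/p) depend a lot on the factorization of 2p+1:

# Sketch: D(2 + 1/p) = D(2p+1)/p - 2/p - 1/p^2, using the integer derivative of 2p+1.
from fractions import Fraction

def D(n):
    total, m, d = 0, n, 2
    while d * d <= m:
        while m % d == 0:
            total += n // d
            m //= d
        d += 1
    if m > 1:
        total += n // m
    return total

for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 101):
    value = Fraction(D(2 * p + 1) - 2, p) - Fraction(1, p * p)
    print(p, value, float(value))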
This is the first time I've seen this kind of function.
D(1) = 0 is actually a consequence from the Leibniz rule, it shouldn't be in the definition.
In fact, you don't even need D(0) = 0, because it also comes from Leibniz and the fact that D(0) is finite because it's integer by definition.
You present 4 rules of the arithmetic derivative, but actually, 1 and 4 follow from 2 and 3.
The first one is easy:
D(1·1) = D(1) = 2·D(1), so D(1) is equal to zero. Plug in 0 instead of 1 and you'll get the same for D(0).
For the fourth rule, we have that D(-1·(-1)) = -2D(-1) = D(1) = 0, so D(-1) = 0. With this, we can prove that D(-n) = D(-1·n) = D(-1)·n + D(n)·(-1) = -D(n)
The rational expansion also follows from 2 and 3:
D(1) = D((1/n)·n) = D(1/n)·n + D(n)·(1/n) = 0 ----> D(1/n) = -D(n)/n^2
D(m/n) = D(m·(1/n)) = D(m)·(1/n) + D(1/n)·m = (D(m)n - D(n)m)/n^2
I'll add without proof that the power rule can be extended to rational exponents, with similar proofs to the ones above; it can also be expanded to the complex world (not all of it), and it's possible to show that this function is not continuous.
So... before we jump to rationals... how do we know that if AB=PQ then D(A) B + A D(B)=D(P) Q + P D(Q) ? Because it seems to me we are using it.
Amusingly, I'm going through Apostol's "Introduction to Analytic Number Theory", and there is another derivative given there. (It isn't called an arithmetic one, and it's for functions not numbers, but it seems a natural name, since it's defined for "arithmetic functions:" functions f : N -> C (or a subset thereof).)
In this case the derivative defined is much simpler: f'(n) = f(n) log(n).
It would be confusing to use the same term for different things.
@@omp199 It is confusing. Happens all the time lol.
Small mistake in your final proof. Second line, second term in the numerator should be -m(D(a)n+aD(n)) not -m(D(a)n+aD( M )). You fixed it in the following line. I just wanted to point it out for those who may get confused at that line.
Does this operator have a closed form? I presume you can get it from the fundamental theorem of arithmetic
Is D also well-defined on irrational numbers like pi = 2*(2/1)*(2/3)*(4/3)*...?
Or maybe D(2) = D(sqrt(2)*sqrt(2))?
D(sqrt(2)*sqrt(2)) may be fine but infinite series unlikely
D(n^(1/k)) is probably well-defined for nonnegative integer n and positive integer k.
Consider a unique factorization of n = prod p_i^a_i
we have
n * sum(a_i/p_i) = D(n) = D(n^(1/k)*n^(1/k)*...) [k times] = k*n^(1-1/k)*D(n^(1/k)) and we have
D(n^(1/k)) = n^(1/k)/k * sum(a_i/p_i)
Special cases below:
Consider 1 = D(p) = D(sqrt(p)*sqrt(p)) = 2*sqrt(p)*D(sqrt(p)) -> D(sqrt(p)) = 1/(2*sqrt(p))
Consider a composite number p*q where p and q are prime:
p + q = p*D(q) + q*D(p) = D(pq) = D(sqrt(pq)*sqrt(pq)) = 2*sqrt(pq)*D(sqrt(pq))
From this we have
D(sqrt(pq)) = (sqrt(p/q) + sqrt(q/p))/2
Consider a composite number p*q*r where p, q, r are prime
D(sqrt(pqr)) = D(sqrt(pq)*sqrt(r)) = sqrt(pq)*D(sqrt(r))+sqrt(r)*D(sqrt(pq))
= sqrt(pq)/(2*sqrt(r)) + sqrt(r)*(sqrt(p/q) + sqrt(q/p))/2
= (sqrt(pq/r)+sqrt(qr/p)+sqrt(rp/q))/2
pq+qr+rp = D(pqr) = 2sqrt(pqr)D(sqrt(pqr))
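A quick floating-point sanity check of those square-root formulas, assuming D(sqrt(n)) = D(n)/(2*sqrt(n)), which is just n = sqrt(n)*sqrt(n) plus the product rule (helper D is mine):

# Sketch: check the closed forms for D(sqrt(p)), D(sqrt(pq)), D(sqrt(pqr)) numerically.
from math import sqrt, isclose

def D(n):
    total, m, d = 0, n, 2
    while d * d <= m:
        while m % d == 0:
            total += n // d
            m //= d
        d += 1
    if m > 1:
        total += n // m
    return total

def D_sqrt(n):
    return D(n) / (2 * sqrt(n))

p, q, r = 3, 5, 7
assert isclose(D_sqrt(p), 1 / (2 * sqrt(p)))
assert isclose(D_sqrt(p * q), (sqrt(p / q) + sqrt(q / p)) / 2)
assert isclose(D_sqrt(p * q * r), (sqrt(p * q / r) + sqrt(q * r / p) + sqrt(r * p / q)) / 2)
print("all three match")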
Fixed points: let the p_i be the prime factors of n counted with multiplicity; n is a fixed point iff Σ 1/p_i = 1
If your input is a fraction in reduced form and not an integer, is the output always a non-integer fraction?
Good one! I'll write some code that computes D(x) with this set of rules; seems simple enough.
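Something like this seems to do it; a minimal sketch that applies the rules directly rather than the closed-form formula (helper names mine):

# Sketch: D computed straight from the stated rules: primes give 1, composites split
# off a factor and use the Leibniz rule, negatives use D(-n) = -D(n), and rationals
# use the quotient rule.
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def D_int(n):
    if n < 0:
        return -D_int(-n)
    if n in (0, 1):
        return 0
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return d * D_int(n // d) + (n // d) * D_int(d)   # Leibniz rule
    return 1   # n is prime

def D(x):
    x = Fraction(x)
    m, n = x.numerator, x.denominator
    return Fraction(n * D_int(m) - m * D_int(n), n * n)   # quotient rule

print(D(12), D(60), D(Fraction(-3, 4)))   # 16 92 1/2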
Fixed points are p^p where p is prime
I wonder if this could be used to define a mapping from function space to rational space
Last slide, line before last, there's a one-line typo where you wrote D(m) but clearly meant to write D(n). Just in case anyone else sees it and was wondering about it.
Is there an arithmetic integral (or anti-derivative) as well?
I don't think it would be well defined for most values.
It would be worth explaining why the computation doesn't depend on the order in which a number is decomposed
D(a)b + aD(b) = D(b)a + bD(a)
It commutes.