🌟🌟To try everything Brilliant has to offer-free-for a full 30 days, visit brilliant.org/michaelpenn. The first 200 of you will get 20% off Brilliant's annual premium subscription.🌟🌟
"Leonhard Euler was kind of a mathematical outlaw... kind of a mathematical gangster." - Ed Frenkel
He was a unique mathematical genius.
Now I have an image of old Lenny going around saying to people "Nice theorem you have there, shame if someone were to disprove it..."
@gcewing Wouldn't that be more like math police? Math gangster would be more like completely disregarding all rigors and logics and yet still getting somewhat valid results.
@@arduous222 SOMEWHAT??
@@gcewing "Nice theorem you hae there. I already proved it."
Crazy that Euler still wrote 2-3 pages a day of published material while blind. He dictated it all to his scribe. Major contributions to geometry, number theory, graph theory, and calculus. Just an absolute unit.
His scribe must have been quite a math whiz themselves. Imagine having cutting-edge math explained to you verbally, and somehow understanding it well enough to reproduce it in writing.
@@KomradJenrol According to a biography, "the scientists assisting Euler were not mere secretaries; he discussed the general scheme of the works with them, and they developed his ideas, calculating tables, and sometimes compiled examples". From googling, one of them was Nicolas Fuss - he has his own wiki page so seems to have been an accomplished mathematician in his own right, and later married one of Euler's granddaughters.
@@Alex_Deam wow. That's impressive. Not only working with and for one of the all-time greats, but then getting to marry his granddaughter. Mr. Fuss did well for himself, even if his name is not common knowledge.
I cannot imagine starting a math test by defining "Let dx = 0 so that 0×dx/dx = a, a in R*".
I would have been graded with d(Fail) = 0 or something.
😂
Wow, Euler came really close to discovering the modern technique of automatic differentiation with dual numbers.
All calculus, from Leibniz's infinitesimals, to Euler, to limits, to nonstandard analysis, to dual numbers is essentially the same thing--taking a small quantity that we disregard at the appropriate moment.
@@TheEternalVortex42 And then people try to claim the newest version is "more rigorous" than the old…
@@jursamaj I honestly don't see why such claim would be made false by the fact that the underlying ideas and intuition have a lot in common. Am I missing something here?
@@jursamaj I think the key decider is not a kind of "universal truth" if such a thing exists - but peer acceptance of the claim. Whoever made the claim. At that time it seemed math research took an empirical form. Whoever discovered something new needed to defend it from attack and maintain their claim on ownership. A bit like militarized intellectual ownership?
Euler's funding provider may have taken ownership as well?
Math experts were often employed in royal/imperial courts, much as a strange, exotic object might be owned. Time and place and all that...
@@jursamaj because it absolutely is.
Michael, there’re so many math videos online but I rarely see anything related to history of mathematics. Maybe you should consider a series that talks about history of calculus. I know that it took a long time for mathematicians to formalize the concept of limit. It’s always important to bear in mind that even something as rigorous as mathematics requires time and effort to crystallize new ideas.
Best comment.
Infinitesimals as square roots of zero are not all that sketchy. dx*dx = 0 with dx > 0 lets you divide by dx. As soon as you get into Clifford algebra, you end up recreating imaginaries from directions in space. All of geometry comes from objects that square to 1, or 0, or -1. It becomes natural to accept infinitesimals as square roots of 0.
My teacher exclaimed "No!" in horror when I said something that indicated that "different kinds of zero" was the intuition I'd constructed. If only YouTube had existed back then, I could have claimed to be in the company of none other than Euler himself.
I'm definitely in the delta-epsilon standard calculus camp, but there is definitely an indoctrinated prejudice against the now-rigorous notions of non-standard analysis in most mathematicians, so it makes sense that your teacher was so horrified by your heretical viewpoint!
Different kinds of zeros doesn't look intuitive to me. Different kinds of infinitesimal is the way I would explain this. As a later comment mentioned, non-standard analysis formalizes this very elegantly.
Different types of zeros seem more like a construct or aid for abstract algebra, but we can also argue that if there can be different types of infinity, then why not different kinds of zeros.
When I took calculus one of the first proofs was that there is just one zero.
Especially when you start needing the dy/dx notation for differential equations, where it seems intuitive that you can do algebra with the differentials without getting strange results.
It's interesting to see that we are so extremely close here to our modern understanding of differentiating via limits. The only thing that is _really_ missing, in terms of ideas, is a good definition of dy.
If we have Δy = f(x+Δx) - f(x), all we are missing is the limit in order to take Δy and Δx to be small. Of course, in order to have something that survives in the limit, we need to normalise each side by a Δx too. And then we are there.
In fact, to switch notation from Δ to δ, we probably have to have a step of understanding that we "really" mean 0 when we say δy or δx. And the way we get to that zero is by taking the limit.
Well, limits were basically introduced in order to make the differential approach rigorous. So it makes sense that it is similar. Although in the 20th century we also developed nonstandard analysis which lets you use differentials in a rigorous way without limits.
Cauchy is the one who made the breakthrough.
That was, perhaps, a geometrical approach. The secant line becomes the tangent line when the 2 points x and x+dx are close enough. Then you have Delta y / Delta x = dy/dx.
@@atzuras I hazard a guess that you are spot-on here. Maybe the leap forward into peer acceptance was a way to explain in non-graphical terms.
Perhaps sliding a secant (equidistant how? on x or on f(x)?) from a point toward the tangent at that point is, graphically, just what writing "taking the limit as dx tends to zero" means.
The more I dwell on this reply, the more it seems analysis replaced graphical approaches with a peer-accepted analytic approach.
@Alan-zf2tt I would use the following approach: f(x+dx) = f(x) + dy (secant line), then f(x+dx) - f(x) = dy, and dy*dx/dx = y'*dx, so f(x+dx) - f(x) = y'*dx, and from here you get the tangent line y' and the derivative. You only need to assume that there exists a dy which is small, so a small dx must exist too, and then delta(x) and dx are the same. (Still, some work to do to rigorously introduce the limit definition.)
This is the sort of thing that brought Bishop Berkeley out in hives, and which he attacked in his book "The Analyst", pointing out the non-rigorous nature of the arguments used in calculus during the 18th century. Euler's df and dg, which both are and aren't zero, are examples of what he called "ghosts of departed quantities".
Wasn't Berkeley primarily attacking Newton?
@@TheEternalVortex42 According to the Wikipedia article on "The Analyst", he was tolerant of Newton (who was religious as well as mathematically minded) and was more opposed to Edmond Halley.
Euler gets the power series for trig functions by proving (by angle addition and induction) that (cos(x) + i sin(x))^n = cos(nx) + i sin(nx). Now let n -> infinity and x -> 0 so that nx = z stays constant, and expand as a binomial series. See Introduction to the Analysis of the Infinite, Euler Archive.
New territory? I have been following these for a few months now and this seems an interesting development: a "journey into past mindsets and ways of doing things".
These may be strange compared to modern methods, but with Euler's stamp of approval it gives a credence of sorts to the exposition of the techniques and mindsets of the time.
Thanks for another great video. It's nice to be able to view these things in a different way. Could you also, if not already, do a video on how the expansion of the natural logarithm was known before calculus was known? I also understand that the natural logarithm was defined before it was understood as an inverse function to the exponential, so was the expansion known also before this?
Interesting topic, thanks for sharing! Love seeing a glimpse into the thought process from some of the greats in mathematics.
Neat. That little explanation at the end felt like another intuitive way of thinking about taking infinitely thin rectangles when calculating areas under curves (integrals).
Very interesting. Thank you so much!
I wonder how this would interact with the combinatoric notion of "up" that is positive, yet not greater than zero. In fact, it's "confused with" zero, meaning it is not greater than, less than, or equal to zero. There's also "down", which is negative and less than up, yet still confused with zero.
Euler was obviously an amazing mathematician, but this looks like a high school student trying to gain some marks by rearranging things in various ways on an exam they have not prepared enough for :D
Interesting! My intuition regarding the quotient rule was to make a common “denominator” for (f+df)/(g+dg) - f/g, which spits out old buddy (gdf - fdg)/(g^2) after only a few steps. We do get an extra gdg in the denominator, but that’s just 0, so we don’t really need to care about it.
The problem is you can't get rid of g dg arbitrarily like that since it's not a higher power
@@TheEternalVortex42 If it is just another term in a sum I think you can: i.e. g^2+gdg=g^2 because g isn't zero but dg is.
edit: I just checked, and that denominator is indeed that sum. So one can read "Since" instead of "If" in my previous sentence.
edit 2: If you insist on thinking of "higher powers": g^2 is (g^2)dg^0 and dg^1 is a higher power than dg^0
I was thinking about multiplying with the denominator's "conjugate", g-dg:
(f+df)(g-dg)/((g+dg)(g-dg)) = (fg+gdf-fdg-dfdg)/(g²-dg²)
And since dg² = dfdg = 0, we get
(fg+gdf-fdg)/g² = f/g + (gdf-fdg)/g²
as expected, since we subtract f/g.
It's funny that he thought of dx almost as an algebraic structure that behaved like zero in some circumstances but not others. Obviously, that doesn't actually work, but there's a reasonable path of exploration from playing around with this concept to happening upon proper abstract algebra. I can imagine his thought process as "pretending there was a root for -1 that is itself a unit worked out so well, why not pretend there are 0-like objects that are not precisely 0".
15:05 I’m gonna tell my kids that 0/0 = 1
For particularly singular values of 1, of course.
2:58 (0+0^2)/0=1
Some time ago I heard a story that, when his children were young, Euler would sometimes do his research with a child on his lap.
Isn't this just a fancy infinitesimal way of denoting limits?
I don't care what modern mathematicians say. The physicists have won and dy=y'dx. The time-consuming/annoying 'rigor' of the mathematician is no match for the power of the differential. The Fundamental Theorem of Engineering reigns supreme! π=3, e=2, π=e, sinx=x, Δ=d
You acting like differentials are not rigorous or what?
In general, is there a rigorous way to prove that higher-order "dx" can be neglected compared to "first-order dx"? This is everywhere in mechanics but never actually proven (at least for me).
I don't think the notion that higher-order dx can be ignored is proven. It is usually formulated as an axiom. The only rigorous way I know to prove that they can be ignored is by using limits.
Let dy = 0, i.e. dy = f(x) - f(x) = f(x+dx) - f(x); then dy/dx = (f(x+dx) - f(x))/dx. The dx can be written as some c with c going to 0, so dy/dx = lim(c->0) (f(x+c) - f(x))/c.
Question: In the derivation of the quotient rule, why don't we simply bring everything to a common denominator, (df*g - f*dg)/(g^2 + dg*g), and then say dg*g = 0*g = 0 and we are done?
But... isn't this just the standard definition of a derivative with a simplified notation?
*dy/dx = (d/dx)f(x) = lim(dx->0) (f(x+dx) - f(x)) / dx*
assume the limit, therefore *dx = 0,* and you have:
*dy/dx = (f(x+dx) - f(x)) / dx*
multiply by *dx:*
*dy · dx/dx = (f(x+dx) - f(x)) · dx/dx = f(x+dx) - f(x)*
simplify:
*dy = f(x+dx) - f(x)*
You can work with this rather than the formal definition and all you need to remember is that to get *dy/dx* you divide *dy* by *dx,* so any part of your *dy* that has more than one *dx* will still have some *dx* after dividing, and since *dx=0,* they disappear.
It seems to me that f(x+dx) - f(x) is quite similar to f(x+h) - f(x), which is the top half of the slope. Somehow, though, he never has to divide by his version of zero, which I find interesting.
he's getting dy instead of dy/dx like we normally would do today.
@@ThAlEdison I was *this* close to making that leap, and missed it. Thank you.
@@ThAlEdison Right, there's a reason it is called "differential" calculus and not "derivative" calculus :)
How strict is this definition from Papa Euler? How does it behave in frameworks like differential forms and other rigorous approaches? Because, to me, it does make sense to think of it this way… if not, what is dx?
Something doesn't feel right to me at all. If I take d(f/g) and write it like fg - fg, since they are both 0 and that's the only rule we are working with (assuming 0 = 0), we get that the product rule is true for the quotient of 2 functions. Am I wrong?
But how is this different from the Newton-Leibniz method?
Why doesn't 0/0 include complex numbers?
I suppose that there is no imaginary part? But there isn't a real part either so idk.
This is how we set up every differential equation in physics
Euler magic!
This idea is formalized in modern mathematics via smooth infinitesimal analysis: essentially, adding a topology on the dual numbers.
The resulting system makes ordinary calculus calculations very easy, justifies computation procedures common in the natural sciences---and the differential geometry constructed on top of the "real number object" of SIA gives rise to synthetic differential geometry, in which tangent spaces are "real," action of Lie groups is first class (a "microflow") and you need not appeal to all this nonsense about equivalence classes of differential operators on curves. The only problem: you have to give up the law of excluded middle.
The text by John Bell is an excellent introduction that you could probably teach calc 1 out of and be just as rigorous as our analysis/advanced calculus courses.
But with y = x^n you can repeat Euler's same argument, analogously, just putting dx in a different place:
dy = 0 = x^n - x^n = x^n - (x + dx)^n = x^n - (x^n + n x^(n-1) dx + ...) = - n x^(n-1) dx -> dy/dx = - n x^(n-1). This is opposite to the correct result.
In other words, it is pretty ambiguous where to put dx and whether to put it with a plus or minus sign.
Neat video. 🙂 One thing that's not included that would have been kind of a cool addendum is to prove the Chain Rule using Euler's method.
Notice that the derivative calculations really used only the assumption that (dx)^2 = 0. This assumption needn't require dx = 0, and in fact, modern algebraic geometry uses such "nilpotent" elements all the time, regarding a quotient algebra R[x]/(x^2) as the coordinate algebra of a scheme sometimes called the "generic tangent vector".
I think we should call this Euler's Method. Very original.
How did we get from (1 + dx) * (dx/dx) to (dx + (dx)^2) / dx = 1?
1 * a = a
It is to be expected due to the motivation but this is pretty close to what you do in synthetic differential geometry / smooth infinitesimal analysis.
There we have an arbitrary nilsquare d (which you can call dx instead), such that `d^2 = 0`, and for all functions f, `f(x + d) - f(x) = d * f'(x)`; in the constructive language, d is indeed not nonzero. This rigorously makes these algebraic manipulations go through.
I couldn't have been the only one to have used all of these methods; they just work out so well.
In fact, I still use "0/0 is everything" to deal more efficiently with degenerate cases.
What about sin(x)/x when x -> 0? Is it not 0/0?
I think the game at the time ran along the lines of: "in order to criticize, you must do better than the person you are criticizing".
Well that or a similar rule.
I think it ran on the basis of: something exists, with its faults, until it is replaced with something even better, with fewer faults or no faults at all.
Perhaps a primitive version of peer review?
I can imagine Euler replying "If anyone can do better just dare to try" 🙂
This is like limit calculus without using limits🤣. How did a genius like Euler come up with all that zeros manipulation? It feels like getting the right results with "wrong/forbidden" calculation mumbo jumbo...👍
It must've been an improvement on whatever it is that was taught to him, so imagine the mess that was calculus back then.
Arrived too early❤😂
At times I've wondered why the emphasis was on f(x+dx) - f(x), all divided by dx, rather than f(x+dx) - f(x-dx), all divided by dx. Then the usual as dx tends to zero.
Maybe they knew back then about continuities/discontinuities creating problems?
Also, f(x+dx) - f(x-dx) sort of graphically gives a secant line tending to the tangent line as dx tends to zero, provided continuity exists in an interval about x.
History seems to suggest a strict legal-type definition is demanded by rigor and peer acceptance?
Shouldn't that be divided by 2dx instead?
@@bp56789 I really do not know. The thoughts occurred to me during video along with mention (somewhere) of secant.
Have you any insights on this?
@@bp56789 2nd reply: By Jove! Sirrah! As far as x squared and x cubed go, you are correct: 2dx seems to do the trick nicely.
Know what this means? There may be no purpose behind limits, as infinitesimals are self-explanatory?
Is dx for 0/0 what i is for √-1?
Reminds me of dual numbers
This is remarkably similar to how stochastic calculus is done with quadratic covariation
Indeed
I see this as exactly the same thing as taking a limit, just without the formalism. Δy=f(x+Δx)-f(x), Δx=dx=0 ==> Δy=dy=0 ==> dy=f(x+dx)-f(x)
I think I get it now... when we say dx = 0, we mean that it is a point on a line that locally touches a huge nonlinear graph. I see: for example, if dx > 0 it wouldn't work, because locally it would be a small line segment with dx = a, and a positive number indicates some distance, not a point. So dx = 0 makes sense for indicating a point at which a line is tangent to a graph... ok, nice!!
Are limits really necessary for the definition of a derivative? Consider this equation:
f(x) = m(x-d) + f(d)
Find an m so that the multiplicity of the solution x = d is greater than 1.
If f(x) is a line, or if for every d there exists exactly one m such that the multiplicity of the solution x = d of f(x) - m(x-d) - f(d) = 0 is greater than 1, then that m is the derivative of f.
11:00
It should be -dfdg instead of +dfdg
But it doesn't affect the result because it goes to zero
That's really cool! How is this formalized nowadays?
Smooth infinitesimal analysis---imo it's just flatly better than standard formalism.
Nonstandard calculus?
@@friedrichhayek4862 nonstandard analysis is different; it's syntax sugar for limits using transfinite cardinals
epsilon-delta
@@Kaget0ra Epsilon-delta is a completely different approach though which only speaks of potential infinities and infinitesimals as opposed to the actual infinities and infinitesimals seen here. This sort of calculus is simply non-standard but there exist formalisms where it can be made rigorous.
Me during the full length of the video: What the hell just happened?
oh yeah this makes sense because we're treating dx as h in the limit definition of the derivative. we're just not dividing by dx which is why the answer also contains dx and why we're discarding higher powers of dx
We're discarding higher powers of dx everywhere except in 1.
@@АндрейДенькевич yes because if we divided by h in the definition then the term with the first power wouldn't have any h's attached and the rest would disappear in the limit as h→0
@@pawel_maslanka yes, assuming 1 = (dx+dx^2)/dx, we admit that a higher power of dx is included in 1 (1 carries it).
So if a higher power of dx pops up elsewhere, then we must discard it.
I have mixed feelings about this: if df = 0 for every function, then d(fg) = df dg is a true formula.
Did Euler have a notion of "linear approximation" and "order of magnitude" in mind? (It seems so, from "df dg is a smaller kind of 0".)
From Wikipedia: In mathematics, the transcendental law of homogeneity (TLH) is a heuristic principle enunciated by Gottfried Wilhelm Leibniz most clearly in a 1710 text entitled Symbolismus memorabilis calculi algebraici et infinitesimalis in comparatione potentiarum et differentiarum, et de lege homogeneorum transcendentali.[1] Henk J. M. Bos describes it as the principle to the effect that in a sum involving infinitesimals of different orders, only the lowest-order term must be retained, and the remainder discarded.
That’s some wooly mathematics…
I'm a math noob... I always thought that this dx, dy was just a convention...
This is how we derived every engineering differential equation, i.e., by ignoring the higher order differentials.
// "versions of zero" isn't quite right. if it was, then this....
f[x + dx] = f[x + 2 dx]
So pragmatic!
A lot of these equations at the beginning make sense if you replace the variables with matrices
I think Newton's differentials were very similar to this if I'm not mistaken.
Btw, this video reminded me of one of Michael's earlier uploads: th-cam.com/video/dyjlRi8nuw0/w-d-xo.html which introduces not only the idea of different kinds of zeroes but also different kinds of infinities.
Interesting
Is there any result he proved that many years later turned out to be false? (A result, not the proofs he did.)
He was wrong on at least one conjecture but I don't think he ever proved any false result.
So, Euler literally worked with R[dx]/(dx^2) before all the algebra stuff was invented.
It looks like the dual numbers system.
This is so ill defined it hurts
I guess you mean "this is the product of a top tier creative free-thinker, one we could only dream about coming close to, given our handicapped-by-formalism rigid minds"
@@rafaelfreitas6159 our minds that got trained to crunch numbers, rather than developed in love with numbers.
@@rafaelfreitas6159 both are correct
Rigor is the greatest enemy mathematics has ever faced; it single-handedly brought an end to the golden age of mathematics.
@@costakeith9048 With no rigor you are just claiming unproved things, doing selective manipulation and numerology. With only rigor you won't be able to properly explore new ideas and you are going to be basically "handicapped by formalism". Rigor is not an enemy of mathematics, and you should stop acting like it is just because being formal is not an easy task.
Curious if surreal numbers would naturally generate derivatives???
Yes, surreals form a totally ordered field, so we can define an absolute value, which is a norm. So, we can define limits for surreal-valued functions. Since we can divide surreals, we can define a derivative as usual (limit of average ratio)
It's really interesting (and important) from the historical prospective, but mathematically this video is just painful lol
This was really cursed! If the dx is 0 then you can replace some random dx by 2dx and get all the wrong answers
Thank you this is very cursed
somehow I smell nonstandard analysis…
This is like different cardinalities for zero
Zero is weird.
Smoke and mirrors
Start by saying a*0=0, then divide by zero. Huh?
It's kinda funny how close this is to exactly how I used to think about it when I first learned calculus, but these days it makes me feel anxious as hell even with the explanation at the end.
So he took lim h -> 0 literally.
0/0 is a strong signal that we don't have enough information to compute the limit. It's therefore useful to look at it from a different perspective, like looking at how the derivatives behave around that point.
k[X]/(X^2) 👁️👁️
I won't watch another one of your videos until you can show me in real life that the imaginary cannot be associative.
I always think it's hilarious when modern mathematicians think they're being "more rigorous" than prior generations…
They are. Even non-standard analysis has become more rigorous. Rigor is about having a better argument, and naturally, mathematicians have accumulated good arguments only over time, answering each objection in as clever a way as possible.
Flew over my head. Looks like selective manipulation
Kind of. Basically the rule (the transcendental law of homogeneity) is "in a sum involving infinitesimals of different orders, only the lowest-order term must be retained, and the remainder discarded" (from Wikipedia paraphrasing the mathematician Henk J. M. Bos).
For example, using the product rule derivation featured in the video:
So y = fg implies dy = (f+df)(g+dg) - fg = fg + fdg + dfg + dfdg - fg = fdg + dfg + dfdg
The last term is the product of two infinitesimals whereas the other two terms have only one infinitesimal factor in them so we just ignore that last one.
First!
In modern times, it is hard to believe Euler actually calculated derivatives like this :D. He must have had help from some weed to have this nice imagination.
I feel like infinitesimals are the most natural way (à la Leibniz). The modern epsilon/delta definition is rough.
@@TheEternalVortex42 I love how modern mathematicians fancy themselves as intellectually superior to Leibniz & Euler because those idiots used differentials instead of limits -- the way God intended!!
@@glynnec2008 Nobody does that
Nonsense
🎉formalism🎉
@@lukandrate9866 if only
gg!