How are you writing in the air? I used to write upside down on maps for guests in a Provincial Park when I was your age. It was a fair-to-middling successful trick for my partner and me to meet girls. But I had to watch three times to get the gist, because your math appearing in the air, and looking mirrored à la Leonardo as well, kept distracting me. Really cool. How does it work then? I'm thinking the same idea for music instructional videos. A graphic of some sort is all good as far as it goes, but this is better. It's just the first time I've seen it, so I found it a bit distracting. For kids, an easy way to demonstrate the issue with dividing by zero is using a carton of eggs. My daughter was in tears. Couldn't grasp it. Out come the eggs. Full carton: 12 holes, 12 eggs, so 1 egg per hole. Take 6 out. 1 egg for 2 holes. Throw the other 6 in the pan too. Now what? Happy tears now. Lightbulb moment. She understood it as "how many nothings are in a something?" Not exactly the way mathematics says it, but like Mr Spock said, those are essentially the facts. Tried it a few times since and it works a charm. Once that basic idea is understood, the rest of it's easy to get. 0 and 1 are different from each other and from all other numbers, but to most people they are just the first two of our ten digits.
@AdvaitAndLaynePals Isn't +0 = -0? If so, what about 0^(-0) = 1/0? I'd love school math exams if one could turn anything to 1 by dividing by zero! I hope you're a school teacher.
@@krillinslosingstreak OK, it's just that they’re infamous for “proving” that all the positive integers add up to -1/12, and part of their “proof” was a graph with a sine-wave shape which oscillated from 0 to 1. This, according to them, was the same as a straight line at y = 1/2. It most definitely isn’t. In calculus possibly, but not the way they were trying to use it. (Sorry for the long explanation.)
That comes rather naturally, as these two are the neutral elements for multiplication and addition. This is, for example, also the reason the empty product and the empty sum equal 1 and 0, respectively.
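A quick way to see the empty-operation convention in practice is to evaluate an empty product and an empty sum directly; a minimal sketch in Python (assuming Python 3.8+ for math.prod):

```python
import math

# The empty product is the multiplicative identity...
print(math.prod([]))  # 1
# ...and the empty sum is the additive identity.
print(sum([]))        # 0

# 0^0 read as "a product of zero factors" falls out the same way:
print(math.prod([0] * 0))  # 1, since [0] * 0 is the empty list
```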
@@spectralumbra1568 If we were in a simulation, then everyone would follow some sort of rudimentary logic. Let me assure you friend, some people most certainly do not
It actually took thousands of years before zero was recognized as even a 'number' at all! It took Indian/Hindu mathematicians and philosophers -- who already appreciated the significance of 'nothingness' as a concept -- to finally give it its own symbol and mathematical definition. Not such a simple beast, at least to our naive intuitions.
No, even with the square root function, you still get 1 result, hence why it is called a *function.* I do not care if Wikipedia thinks otherwise in its opening paragraph, as Wikipedia also contradicts itself throughout that article.
@@angelmendez-rivera351 The Wikipedia article separates the "square root function" from the concept of square roots, as it should. That function is a mapping chosen for convenience with geometry. And yeah, that is the same thing he did with 0^0 to make it determinate, but I think it's interesting that square roots are never referred to as "indeterminate" despite being similar.
@@biggerdoofus This is wrong in two ways.

1. Wikipedia is still somewhat wrong for the simple reason that it is just abusing some of the language. The name "square root" has conventionally always referred to the sqrt function, and as for the "square rootS," this is where the abuse of language kicks in. Wikipedia should instead be talking about the roots of the polynomial x^2 - y, rather than "the square rootS of y," as this just leads to conflation, and as I said, is abuse of language. Wikipedia is usually a pretty good source when it comes to mundane aspects of a topic, but this particular thing is far from mundane, and it likely is in Wikipedia for the same reason that teachers actually teach this in schools, being incorrect and all. This is why most people nowadays are under the impression that sqrt(9) = +/-3 as opposed to simply sqrt(9) = 3.

2. The word "indeterminate" in mathematics has always referred to limiting forms, specifically. In fact, saying "0^0 is indeterminate" is also an abuse of notation. It would be more correct to say lim f(t)^g(t) (f -> 0+, g -> 0+, t -> c) is an indeterminate form, and if you wanted to use a shorter notation, you should write that (-> 0+)^(-> 0+) is indeterminate instead. This seems incredibly pedantic, but notation in mathematics is a super important thing, and every "controversial" topic in mathematics that does not involve an unsolved conjecture can always just be simplified down to a matter of misunderstanding and misusing notation, and in particular, to problems with the education system, rather than problems with the actual mathematics that involve these notations.

No mathematician would ever dispute that the empty product is equal to 1. So the claim 0^0 = 1 should not even be considered a problem at all. But because the notation "0^0" is frequently used to instead refer to "(-> 0+)^(-> 0+)", rather than using notation the way it should be used (particularly among teachers), people get confused and feel that 0^0 = 1 therefore has to be wrong. It's no different with the issue of 0.9... = 1. This equation is irrefutable. I can prove it rigorously. And the mathematics themselves are not controversial to anyone. What is controversial is the fact that most people interpret "0.9..." to mean something completely different than what it actually is defined as, hence creating confusion and, yes, controversy. This is why, rather than enforcing abuse of notation and of language that has been commonly practiced by people for decades or even longer than a century, I am rather militant about the way notation is used. Using notation incorrectly just makes learning mathematics much harder than it needs to be, and it makes the pedantry feel that much more infuriating to people.
@@angelmendez-rivera351 Ah. In that case, Wikipedia is also wrong to have a bunch of articles about the "algebraic" meaning for "indeterminate". I don't have any resources for the history of the term though, so I can only assume those articles are being written and verified by people who are math-adjacent rather than just mathematicians. I'm a hobbyist game dev and usually use multiple languages at once, so I may be a little too used to notation being arbitrary and functions being allowed to return weird results or sets of results.
@@biggerdoofus Wikipedia is somewhat unique among social platforms in that its users (i.e. editors) are not allowed to cite personal expertise/experience in their contributions, because that isn't verifiable. Instead, since Wikipedia insists on third-party citations (especially when there's doubt regarding particular facts), the defense against incorrect information is that the editor without (good) sources cannot defend against the editor with them. The only facts immune from the citation requirement are facts that are so incontrovertible that even absolute laypeople don't doubt them (I'm talking about like, "April 4, 2000 was a Tuesday" kinds of facts here). At any given time, most articles on Wikipedia are in a state of "consensus," meaning nobody is challenging the integrity (correctness, completeness, reliability, or clarity) of the article. While it could just be that it's a tiny article that very few people come across, for a more popular article a consensus indicates the tacit approval of the vast majority of its readers. And since all it takes is one person to challenge the status quo, if there aren't any recent major edits, a [citation needed] or [dubious] tag, or a discussion on the talk page regarding the particular information you're reading, it's a fair cop that nobody (not even the biggest experts) reading the article disagrees -- at least, not enough for the article to be worth changing. But when in doubt -- and especially when things of importance ride on correctness -- use the citations. Wikipedia works a lot better as a way of finding trusted citations than as a source itself. :D
My math teacher casually threw a 0^0 at me and got mad at me for being "wrong". Now I'm pissed that half the mathematical community would have said I was right.
@@gamerdio2503 To be fair, the limit of a function at a point does not necessarily equal the function's value at that point. That's sort of the point of limits: they specifically describe the behavior of a function around a point, but have no bearing whatsoever on that point itself.
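A minimal illustration of that distinction, using a deliberately artificial function (g here is just a made-up example, not anything from the video):

```python
# g(x) = 1 everywhere except at x = 0, where we deliberately define g(0) = 0.
def g(x):
    return 0 if x == 0 else 1

# Near 0 the values are all 1, so the limit of g at 0 is 1...
print([g(x) for x in (0.1, 0.01, 0.001, -0.001)])  # [1, 1, 1, 1]

# ...but the value *at* 0 is whatever we defined it to be.
print(g(0))  # 0
```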
@@Marchclouds It has been a while since I learned calculus, but what I remember of indeterminate forms is that when you run into one, it doesn't mean "the correct answer is 'indeterminate'" but rather that you need to find a way to calculate it that avoids the indeterminate form. Hmm... my uncertainty in saying that bothers me; perhaps I should relearn calculus. Anyway, the important part is that you can calculate the determinate answer to an equation even when there is a path where an indeterminate form gets involved.
@@Marchclouds Your calculation is 0/0 = x, therefore 0 * x = 0, which is a really bad thing to do, because you have just multiplied an "equation" by 0, and that is not an equivalent reformulation of the "equation": multiplying both sides by zero makes it trivially true. I could claim that 2 = 1 and "prove" it by multiplying both sides by 0. 2*0 = 1*0, but that doesn't mean 2 = 1, does it?
The meaning of 0^0 depends entirely on the context in which it appears.

In *calculus and limits,* 0^0 is considered an *indeterminate form.* This happens when you're dealing with functions f(x) and g(x) where both approach 0 as x approaches some value. In such cases, the value of f(x)^g(x) depends on how the functions behave, which is why 0^0 in this context doesn't have a fixed value; it requires more information to evaluate.

In *algebra and combinatorics*, 0^0 is typically defined as 1. This is done for practical reasons, like keeping formulas consistent. For example, in the binomial theorem or when calculating the number of ways to choose zero items from zero options, defining 0^0 as 1 avoids unnecessary exceptions and makes things work smoothly.

The confusion arises because 0^0 can mean different things depending on the situation. In some cases, it's left undefined to avoid ambiguity, while in others, it's explicitly set to 1 to simplify calculations. To clarify further, sources like Wolfram state that x^0 = 1 for any x ≠ 0. For x = 0, they treat 0^0 as indeterminate unless it's in a context, like algebra, where defining it as 1 makes sense.

*Conclusion:*
- In limits, 0^0 is indeterminate.
- In algebra or combinatorics, it's often defined as 1 for consistency.
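To make the "indeterminate form" point concrete, here is a small numeric check in plain Python (values chosen to avoid floating-point underflow): two expressions that are both of the limiting form 0^0 head toward two different values.

```python
import math

# f(x)^g(x) with f(x) = x, g(x) = x:  x**x -> 1 as x -> 0+
print([round(x**x, 4) for x in (0.1, 0.01, 0.005)])
# [0.7943, 0.955, 0.9739]

# f(x)^g(x) with f(x) = exp(-1/x), g(x) = x: base and exponent both -> 0,
# yet the value is exp(-1) ~ 0.3679 for every x > 0.
print([round(math.exp(-1 / x) ** x, 4) for x in (0.1, 0.01, 0.005)])
# [0.3679, 0.3679, 0.3679]
# (For x much below ~0.0013, exp(-1/x) underflows to 0.0 and this check breaks down.)
```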
Maybe I don't really understand what you mean, but what you mentioned is also mentioned in the video. It said that 0^0 is not a limit. If the limits of f(x) and g(x) as x -> a are both 0, then lim_{x->a} f(x)^g(x) looks undetermined because it has the form 0^0. But the most important thing is that you shouldn't treat it as the arithmetic expression 0^0. If every limit is written out with the epsilon-delta definition, then there's no need to memorize that 0/0, ∞/∞, 0^0, ... are indeterminate. We are taught those forms only because most of the functions we see in real life are continuous; if we know the form 0^0 is indeterminate, then we can deal with it by some other method. But if you use the epsilon-delta definition, you will not see any 0^0 in the proof, so it "perfectly" avoids 0^0; in limits you can do everything as usual even if you don't know what "indeterminate" means. The real problem is that 0^0 can't be CALCULATED if we don't define it. As you said, Wolfram says that x^0 = 1 when x is not equal to 0. But Wolfram is just a calculator. It doesn't even say why. If someone created another calculator and said 0^0 = 3.14, wouldn't you doubt it? 0^0 is only controversial; there's no proof saying it should be undefined. It is undefined only because we don't define it. Maybe undefined is a good choice for its definition, but I believe defining it to be 1 will do more good than harm.
Fun fact: if you evaluate x^x as x approaches 0 from the right, it will at first appear to be decreasing (it dips to about 0.69 at x = 1/e), but it then turns around, increases, and approaches 1.
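A quick numeric sketch of that dip-then-rise behaviour in plain Python (the minimum of x^x on (0, 1] sits at x = 1/e):

```python
import math

for x in (0.9, 1 / math.e, 0.1, 0.01, 0.001):
    print(f"{x:.4f} ** {x:.4f} = {x**x:.4f}")
# 0.9000 ** 0.9000 = 0.9095
# 0.3679 ** 0.3679 = 0.6922   <- minimum, at x = 1/e
# 0.1000 ** 0.1000 = 0.7943
# 0.0100 ** 0.0100 = 0.9550
# 0.0010 ** 0.0010 = 0.9931   <- creeping back up toward 1
```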
I originally accepted 0^0 = 1 based on the Desmos plot of y = x^x. But when you brought in the binomial theorem and the Taylor series expansion of e^x, it made more sense.
Last time I checked x^x was undefined at x = 0 on Desmos, but if you look at the limit (so take x = 0.01), it's pretty obvious that 0^0 -> 1. If you put in additional mathematical evidence like the binomial theorem, then it's a strong case that it equals 1.
*0⁰ = 1 Proof* (This is going to ignite an argument...) *TL;DR: Using 0 as an exponent is an empty product. Empty products always output 1, regardless of the input.* Using a capital pi to define integer exponents in both directions (±), x⁰ always results in an empty product (upper bound less than lower bound), which evaluates to 1. Here are basic formulas for integer exponents (positive/negative integers) using capital Pi.

bⁿ = Πₖ₌₁ⁿ b
b⁻ⁿ = Πₖ₌₁ⁿ (1÷b)

When n is 0, this results in an empty product for both equations.

b⁰ = Πₖ₌₁⁰ b = 1
b⁰ = Πₖ₌₁⁰ (1÷b) = 1

*This even holds true when b and n are both 0.* (Undefined values are overridden by the empty product.)

*0⁰ = Πₖ₌₁⁰ 0 = 1*
*0⁰ = Πₖ₌₁⁰ (1÷0) = 1*

The same thing happens with factorials of natural numbers and 0!.

n! = Πₖ₌₁ⁿ k
0! = Πₖ₌₁⁰ k = 1

(Posted as a regular comment on the video as well)
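For what it's worth, the capital-Pi definitions above translate directly into code; this is a minimal sketch (the names int_pow and fact are just illustrative), and it leans entirely on the empty product being 1:

```python
import math

def int_pow(b, n):
    """b**n for integer n >= 0, written as the product of n copies of b (capital Pi)."""
    return math.prod([b] * n)   # math.prod([]) == 1, the empty product

def fact(n):
    """n! as the product over k = 1..n."""
    return math.prod(range(1, n + 1))

print(int_pow(2, 3))  # 8
print(int_pow(5, 0))  # 1  (empty product)
print(int_pow(0, 0))  # 1  (still an empty product; the base is never used)
print(fact(0))        # 1  (same mechanism as 0! = 1)
```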
Knuth's opinion is that if the exponent is viewed as an integer then 0^0 = 1 (because we don't have to worry about a bunch of annoying special cases as you noted) but if the exponent is viewed as a real number 0^0 is undefined (because the function x^y has an essential discontinuity at x = y = 0).
Many things in mathematics are controversial (mostly with teachers and students, and not actually with mathematicians). For example, you will find that too many people disagree with the equation 0.(9) = 1, despite the fact that the equation is irrefutable. Indeed, you may even be one of those people yourself. But this is not controversial with mathematicians. Mathematicians all agree unanimously that the above equation is true. Teachers and students are the ones who disagree.
I've never heard 0^0 being undefined. In school, I was taught 0^0=1 and my teacher even gave us the explanation you gave at the end. 0^0=1 easily makes the most sense to me lol.
My case was the opposite, I was taught it was undefined at school, and then when studying Engineering I was told it was equal to 1 (for practical purposes).
No, it really depends on the case, just as with infinity. For example, let's consider the following divergent product: 2*2*2*... = p. Then p must equal 2*p, right? p = 2p; solving for p gives p = 0. Oops, something happened here. Never play with infinity and 0, they're not good numbers.
@@mathsphysnexus No, what I wrote is correct, sir. p is the product, so two times p must satisfy 2p = p; since p is infinity, doubling it doesn't change it. But 0 also satisfies the equation.
I think of 0^0 as multiplying by zero zeros. If I multiply any finite number by any non-zero quantity of zeros, I get zero. But if I multiply it by no zeros at all, I am not changing it. So the answer must be the multiplicative identity.
Regardless of how you feel about 0^0, I hope you enjoyed the video! To my understanding, there is not a 100% consensus on the definition of zero to the zero power and many (highly qualified) individuals will view 0^0 as undefined. This video is my view on what the logical and intuitive definition of 0^0 should be. A lot of time and research went into making this. If you enjoyed it, I would really appreciate a 'like' on this video. If you didn't enjoy it, I would appreciate a 'dislike' and a comment with your critiques. Thank you all very much for watching!
While it is true that there are highly qualified mathematicians who will say that 0^0 is undefined, from what I can find in the literature, *most* mathematicians agree that 0^0 = 1, and there are plenty of mathematicians who would argue a consensus does exist. This, in itself, is indicative of there being a consensus, even if that consensus is not as strong as it could be. It should also be noted, though, that the consensus is stronger now than it has ever been.
@@angelmendez-rivera351 Yes, the consensus now is stronger than it ever has been. A lot of the "highly qualified" individuals who disagree are thinking in terms of the old arguments that this very video debunks. Why? Because they heard those arguments when they were mathematically naive, and then they never thought about the topic again.
I would argue that it is indeterminate. Indeterminate is not the same as undefined. Indeterminate forms are both indeterminate and undefined (without context). Something like 5/0 on the other hand is undefined, but not indeterminate.
In some cases it’s useful to consider 0^0 indeterminate, or rather, encountering 0^0 means that your current approach isn’t going to yield any useful information. If substitution gives you 0^0 it doesn’t necessarily mean your limit is 1.
Correct; it doesn't mean your _limit_ is 1. But limits are not the same thing as arithmetic. So this isn't a compelling argument to say that 0^0, as an arithmetic expression, should be undefined.
Suppose x^x = 1. Then x·ln(x) = 0, so x = 1 or x = 0. For x = 0 to be a root, lim(x->0) x·ln(x) must equal 0. Using L'Hôpital's rule on ln(x)/(1/x), lim(x->0) x·ln(x) = lim(x->0) -x = 0. Therefore, 0^0 = 1
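The two limit computations used here are easy to double-check symbolically; a small sympy sketch (this only verifies the limits themselves, not the inference the reply below objects to):

```python
from sympy import symbols, limit, log

x = symbols('x', positive=True)

print(limit(x * log(x), x, 0, dir='+'))  # 0
print(limit(x**x, x, 0, dir='+'))        # 1
```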
>> For x = 0 to be a root, lim(x->0) x·ln(x) must equal 0
No?? That's not how it works. And by the way, you just proved that x = 0 cannot be a root of x·ln(x) = 0, because ln(0) doesn't make sense. It doesn't matter if a function approaches some value.
I remember that in one of my math classes (not set theory, but something similar enough) the definition of x^y was "the number of functions from a set of size y to a set of size x," and from that, 0^0 is just how many functions there are from the empty set to itself, which is 1 (a kind of "empty function" with no output but also no input space)...
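You can enumerate those functions explicitly for small sets: a function from a y-element set to an x-element set is just a choice of image for each of the y inputs, i.e. a length-y tuple of values drawn from the x-element codomain. A small illustrative sketch (the name functions is made up here):

```python
from itertools import product

def functions(x, y):
    """All functions from a set of size y (domain) to a set of size x (codomain)."""
    return list(product(range(x), repeat=y))

print(len(functions(2, 3)))  # 8 == 2**3
print(len(functions(3, 0)))  # 1: the single empty function into a 3-element set
print(functions(0, 0))       # [()]: exactly one function from {} to {}, so 0^0 = 1
print(len(functions(0, 2)))  # 0: no functions from a 2-element set into {}
```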
x doesn't have to be a number, because we don't have to work with sets and functions. This holds in any closed category with initial and terminal objects; we can show that the unique internal hom object [0,0] from the initial object to itself is equivalent to the terminal object 1. Neatly, the same argument shows that [x,x] will always be non-zero; there's always at least one such arrow, the identity arrow, inhabiting that hom-set.
In competitive programming, I've mostly encountered problems where assuming 0^0 = 1 in a formula has worked (if my memory serves me right) and given me the correct answer, but I've also encountered one scenario where 0^0 = 1 would lead to the wrong answer, so I think it's best to keep it undefined.
No, this is completely false. You did not encounter a scenario where 0^0 = 1 led to the wrong answer. You encountered a scenario where someone made a mistake elsewhere, and instead of correcting the mistake, they incorrectly blamed the failure on 0^0 = 1.
@@gildedbear5355 No. There is literally no scenario in mathematics where 0^0 = 0 makes sense. Literally none. Stop spreading misinformation. I am tired of you people doing this nonsense.
@@gildedbear5355 We need actual instances where 0^0=0 makes sense, preferably a series that requires it in the same way that the exponential and binomial series in this video require 1
An approximation I devised, obeying the involution (-x + 1)/(2x + 1) for pairs of values in x, is x^x ≈ (x^2 + x + 1)/(-2x^2 + 4x + 1). It is the special case a = 2: the involution is (-x + 1)/(ax + 1) and the corresponding approximation is ((a^2 - 1)x^2 + (3a + 1 - a^2)x + 3)/((a - a^3)x^2 + (a^3 + 2a)x + 3). For example, it gives (1/4)^(1/4) = (1/2)^(1/2).
By the way, 0^0 must be defined if you want to include tetration in our number system. According to the arithmetic-geometric conversion, any 0's get converted to 1's, as 0 is the arithmetic identity number and 1 is the geometric identity number. The operations get increased by 1 hierarchical order during an arithmetic-geometric conversion. Therefore, 0^0 is equal to 1 tetrated to 1.

However, failing to define 0^0 causes tetration of base 1, 0, and negative numbers to become undefined as well, including 1 tetrated to 1. But 1 tetrated to 1 is just a power tower of 1's with 1 entry, which is just 1. Therefore, 1 is undefined. All integers can be multiplied by 1, and then all integers are undefined. All rational numbers are ratios of integers, and each integer can be multiplied by 1, and the rational numbers get lost in the black hole of undefinedness. Irrational numbers will eventually fall, first the square and cubic roots, then pi and e, and finally the complex numbers, until the entire number system gets annihilated except 0. And finally, 0 will accept its fate of being undefined as being a product of 1 and 0, and the entire Mathsverse will collapse.

If 0^0 = 1, then 1 tetrated to anything is equal to 1, including fractional and irrational heights. 0 tetrated to anything is 1 if the tetrating number is even, and 0 if the tetrating number is odd. Negative numbers tetrated to anything are:
- Defined for all integer heights if the negative number, written as a fraction, has both an odd numerator and an odd denominator
- Defined for integer heights up to 2 if the negative number, written as a fraction, has an even numerator and an odd denominator
- Defined for integer heights up to 1 if the negative number is either irrational or has an even denominator.
-1 tetrated to n equals -1 if n is not 0, is equal to 0 if n is -1, and 1 if n is 0.
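For the integer-height cases, the tetration recursion is short enough to sketch directly; this toy version (tetrate is just an illustrative name) leans on Python's convention that 0**0 == 1, which is what makes the base-0 and base-1 towers behave as described above:

```python
def tetrate(b, n):
    """b tetrated to height n, for integer n >= 0 (a tower b ^ b ^ ... ^ b with n copies)."""
    return 1 if n == 0 else b ** tetrate(b, n - 1)

print([tetrate(1, n) for n in range(5)])  # [1, 1, 1, 1, 1]
print([tetrate(0, n) for n in range(5)])  # [1, 0, 1, 0, 1]  (1 for even heights, 0 for odd)
print([tetrate(2, n) for n in range(5)])  # [1, 2, 4, 16, 65536]
```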
This argument relies on the axiom 1⋅x = x (multiplicative identity property). This applies to any real number x. Is 0^0 a real number? The axiom may not be applicable in the manner suggested.
A lot of math relies on 0^0 being 1, only some of which was brought up in this video. The reality is that the only math in our common axioms that suggests 0^0 should have a value, all agrees that the value should be 1.
@@OrchidAlloy And yet many teachers and college students disagree with 0.(9) = 1, much in the same vein that many teachers and students disagree with 0^0 = 1, even though no well-educated mathematician would find the latter controversial, provided that you give a definition for exponentiation.
Brilliant, need to show my students this! I always approached it as letting f(x) = x^x, and testing ever-decreasing numbers to see from raw calculation the limit as x --> 0. This is really good in how you've conceptualised indices with the "1 (multiplied by)" as an invisible initial operation.
I've been thinking about this for days but I can't wrap my head around it: PLEASE TELL ME HOW YOU RECORD THESE VIDEOS😭🙏🏻 I literally can't sleep thinking about it. Did you seriously learn how to mirror-write??
I'm sorry I caused you trouble!! I write on a piece of glass normally (not reversed). After I'm done recording, I flip the image horizontally in video editing software. You may now go to bed :)
@@BriTheMathGuy omg THANK YOU. Me and my father literally had a discussion about it and it was actually much easier than we had imagined: we like to make things harder than they actually are hahaha
It's called a light board, if you want to learn more or see other youtubers use them. They're great for combining hand-written lecture notes with projected powerpoint displays!
It's fantastic how advanced mathematical functions like exponents and factorials (and maybe others) treat operations on 0, like 0^0 and 0!, as 1. Truly fascinating, keep these vids up!
I've found that good teachers make all the difference. For something you'd be easily able to access, I think 3Blue1Brown's calculus series is fantastic and highly recommend it, as he does a great job using visual representations of concepts and not getting hung up on rigor.
Excellent explanation. I use the definition of multiplication which states that multiplication must be between two factors, and then you use parentheses and multiplicative zeros to make it work, but your explanation, although less stringent, is quicker to reach the message. Keep up the good work!
Even with limits most limits of consideration will lead to 1 rather than 0 (how useful is the function 0^x?), so it is perfectly natural for it to be defined to be 1 using limits as an intuition. The problem with defining 0/0 is that there are many different relevant limits and they can approach really any value, so a consistent definition doesn’t make sense in that case.
@@jadegrace1312 *Except that f : R*R -> R*R, (x, y) -> x^y isn't defined for (x, y) = (0, 0).* You are wrong on multiple grounds. 1. The function that you defined above is nonsensical, since x^y is necessarily a real number, not an element of R*R. 2. f : (R+)*(R+) -> R, (x, y) |-> x^y, where R+ = {z real : z = 0 or z > 0}, is perfectly well-defined at (0, 0), and as the video discussed, f(0, 0) = 1. This is not debatable.
@@jadegrace1312 No, you are wrong. It does equal 1. You can choose to believe otherwise, but that belief is demonstrably wrong. I can prove that it is wrong.

Look, z^n is defined as the product where z appears as a factor n times. This is the definition everyone uses. This is how you get that 2^3 = 8, because by definition, 2^3 := 2·2·2, since this is the product where 2 appears as a factor 3 times. This is how you get that 3^1 = 3, because 3 appears in the product only as a factor 1 time, so the product is the factor. This is how you get that 5^0 = 1, because it is the product where 5 appears as a factor 0 times, and since it appears 0 times, the product is empty, and so it is equal to 1. This is also how you get 0^0 = 1, because this is the product where 0 appears as a factor 0 times, and since it appears 0 times, the product is empty, and so it is equal to 1. This can all be presented rigorously, but I do not have a keyboard for mathematical symbols.

If you want to extend the definition to real exponents, then you can, and you still get the same results in the special case that said real exponent is natural. Consider that z^n = lim exp[n·log(x)] (x -> z) is always true when z is real and nonnegative, given how I defined z^n. Then for real nonnegative z and real y, I define z^y = lim exp[y·log(x)] (x -> z) whenever this limit exists. Actually, this definition even works when z and y are complex. Anyway, 0^0 = lim exp[0·log(x)] (x -> 0) = lim exp(0) (x -> 0) = exp(0) = 1. The only time this definition will not work is when z = 0 and either Re(y) < 0, or Re(y) = 0 and |Im(y)| > 0.
But the "in most contexts" thing is generally nonsense, because contrary to what people like to say, there literally does not exist an scenario where 0^0 = 1 is false. None. No mathematician has ever been able to present such an scenario without someone else showing they made a mistake.
There do exist contexts in which 0^0 != 1. These contexts usually aren't very useful and tend exist for the sole purpose of arguing against 0^0 = 1. For all _practical_ contexts that I'm aware of, 0^0 = 1.
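As a data point on the "practical contexts" side, most programming environments already agree; a quick check in Python (the note about C is based on the C standard's Annex F / IEEE 754 pow and is stated here as a comment rather than demonstrated):

```python
import math

print(0 ** 0)              # 1   (integer exponentiation)
print(0.0 ** 0.0)          # 1.0 (float exponentiation)
print(pow(0, 0))           # 1
print(math.pow(0.0, 0.0))  # 1.0
# C's pow() is specified the same way: pow(x, +/-0) returns 1 for any x,
# per Annex F of the C standard (which follows IEEE 754's pow).
```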
@@angeldude101 That is what I am saying, I think it is funny that it is anticipated someone in the comments is going to start an argument about how this is not the case for the sake of arguing it is not the case.
@jash21222 *lim (x -> 0-) 0^x fails to exist...* Well, this assumes that 0^x exists for x < 0, which is not the case. Keep in mind: lim f(x) (x -> p, x in S) = L is defined by the proposition that for all real ε > 0, there exists some real δ > 0, such that for all x in dom(f), if 0 < |x - p| < δ and x is in S, then |f(x) - L| < ε. If you analyze this by letting dom(f) = [0, ∞), S = (-∞, 0), f(x) = 0^x, and p = 0, then you will notice that, for all x in [0, ∞), the statement "x is in (-∞, 0)" is false. Therefore, the material implication is vacuously true for all real numbers L, which means that, in a manner of speaking, a limit _does_ exist, and is not unique. Every real number is a valid limit here. *lim (x -> 0+) 0^x = 0 is reason enough to leave 0^0 undefined.* No, it is not. At best, it proves f is discontinuous, which I never disputed.
a^b = e^( b(ln|a| + i*arg(a)) )
Substituting in 0, we get 0⁰ = e^( 0(ln|0| + i*arg(0)) ).
ln|0| is -∞, and 0 * -∞ = Ø, so instead:
let 0⁰ = lim x→0 e^( 0(ln|x| + i*arg(0)) )
For a principal value, let -π
I subbed to you mainly because of this, I don’t like it when people, especially high school teachers like mine, say “there is no reasonable definition for 0^0”. You gave a clear explanation.
Education in mathematics is a lot like an oral-written tradition: it is passed down and inherited essentially by word of mouth and word of book. Yes, mathematical consensus exists, and yes, peer-reviewed publications exist, but mathematics education completely ignores these and relies on them only very indirectly. In high school and undergraduate programs in colleges, teachers do not discuss peer-reviewed publications in the classroom, and they do not even really mention the idea of consensus. They rely heavily on textbooks, some of which are good and some of which are not, and on the curricula that they themselves designed, which are significantly affected by what they themselves were taught by their teachers. The way education works today is just a very elaborate, obfuscated, convoluted word-of-mouth tradition accompanied by writings. This explains very well why there is such a disconnect between "what teachers told me is true" and "what mathematicians actually hold to be true (in context)."

Having said this, I suspect that the reason teachers keep telling their students that 0^0 is undefined, despite that being demonstrably false, is that they have a conceptual misunderstanding regarding how functions are defined and how limits are defined rigorously, and the distinction between evaluating a function at a point and evaluating the limit of a function near a point. Studies demonstrate that functions and limits are two of the most poorly understood mathematical topics among high school teachers and students, possibly the two worst understood. Personal experience also supports this hypothesis (although obviously to a rather small extent, since anecdotes are not statistically significant evidence), because I know of too many instances in which teachers have verbatim told students "if you want to find the limit of this function as x -> c, then you should substitute the point x = c into it," which is literally and explicitly prohibited by the definition of limits. This, to me, indicates just how poorly understood limits are among high school teachers, and among some undergraduate college teachers. Even on YT, you can find plenty of videos of teachers doing this. The fact that they teach this only reinforces the students' already mediocre understanding of limits, and makes it worse rather than better. Some of those students go on to become teachers, and then they teach their students whatever nonsense they ended up learning. Of course, this is not true of every high school teacher, but it is common enough to be a legitimate concern that needs to be addressed.

Anyway, because limits are so poorly understood in the education system, teachers probably confuse the idea of the indeterminate form (->0)^(->0) with the arithmetic operation 0^0. They often treat them as the same thing and use them interchangeably, so much so that they even use the notation 0^0 when referring to the indeterminate form, which is just strictly incorrect, because the indeterminate form is a limit expression, not an arithmetic operation, and so it should be denoted with a limit. The fact that this is an old controversy does not help the case, because all it does is make some people's incorrect understanding of the topic feel validated and legitimized by their ancestors who got it wrong too, like Cauchy, for example.
All of these factors together are probably what explains why the idea that 0^0 is undefined, despite being contradicted by the mathematical consensus and the literature, and by the simple mathematical logic itself too, really, is so commonly taught in schools. This is why I always say that education for mathematics needs a significant overhaul. Division by 0, 0^0, square roots and nth roots, the order of operations, the definition of the domain of a function, calculus in general, and many more other topics in mathematics have been taught so unbelievably poorly for so many decades now, that even the mathematics teachers are wrong, and these topics need to be approached differently, and be taught better.
@@angelmendez-rivera351 I couldn't agree more. Teachers follow the rule that if explaining something a certain way will help the students solve questions and pass, even the students who don't have the ability to (or simply don't try to) understand the complicated concepts, then that's how they'll explain it. They won't care enough to actually tell them the truth, and this perpetuates as you said, and now it has gotten to the point where many teachers themselves are misinformed.
@@ВладДрезельс Teachers fail to tell their students that there is a conceptual difference between the roots of a polynomial and the nth root functions evaluated at a number. This distinction is not harmless at all, because it confuses students and it leads them to believe that, for example, the symbol sqrt(9) is -3 and 3, as opposed to being just 3, with the latter being the correct answer. It causes problems when students have to solve exercises where they solve an equation or evaluate an expression containing radical symbols. If you follow different mathematics channels on TH-cam and look at videos on the topic, you will see just how common this misunderstanding is. The vast majority of people will say that "9 has two square roots, -3 and 3, so sqrt(9) = +/-3," or if they accept that sqrt(9) = 3, then they will say "well, 9 has two square roots, but only the positive one is denoted by sqrt(9)." Even Wikipedia will make mistakes concerning this.

Teachers have this super bad habit of assigning classwork exercises or homework exercises where they ask things like "what is the domain of sqrt(4·x - 5) + 7?" or "what is the domain of 1/(x^2 - 5·x + 4)?" Expressions have no inherent domain, so the question is nonsensical. The domain is something that comes specified with the definition of a function; it is not a property of a function that you can figure out a posteriori.

This also gets me to my next point: there are studies that show that many teachers do not actually understand what functions are. Teachers are able to give intuitions, they are able to give examples of functions (in a poorly stated manner), and they know about things like the "vertical line test." But they are unable to define what a function is correctly, and they are also unable to explain what the requirements are for a function to be invertible, or even what a surjection is. This explains why the concept is taught so poorly. A function f is a binary relation from set X to set Y such that, for every x in X, there exists exactly one y in Y such that (x, y) is in f. Notice how, in order to define a function f, you need to define the domain of f in its definition. This is crucial for understanding how inverse functions work.
@@angelmendez-rivera351 Ok, I see; yeah, at my school there were things similar to what you describe. However, I wanted to comment on your last phrase about the square root: "If they accept that sqrt(9) = 3, they will say that 9 has two square roots, but only the positive one is denoted sqrt(9)." I agree this is terminologically incorrect, because the nth root is a function (or unary operation), so it is nonsense to say "there are two roots," but don't these people just mean "there are two numbers whose square is 9," and that we took only the "positive" part of the x^2 parabola to "invert" it? I mean, I'll repeat: correct terminology does not work like that, but if a person is speaking like this you just need to tell him: a number whose square is 9 is not necessarily the square root; it's a root of the equation x^2 = 9. The square root is just the positive number whose square is 9. It seems to me that these term confusions may not be so crucial and could be fixed with one remark. And also about domains: again, this is not how adult smart people write in papers, but a slight change of formulation makes the phrase "find the domain of «insert expression»" absolutely correct: just say "determine the set of values which, when substituted for x, leave the expression defined" (this is just too long to say, I suspect). Some people may not see these subtle differences, but sometimes people just have a spoken and written jargon which shortens and optimizes communication. For example, in my country one never writes the square of the sine of 2x as (sin(2x))^2. We mostly write sin^2(2x) (in fact we omit brackets as well; the power 2 looks small on paper but not on a keyboard😭). Yeah, that expression is not literally the sine squared of 2x, but in context everybody understands perfectly (people with deep or superficial knowledge alike). All I wanted to say is that sometimes you may be confusing real misunderstanding with innocent jargon whose influence on understanding approaches 0.
Also, if you graph y=x^x and look at what happens when you go from the right towards 0, you can see that it seemingly goes down towards (0,0) but actually starts curving up to (0,1).
Yes, on the real line, x^x indeed approaches 1 from both the positive and the negative side. You might say, well then, 0^0 = 1, but no: in the complex plane, this breaks down. So 0^0 ≠ 1 unless you're working with only real things. Source: Numberphile
This also works from the negative too! I was thinking that this is a much simpler answer. But 0^0 could actually be two *different* zeroes. For example, if you took the limit of (e^(-1/x))^x that's still 0^0 but you get 1/e.
I've always thought of taking something to the zero power as dividing it by itself. With 0, that would give us the indeterminate form (not undefined) as 0/0 which only has values that can be applied in specific contexts by evaluating limits. Personally, I think 0^0 only works in context
There are other comments here that say it depends on context, but weirdly none of them actually _name_ a context. They only cite personal feeling or some incomplete memory.
@@muskyoxes ... That's because they're random formulas. A context might be f(x) = (x+2)(x+7)/(x^2-4). Here there's a hole at x = -2, where you would get 0/0. But the factor (x+2) cancels, leaving (x+7)/(x-2), so by analyzing the limit you see the hole is at f(x) = -5/4. So in this specific context, 0/0 = -5/4
@@muskyoxes The density of an object is given by the formula D = m/v; as the mass and volume both approach 0, the density also approaches 0. One of many examples
Glad I came across this video. When I asked my teacher in AP Calc (I think it was him) what 0^0 was, he asked me, "What's 1^0? What's 2^0? What's 3^0? ... So what do you think 0^0 is?" And that's how I just accepted 0^0 = 1 as the answer. Cool to see more information about this. Granted, when you mentioned series I died internally; I didn't get that part of calc- LOL, I'm not good at calc.
That's not a good answer. Sadly, the teachers that teach in the K-12 system are often horrible mathematicians. 0/0 is both indeterminate and undefined as the value can be just about anything, depending upon how precisely you got that 0/0. Sadly, this video is missing some pretty basic knowledge which results in the argument not following in any reasonable way. 0! is most certainly not 0. It's 1.
5:40 But if you are saying that we don't divide by 3 to continue the pattern, and instead we multiply 1 by 3 one fewer time at each step, then how would you define negative exponents? Your pattern will just stop at 3^0. So the pattern does indeed continue by dividing by 3, so that all the properties of exponents continue to work for all kinds of exponents. So 0^0 is indeed undefined, but we use 1 in certain contexts because it just works fine there.
As a tutor, I would point out to my students that defining 'raising a number to a power' as 'multiplying *_that number_* times *itself,* that many times' -- which is how most people learn it in school -- is actually *not* a correct definition. 7² is not 7 multiplying *itself* twice: that would be 7 (× 7) (× 7), [where the action/operation of 'multiplying by n' is indicated here as (× n) ], which is 7 multiplied by 7 once, and then multiplied by 7 again to make 'twice', but obviously this is incorrect as there are a total of three 7s instead of just two! Instead, the more correct way of saying it is 'multiplying *_one_* times the number, that many times', so 7² = 1 (× 7) (× 7) = 1 × 7 × 7, which = 7 × 7, since 1 multiplying any number is just that number itself. See, the 1 is usually omitted because it's easier to write fewer symbols, but it's still part of the definition: omitting the 1 is just a shortcut!

Therefore, whenever you are raising a number to a power, say nᵏ, ask yourself, "How many times do I multiply *_1_* (!! extra emphasis in your head to break old/wrong way of thinking) by the number n? Ah, k times!" Then the resulting expression is "1 (× n) (× n) ... (× n)" with "(× n)" written k times. Thus, if k = 0, then you write "(× n)" 0 times, i.e. you don't write it at all! So, arithmetically, n⁰ = 1 .... and that's it, you don't write any "(× n)"s, just n⁰ = 1, and you're done! And thus, 0⁰ = 1 .... and that's it, you don't write any "(× 0)"s, just 0⁰ = 1, and you're done!

Bonus: This also explains the whole 'negative powers' thing: The symbol n⁻¹ (or, alternatively, as ⅟n) is actually *just a symbol,* and it means 'the multiplicative *inverse* of n, whatever that might be'. And all numbers in a field -- *except* for 0 -- automatically have a multiplicative inverse defined for them, such that n × n⁻¹ = 1, by definition of what a multiplicative inverse is (it's the thing that multiplies the number to make 1). But there isn't one for 0 (because no number multiplies 0 to make 1), i.e. it is literally 'not defined', aka 'undefined', i.e. "0⁻¹" is an undefined expression. But when the 'multiplicative inverse' *does* exist, it is usually a *different number* than the original number (except that 1 is its own inverse). And so, when you 'raise a number to a *negative* power' what you're really doing is *just* raising the number's *inverse* (which, again, is its own number) to a *positive* power'. In other words, 7⁻² is not 'dividing 7 by itself twice' (which is just wrong, and further illustrates why the common 'definition' of exponentiation doesn't make sense), nor is it even 'dividing 1 by 7 twice' (although the result of that would be equivalent; it is *not* the definition); instead, it is 'multiplying 1 by the *inverse of 7,* namely 7⁻¹, twice'. Or, more succinctly, it is raising 7⁻¹ to the power of 2. Which = (7⁻¹)² = 1 (× 7⁻¹) (× 7⁻¹) = 1 × 7⁻¹ × 7⁻¹ = 7⁻¹ × 7⁻¹.

Technically, you need a (very simple) theorem to show that n⁻¹ = 1 / n (provided n ≠ 0). And *then,* once you've got that little theorem under your belt, it finally makes sense to talk about 'negative powers' as 'dividing by the number', so the above becomes 7⁻² ≡ (7⁻¹)² = 1 × 7⁻¹ × 7⁻¹ = 1 × (1/7) × (1/7) = 1 (/ 7) (/ 7), which now can be read as 'dividing *_1_* by 7 twice'. Which, by the way = 1 / (7 × 7) = 1 / (7²), which is where the exponent rules come from, like n⁻ᵏ = 1 / (nᵏ) ... provided *_n ≠ 0_* (!! mental emphasis!).
Incidentally, this is why (n⁻¹)⁰ = n⁽⁻¹ ˣ ⁰⁾ = n⁰ = 1, *except* when n = 0, *even though* 0⁰ = 1, because we *started* with n⁻¹, which would have been 0⁻¹, but 0⁻¹ is undefined, and so (0⁻¹)⁰ is also undefined! Order of operations matters! Again, these are results from the earlier, more basic definitions involving the operation of repeated *multiplication* and the existence (except for 0) of multiplicative *inverses;* they are *not* stand-alone definitions of negative exponentiation themselves.
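The "start from 1, then multiply" reading described above translates directly into a loop; a minimal sketch (the name power is just illustrative), which also shows why the k = 0 case needs no special handling:

```python
def power(n, k):
    """n raised to a non-negative integer k, read as: start from 1, then multiply by n, k times."""
    result = 1          # the 'invisible' leading 1
    for _ in range(k):  # apply "(x n)" exactly k times
        result *= n
    return result

print(power(7, 2))  # 49 = 1 * 7 * 7
print(power(7, 0))  # 1  (no factors applied)
print(power(0, 0))  # 1  (still no factors applied, so the base never matters)
```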
Very nicely put. This is also why, when the exponent n is a natural number, I always use the English phrase "the product with n instances of the factor z" to define z^n, rather than saying "the product with z multiplied by itself n times." The latter is just a semantic mistake, and a lot more confusing to visualize, which is why people fail to understand that z^1 = z and z^0 = 1 are consequences of the definition itself, not separate definitions we impose afterwards.
Note 1: (-1) is its own multiplicative inverse as well. In fact, 1 and (-1) are the only real numbers that are their own multiplicative inverses. Note 2: You don't need to say 7² = 1x7x7 if you define the power of a number x to a natural number n as the product of n factors of x. x to the 0 would then be the empty product, which is the multiplicative identity, which is 1 in the real numbers. The rest seems to be just fine.
However, is there any proof that ‘one times the number, that many times’ is a real thing? If it isn’t, but it seems like the better definition, would using that definition complicate other formulas, math problems, etc.?
@@jaydenaleung It pretty much lies in the definition of multiplication. Basically the multiplication has a neutral element that won't change the outcome if multiplied with any other object. In the field of the real numbers this neutral multiplicative element is 1. So not only 1xa=a for any real number a, but also 1x1xa=a and 1x1x...x1xa=a. Furthermore an empty operation will always return the neutral element. So an empty product will always return 1.
Regarding series, it's a useful convention that 0^0=1 in the context of series, because it allows formulas to be written more compactly, not needing to split off the k=0 term from the rest of the series. But this doesn't necessarily mean that 0^0=1. Similarly, it's a useful convention that an empty product is equal to 1, as it allows you to avoid stating separate cases for when sets are empty. Ultimately, I'd say that any value you give to 0^0 is purely definition, and can be useful or not depending on context. It's not something that is proven using other definitions.
I just mentioned the empty product proof on r/learnmath, and got the response below. My calculus teacher agrees with the response. "Indeed, in most contexts. When doing modern algebra or discrete math, 0^0 = 1 is the best choice. Especially for polynomials, since they can be thought of as objects of the form ∑ (a_n)x^n, which yields 0^0 when evaluating at x = 0. When doing more continuous math (e.g. real analysis, topology, differential geometry), though, it's safer to say 0^0 is undefined rather than creating special cases everywhere (e.g. in theorems like l'Hopital's Rule applied to exponentials)"
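The polynomial point is easy to see in code: evaluating ∑ a_k·x^k term by term at x = 0 returns a_0 only because the k = 0 term contributes a_0·0^0. A small sketch (poly_eval is just an illustrative name), relying on Python's 0**0 == 1:

```python
def poly_eval(coeffs, x):
    """Evaluate a_0 + a_1*x + a_2*x^2 + ... term by term."""
    return sum(a_k * x**k for k, a_k in enumerate(coeffs))

# p(x) = 5 + 2x + 3x^2
print(poly_eval([5, 2, 3], 2))  # 21
print(poly_eval([5, 2, 3], 0))  # 5 -- works only because 0**0 evaluates to 1;
                                # otherwise the k = 0 term would need special-casing
```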
The Laplace Transform of f(t)=t^n is F(s)=n!/s^(n+1). Our concern is 0^0, so t=n=0. So, F(s)=0!/s^1=1/s. Inverse Laplace of 1/s is 1. Also, initial value theorem states that lim f(t) as t goes to 0 is equal to lim s*F(s) as s goes to infinity. Well, s*F(s) = s*1/s, which is just 1, so lim 1 as s goes to infinity is still 1. Therefore, 0^0=1 for both methods.
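Both steps of that argument can be sanity-checked symbolically; a small sympy sketch (sympy returns Heaviside(t) for the inverse transform of 1/s, which equals 1 for t > 0):

```python
from sympy import symbols, inverse_laplace_transform, limit, oo

t, s = symbols('t s')

# Inverse Laplace of F(s) = 1/s is the unit step, i.e. 1 for t > 0:
print(inverse_laplace_transform(1 / s, s, t))  # Heaviside(t)

# Initial value theorem: lim_{s -> oo} s*F(s) with F(s) = 1/s
print(limit(s * (1 / s), s, oo))  # 1
```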
You are correct. There are no cases where 0^0 = 0 would actually result to be beneficial. Also, the definition of exponentiation itself contradicts 0^0 = 0.
@@angelmendez-rivera351 I actually meant that there sure are such cases. Anyway, there's a whole Wikipedia article on the topic which I think provides a more complete view of it: en.wikipedia.org/wiki/Zero_to_the_power_of_zero In general I think it only makes sense to define 0^0 = 1 where it helps. There is a whole family of functions where it makes sense to define 0^0 = some arbitrary number to make the function continuous, so I don't think there's a point in taking 0^0 = 1 as a universal definition.
@@kasuha *I actually meant that there sure are such cases.*

If that is what you meant, then you are wrong. Such cases do not exist.

*Anyway, there's a whole Wikipedia article on the topic which I think provides a more complete view of it.*

Throughout my education, I have read this article more than 10 times at different points in time. One thing that you absolutely did not notice about this article is that it explicitly states a few times that mathematicians always choose to either define 0^0 as 1, or leave it undefined, which actually proves my point. There genuinely is no situation where mathematicians choose to define 0^0 as anything that is not 1. In fact, the same article also states at one point that... there is consensus about there being a mathematical consensus that 0^0 = 1. It makes my point even better.

*In general, I think it only makes sense to define 0^0 = 1 where it helps.*

There is not a single situation where it does not help.

*There is a whole family of functions where it makes sense to define 0^0 = some arbitrary number to make the function continuous.*

No, it does not make sense, because several theorems in analysis demonstrate that those functions would still be discontinuous at that point even if it were defined at that point. For example, 0^x would still be discontinuous at 0 if 0^0 = 0, because 0^x for x < 0 is undefined. Also, the fact that you are making this argument demonstrates to me that you completely missed the point of the video: Bri said in the video rather explicitly that using limits as an argument for undefining or defining 0^0 is completely invalid and fallacious.
@@angelmendez-rivera351 "There is not a single situation where it does not help." Sorry but you're wrong about it. For the simplest example, consider function f(x) = 0^x
This is a very nice video. Probably the best video on 0^0 currently on TH-cam! I wonder if, in your research, you came across the topic of the empty product but didn't feel your audience was "ready" for it? Your "multiply both sides by 1" argument seems to be emulating the empty product without going all the way.
I 100% agree with you, both on this being probably the best video on YT on the topic, and on his argument emulating the empty product idea without fully going all the way.
Thanks very much! I did come across the empty product argument but I wanted to give what I thought was the most simple/intuitive way to get to the result. Thanks for watching and commenting!
@@BriTheMathGuy I agree that the empty product can be a bit of work to motivate. But I think it's the best way to have this argument. Personal opinion, though!

The rest of this post will be long, and it will be my take on the whole 0^0 thing from a very broad perspective. I know plenty of people won't read this, but I would love to say it anyway. Maybe it will help others.

I don't think the argument for 0^0 = 1 used in this video quite hits the point (though I think this video points out the common flaws people use against defining 0^0 well). I consider the best argument to be a much more general one - 0^0 = 1 because it is an instance of the empty product. Empty operations are _incredibly_ useful. They can clean so many things up, and they form a nice unifying theory which explains a lot of seemingly strange conventions, actually making them results, rather than conventions.

When we allow for empty operations, 0^0 = 1 is true for the same reason that 0! = 1, which is true for the same reason that x^0 = 1, which is true for the same reason that 0*x = 0, which is true for the same reason that the empty set is the basis for the 0 vector space, which is true for the same reason that units aren't primes in rings (and why 1 isn't a prime number), which is true for the same reason that the 0 ring isn't an integral domain, which is true for the same reason that a topology must include the empty set and the set itself as open sets, which is true for the same reason that the degree 0 component of the tensor algebra of an R-module is the ring R itself, and so much more.

Now, you're free to disagree with any of these things as being "true". You could change the definitions if you want. But these are things we have found to work very nicely, and they can all be explained/motivated with the same basic reasoning - the associative property is _morally_ about extending a binary operation to an operation on finite sequences, and that if empty operations are to be consistent with associativity, then the empty operation is the identity of that operation. So if you set up your mathematical framework to allow for empty operations which are consistent with associativity, then none of these things are "special conventions" anymore - they follow from the basic definitions of things like exponentiation, multiplication, addition, union, intersection, tensor product, etc.

Empty operations also make things so much clearer. With allowing arbitrary finite products, the Fundamental Theorem of Arithmetic can be cleaned up from, "Every integer greater than 1 is either prime or can be written uniquely as a product of primes, up to order of the factors" to "Every positive integer can be written uniquely as a product of primes, up to order of the factors". It really boils the statement down to the heart of the matter.

So there's an extremely general, useful principle, and this principle implies 0^0 = 1. This principle also explains why we know 0^0 = 1 will always give the "right" answer when 0^0 crops up in discrete formulas (including polynomials and power series and the binomial theorem) - because these formulas rely on the associative property of multiplication, which is the same thing the empty product relies on. (The associative property is also the _critical_ property that gives us the exponential rules!)

Now, analysts may say, "0^0 doesn't work well with continuity", and sure, that's a true statement. So analysts are free to have 0^0 undefined in contexts where they care about continuity, if they please.
But this shouldn't be seen as the "general situation". Undefining 0^0 because of continuity should be seen as an exception that is made out of convenience. Because it is a matter of convenience, not necessity. And Bri explains this well when explaining why "0^0" being indeterminate as a limiting form is perfectly consistent with 0^0 = 1. So why should this one particular instance of convenience be seen as the general rule when, in every other context, 0^0 = 1 works perfectly and has a very nice theoretical framework?
@@MuffinsAPlenty Well, honestly, to be fair, while I agree with your point, properly motivating the idea of nullary functions would take a whole series of videos, which would be well outside of what this video can encompass.
This is actually one of the reasons I think the video ISN'T very good. He completely ignores the logical result being an empty product, to insert the *1 option, when you could similarly do just about anything (+0 if you wanted the other result). Then he further proceeds to say the "better" calculators are handling it this way. To claim those calculators are "better" is completely false. In mathematics, the empty product does often become 1, which would lend credence to his theory, but in programming, the actual answer to an empty product is neither 1 nor 0, but null. The reason these programs spit out 1 or 0, or even undef (shorthand for undefined), for 0^0 is that it is an easier form of error handling, as a fetch command involving zero would cause an error. Unfortunately this video assumes the conclusion for much of its premise.
I came up with this solution (not about 0^0, just talking about how exponents work in general) while in class talking about exponents, and I'm so glad it's actually held up in fact and makes sense :)
tbh reindexing the power series or adding restrictions to the binomial theorem actually makes way more sense than arbitrarily changing the definition of exponentiation
here's the problem though, x^x has no limit as x approaches 0 because the positive side will converge to one and the negative converges towards negative one (when defined)
@@joeym5243 But we need to define it. "Undefined" is just a code word of saying, "Screw this challenge. I'm turning back". This is very bad as it states that you are fearful and afraid of challenges. This is the exact opposite goal of humanity. Humans are meant to break away from nature using self-awareness, conscience, willpower, and imagination. This is why mankind managed to establish such civilization that sets them apart from all animals. We 21st-century humans must thank our long-gone ancestors by breaking away even more to make them proud. Einstein left in his will saying the first person that uses his theory of relativity to invent time travel must travel back to April 17th, 1955, to make him proud. "Undefined" is basically stating we are not used to those numbers, so let's just don't use them. It all depends on context. If we were living in Minecraft, a world without circles, and all of a sudden, a circle randomly appeared out of the blue, we would call it "undefined", but since in our world, we have polar coordinates, the premium package with the spherical bundle, we are accustomed to seeing circles, and we won't call them "undefined". Also, a long time ago, people worshipped the moon like a god at an "undefined" distance away from us, and they believed the sky's the limit, and everything they see in the night sky are basically pure celestial spheres of light at an "undefined" distance away from us, and the Earth was the point where those "undefined" distances converged to, but we managed to reach the moon and even send space probes outside our solar system, even attempting to reach the end of a universe, making such distances not "undefined" anymore. Finally, infinities are everywhere. Without it, the Big Bang wouldn't have happened, and every time you move, infinities are required to make it happen. Infinities created us, don't disrespect them by calling it "undefined" Divide by 0, spread your wings, learn how to fly, and do the impossible. We need infinities to make our dreams of time travel and superpowers come true.
Since a certain subset of nerds just love defining operations with no practical relevance, I've decided to inject some logic into the process. Any value of x that is raised to the power of n is expressed in the context of multiplication n times. So x squared is "x times x", x to the power of 1 is "x", and x to the power of 0 would therefore be expressed 0 times in the context of multiplication and thus would be "" (null). It really doesn't matter what x is.
You got me before mentioning the 0^1=0^(2-1). I immediately knew you were going to say that, even while you were still writing 0^(1-1). I think the empty product is what got me in the end. Still what happens with 0^z with complex z? I'm especially concerned with z being real negative or close to a real negative.
0^0 = 1, 0^z = 0 if Re(z) > 0, 0^z is undefined otherwise. This is easy to argue for, too. 0^z = 0^[Re(z) + Im(z)·i] = 0^Re(z)·0^[Im(z)·i]. If Re(z) > 0, then 0^Re(z) = 0, and 0·0^[Im(z)·i] = 0 regardless of what you may expect 0^[Im(z)·i] to be. If Re(z) < 0, then the expression is trivially undefined, because you have to divide by 0. If Re(z) = 0 but |Im(z)| > 0, then 0^Re(z)·0^[Im(z)·i] = 0^0·0^[Im(z)·i] = 1·0^[Im(z)·i] = 0^[Im(z)·i]. 0^[Im(z)·i] is considered to be undefined, since the only real way to deal with imaginary exponents is via b^(i·t) = exp[i·t·log(b)], but log(0) is undefined.
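As a small aside (my addition, not part of the comment): CPython's built-in power operator happens to line up with this convention, at least for the cases below.

```python
print(0 ** 0)       # 1    -> 0^0 = 1
print(0.0 ** 2.5)   # 0.0  -> 0^z = 0 when Re(z) > 0
try:
    0 ** -1         # would need a solution of 0*x = 1
except ZeroDivisionError as e:
    print("0**-1 ->", e)
try:
    (0j) ** 1j      # a purely imaginary exponent would need log(0)
except ZeroDivisionError as e:
    print("0j**1j ->", e)
```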
There are infinitely many places where x^y is undefined. The issue here is that 0^0 is not necessarily one of them. Because 0^0=1, 0^-1 (if defined) would need to be a solution to 0x=1, but this equation has no solutions in real or complex numbers, therefore 0^-1 must remain undefined. (Note that to keep this argument valid, I use only multiplication, not division. That is because division by zero is not allowed in a valid argument. Because it would make the entire argument itself invalid, division by zero can't be a step in any argument meant to establish that a value is undefined. We have to use other methods, such as multiplication, instead.)
Just because 0^0 is notationally convenient for series doesn't make it defined. All those sums, like in the binomial theorem, that contain 0^0 can be perfectly well written so that the term with 0^0 is not part of the sum and is instead considered separately. Like you said, you only need to redefine the sum. You don't really need 0^0 for the concept to work. It's just that the formula is not so pretty without it.
The issue is that if you redefine that one series, you break entire families of related series. The series is defined the way it is for a very good reason... The Taylor series of the exp function around 0 has an infinite radius of convergence, and is continuous everywhere. That's something you can prove without ever plugging in any values... If you plug in 0 you need to get 1, because that's what every sequence (exp(z))_z∈C approaching exp(0) converges to. If you consider the first term separately and leave 0⁰ undefined, you break almost every Taylor series, and you break the entire concept of continuous functions. That's not something you want to do.
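To make that concrete, a minimal sketch (my addition; truncating at 20 terms is arbitrary) of the exp Taylor series written exactly as sum x^n / n!, which quietly relies on 0^0 = 1 at x = 0:

```python
from math import factorial

def exp_series(x, terms=20):
    # the n = 0 term is x**0 / 0!; at x = 0 that is 0**0, which Python evaluates as 1
    return sum(x**n / factorial(n) for n in range(terms))

print(exp_series(1.0))  # ~2.718281828..., close to e
print(exp_series(0.0))  # 1.0, exactly because 0.0**0 is taken to be 1
```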
That is not the same thing. Analyzing the origin of the x^0 function is not the same as saying the number 0^0 is undefined. You can look at the limit of f(x) = x^0 from the left and right and have a well defined value for f(0). That is completely different from saying you can have a well defined value for the expression 0^0. For mathematical rigor, what you should write for the first f(0) of the Maclaurin series of e^x is the limit as x -> 0 of x^0/0!, and that limit is well defined as 1. But if the function were 0^x, the limit would be 0. What you are talking about has to do with the subject of removable discontinuities. I admit my real analysis is quite rusty and I'm only an engineer, but that is how I learned things in calculus 101.
@@Alkis05 The issue is the following: you can determine if a function is continuous based on some quite technical characteristics of said function; it's entirely overkill, but it works and it's rigorous. If it is continuous, then its limits tell you what the function evaluates to at any given point; if it isn't, they don't. Now, when you have a continuous function that evaluates to 0⁰ at any point, the limit will always be 1, and never anything else. There are plenty of discontinuous functions where that isn't the case, but they do not matter, because their limits don't necessarily allow you to deduce anything about what the function actually evaluates to at any point (limits do not commute with discontinuous functions). So unless you want to break a lot of math, you do not want to have 0⁰ be anything other than well defined and 1.
@@Alkis05 It depends on the sets you define them over: if you define sqrt(x) over C, it's not continuous; if you define it over R+, then sqrt(x) is continuous but 0^x isn't.
Yes, 0^0 can be 1 in some cases but not always. You can get functions in calculus where the limit of a function approaches 0^0 but if you try to rewrite the function, usually by using L'Hôpital's rule, you can get just about any other number. So, 0^0 is actually an indeterminate form in calculus just like infinity-infinity, 0/0 or infinity/infinity. Regardless, you can actually still define the super-square of 0 to be 1 because the limit of the function x^x as x approaches 0 is actually 1, so it might be correct in that sense.
*Yes, 0^0 can be 1 in some cases, but not always.* No. Stop spreading misinformation. There is no situation in which 0^0 = 1 is false. None. Every example you can think of is an example where you made a mistake elsewhere and you just did not realize it. *You can get functions in calculus where the limit of a function approaches 0^0, but if you try to rewrite the function, usually by using L'Hôpital's rule, you can get just about any other number.* No, no such scenarios exist. The problem is that you are failing to understand that if lim a = 0 and lim b = 0, this does not imply lim exp[ln(b)·a] = 1. This has nothing to do with the arithmetic expression 0^0, which appears nowhere in the evaluation of this limit, not if you evaluate it correctly. The only reason it appears in this limit is because people incorrectly write exp[ln(b)·a] as b^a, and because they then incorrectly say lim exp[ln(b)·a] = 0^0, which is false. Consider this: suppose for a minute that b^a = exp[ln(b)·a] really is true (and it is not true, but more on that later). If lim a = lim b = 0, then lim exp[ln(b)·a] = exp[lim ln(b)·a], since exp is a continuous function. However, it is false that lim ln(b)·a = lim ln(b)·lim a, and the reason this is false is because if lim b = 0, then lim ln(b) does not exist: ln(b) is a diverging sequence. So you cannot say exp[lim ln(b)·a] = exp[lim ln(b)·lim a] = exp[ln(lim b)·lim a] = (lim b)^(lim a) = 0^0, which is what you are doing here. But you also cannot say b^a = exp[ln(b)·a] to start with, because b^a is defined for arbitrarily real b and integer a, while exp[ln(b)·a] is defined for positive real b and arbitrarily real a. These are different expressions. If b is positive real and a is an integer, then the two expressions happen to be equal, but they are defined on incompatible domains, and the domain in which we are evaluating those limits is only compatible with the domain of exp[ln(b)·a], not the domain of b^a: you cannot freely vary a to be a real number, since a is an integer, and the integers are isolated points of the real numbers. *So, 0^0 is actually an indeterine form in calculus just like infinity-infinity, 0/0, or infinity/infinity.* Cursed be Cauchy for spreading the myth/hoax of indeterminate forms, and cursed be the education system for mathematics, for not correcting this large mistake. There is no such a thing as an "indeterminate form." This is nonsense. If you consider the function f : R*(R>0) -> R defined by f(x, y) = exp[ln(y)·x], then you can say that (0, 0) is a non-removable singularity of f. If you consider the function g : R*R -> R defined by g(x, y) = y - x, then you can say that (♾, ♾) is a non-removable singularity of g. If you consider the function h : (R\{0})*(R\{0}) -> R defined by h(x, y) = x/y, then you can say that (0, 0) and (♾, ♾) are non-removable singularities of h. You can say these things, but these things have absolutely nothing to do with, and have no relationship to, the associated arithmetic expressions 0^0, ♾ - ♾, 0/0, and ♾/♾, that people mistakenly attribute to these singularities. 0^0 = 1. Period. End of discussion. ♾ - ♾ and ♾/♾ are undefined, since there exists no possible elementary algebraic structure you can assign to Union(R, {-♾, ♾}), and 0/0 is undefined, because 0 has no multiplicative inverse. That is all. There is no such a thing as an indeterminate form or an indeterminate algebraic expression. 
You will never see a mathematician even pretending that these are real things in an academic written work, and I honestly have no clue why the education system chooses to continue insisting on spreading that particular myth around. The calculus curriculum, at a worldwide level, clearly needs some urgent overhaul. *Regardless, you can actually still define the super-square of 0 to be 1, because the limit of the function x^x as x approaches 0 is actually 1, so it might be correct in that sense.* No, this would be incorrect. The function j : R>0 -> R defined by j(x) = exp[x·ln(x)] does indeed satisfy lim j(x) (x -> 0) = 1. But that does not imply 0^0 = 1. What it does imply is that if we continuously extend j to [0, ♾), then j(0) = 1, but we have no reason to think that the continuous extension of j to 0 should be given by 0^0. This is absurd, and it betrays a fundamental misunderstanding of how limits and continuity and functions in general work. Simply put, the expressions f(p) and lim f(x) (x -> p) should not be treated as the same expression, ever. There is no context in which this is allowed, except as a literal abuse of notation, or except when you have already proven a priori that f is continuous at p, but this requires that f be defined at p and specified at p, which you cannot use a limit to do, unless f is already defined as the limit of some other function. They are different expressions, in general, with completely different definitions.
@@PhiDXVODs Okay. Let me make this as simple as possible. I think you and I can agree that (-1)^2 is well-defined, and is equal to 1. But exp[2·ln(-1)] is undefined. So that is the first example that b^a is not actually equal to exp[a·ln(b)]. I think you also can agree that 0^2 is well defined, and is equal to 0, yet exp[2·ln(0)] is undefined. So that is the second example. In conclusion, the equality b^a = exp[a·ln(b)] is false.
@@angelmendez-rivera351 "There is no such a thing as an "indeterminate form."" Um... isn't _literally all of calculus_ just solving different versions of 0/0 and 0*∞? The definition of the derivative is lim d->0 (f(x+d) - f(x))/d. If you pretend indeterminate forms don't exist and just substitute, you get (f(x) - f(x))/0 = 0/0. You need to pretend that d isn't 0 until the very end in order to actually solve it. The only way to get around this is to abandon real numbers and use infinitesimals. Then you'd have Re((f(x + 𝛆) - f(x))/𝛆). Tada! No indeterminate form! And all you had to do was claim that there exists a number infinitely close to 0 but not 0. You are correct however in that these aren't equivalent to 0^0.
@@angeldude101 *...isn't literally all of calculus just solving different versions of 0/0 and 0·∞?* No. To the contrary, there is never a situation where you must evaluate 0/0 or 0·∞. Calculus is all about evaluating limits. Limits are rigorously well-defined in the context of real analysis, and things like division by 0 or ∞ never appear in problem-solving. *The definition of the derivative is lim d->0 (f(x+d) - f(x))/d. If you pretend indeterminate forms don't exist and just substitute, you get (f(x) - f(x))/0 = 0/0.* Well, yes. Letting d = 0 is _not_ the same as letting d -> 0. I never said they are the same thing. Letting d -> 0 does not require division by 0 here. There is no 0/0 to consider. *You need to pretend that d isn't 0 until the very end in order to actually solve it.* Correct. So there is never a point where you actually encounter 0/0. You are proving my thesis here. *The only way to get around this is to abandon real numbers and use infinitesimals. Then you'd have Re((f(x + 𝛆) - f(x))/𝛆). Tada! No indeterminate form! And all you had to do was claim that there exists a number infinitely close to 0 but not 0.* No. Nonstandard analysis is a perfectly valid way of handling this, but it is not more valid nor superior to real analysis. Your dismissal of real analysis here is ignorant.
At 5:40 you say that 1*3^3 = 3 * 3 * 3, and so on. You also say that to go an exponent lower, you divide by said number. If you apply this to zero, you would be dividing by 0. And it also wouldn't make sense for 0^2 to be lower than 0^0, since 0 is less than 2. So they have to be the same.
Also, in the combinatorial interpretation, x^y is the number of lists of length y of elements taken from a set of size x. Regardless of the set and hence the value of x, we have one list of length 0, namely the empty list.
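This combinatorial reading can be checked mechanically; a small sketch (my addition), with itertools.product enumerating the length-y lists over a set of size x:

```python
from itertools import product

def count_lists(s, y):
    # number of length-y tuples with entries drawn from s, i.e. |s|^y
    return sum(1 for _ in product(s, repeat=y))

print(count_lists({1, 2, 3}, 2))  # 9 = 3^2
print(count_lists({1, 2, 3}, 0))  # 1 = 3^0   (only the empty list)
print(count_lists(set(), 2))      # 0 = 0^2   (no way to fill two slots)
print(count_lists(set(), 0))      # 1 = 0^0   (the empty list still counts)
```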
This popular thought that 0^0 is "indeterminate" or "undefined", and how people use limits to jump to apparent conclusions about set expressions that are unrelated, like 0^0, is a good example of how limits, or more generally, fundamental concepts in analysis are mistaught. It is apparent that many teachers lack a good understanding in these subjects and fail to communicate the ideas properly, leading to confusion among students. As confusing as these subjects are to a new learner, I believe these misunderstandings could be prevented if it wasn't for the current education system (which teaches you more about how to answer exam questions than actually giving you a good foundation for the subjects, generally speaking)
The neutral element with regard to multiplication is 1, meaning that if there are no factors (which is what ^0 means), you get 1. Likewise, the neutral element with regard to addition is 0, meaning that if there are no summands (which is what *0 means), you get 0.
The set theory explanation seems simpler and more intuitive. There is always one way to put nothing in a set. Raising anything to the zeroth power asks that set question, and putting nothing into the set counts as one solution, hence X^0 has 1 solution, even for 0^0 because an empty set is still a set, and still has 1 solution.
I agree, but this is not a definition of exponentiation that is taught in schools, and writing the definition formally requires concepts that are not taught in school either.
There is no good reason to not define 0^0 as 1. It just makes sense. It's the empty product of elements that are all 0. Empty products are 1. That being said, 0^0=1 is not strictly necessary for the purpose of series expansion. Technically, a polynomial ring R[X] over a commutative ring R is defined by extending R by an element X, i.e. we add X and all elements necessary to make the new set a ring itself. That way we obtain all the powers X^n of X and the linear combinations of the resulting elements. At this stage X is not a number or representative of one, even though our intuition obviously tells us that a primary intent is for it to represent a number. This means that we can easily define series by working in the polynomial ring R[X] over the appropriate ring R. In that case, X^0=1. To evaluate a polynomial, we just fix the condition that we evaluate polynomials in their "reduced" form, and so we never get 0^0, because that would only occur when we have the power X^0 which "reduces" to 1 before being evaluated. We never technically need 0^0. And this isn't as clumsy as my explanation suggests, there's nothing fundamentally hard or wrong about. But the question is: why bother? What are we preserving that is of so much value by not defining 0^0=1 and forcing all these ways to make it work without it? Just make 0^0=1. It makes sense, it works, it's intuitive, and more importantly it's probably simpler for people who already struggle at math to have concrete answers on these things rather than yet another "it's not defined" moment that can be very confusing.
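A tiny sketch of the "you never technically need 0^0" route (my addition; the coefficient-list convention and helper name are mine): evaluate the polynomial by Horner's rule and no power x^k is ever formed, so 0^0 never comes up.

```python
def horner(coeffs, x):
    """Evaluate a0 + a1*x + ... + an*x^n given coeffs = [a0, a1, ..., an]."""
    result = 0
    for a in reversed(coeffs):   # work from the leading coefficient down
        result = result * x + a  # no x**k is ever computed
    return result

print(horner([1, 2, 3], 0))  # 1  -> the constant term, with no 0**0 in sight
print(horner([1, 2, 3], 2))  # 17 = 1 + 2*2 + 3*4
```

Which is exactly the comment's point: it can be done, but defining 0^0 = 1 lets the plain sum-of-powers form work as written.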
Using the definition of “one less 3 every time” instead of dividing by 3 every time we reduce the power by 1, how would that work with negative exponents? Doesn’t this kind of break that logic? Not a criticism, genuinely interested.
Negative whole numbers are not counting numbers. They are defined by way of the notion of "additive inverses." 0 and the positive whole numbers, on the other hand, are natural numbers, and so they can count how many elements there are in a collection. It makes perfect sense to talk about there being 0 copies of an object, or about there being 0 houses in a specifed region, but it does not make sense to talk about having -1 houses. z^n, for natural n and z being an element of a multiplicative monoid, is defined as being equal to the product with n copies of the factor z. I can talk about there being 0 copies of the factor z (including z = 0) and it makes perfect sense, and this can be made rigorous using multisets, but it does not make any sense to talk about there being -1 copies of the factor z in the product. So does this break the logic? Yes, it does, but this is not a problem. Why is this not a problem? Because when you extend this definition so that z^n is well-defined for every whole number n, as opposed to only every natural number n, this is, we are now working with Z, instead of N, then no matter how you choose to extend this definition, the extension MUST include the previous definition as a special case. Otherwise, it is not actually an extension of the definition in question. So regardless of which route I take to define z^n for negative n, this route has to simplify to z^0 = 1, z^1 = z, etc, when n is nonnegative. For natural numbers n and m, the equation z^(n + m) = z^n·z^m is always satisfied. We would like this to be true for every whole number as well, not just natural numbers, so that I can have, for example, z^0 = z^1·z^(-1). Since z^0 = 1 and z^1 = z, it follows that z·z^(-1) = 1. In other words, given how we have chosen to define exponentiation for arbitrary n, z^(-1) has to represent the multiplicative inverse of z, and this is a consequence of how the definition simplifies as a special case when n is nonnegative. This also explains very nicely why 0^(-1) is undefined: because 0·0^(-1) = 1 must be true, in order to satisfy the fact that 0^(n + m) = 0^n·0^m, in this case having n = -m = 1. However, as we know that the equation 0·x = 1 has no solutions in the field of complex numbers, it follows that 0^(-1) is undefined. In light of this, it must be clarified that z^(n + m) = z^n·z^m is true whenever z^(n + m), z^n, z^m are all well-defined.
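A rough sketch of the extension just described (my own code and naming, using exact fractions so the arithmetic stays honest): natural exponents are a loop seeded with the empty product, and negative exponents are forced to be reciprocals, which is exactly where base 0 breaks.

```python
from fractions import Fraction

def int_pow(z, n):
    """z^n for integer n, in the spirit of the comment above."""
    if n >= 0:
        result = Fraction(1)            # empty product: z^0 = 1, even for z = 0
        for _ in range(n):
            result *= z                 # multiply in one more copy of the base
        return result
    # negative n: z^n must satisfy z^n * z^(-n) = z^0 = 1, so it is a reciprocal
    return Fraction(1) / int_pow(z, -n)

print(int_pow(Fraction(0), 0))   # 1    (no division involved)
print(int_pow(Fraction(2), -3))  # 1/8
# int_pow(Fraction(0), -1) raises ZeroDivisionError: 0*x = 1 has no solution
```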
@@WallCarBoatHead Yes, you are dividing by 3, but the point to understand here is that this needs to be motivated properly. We use division for negative exponents, because it does not make sense to talk about multiplying a quantity by -1 repetitions of the factor 3, so talking about one less repetition is also nonsensical in that case. However, it does make sense for 0 and other natural numbers, which is the point the video tried to get at.
Isn't the introduction of 1 * 3^0 kind of arbitrary? What you really have here is a geometric sequence. For some x^n, we have that t(n-1)=x*t(n). Suppose we know that some t(n-1)=3^1, x=3, then we simply have 3^1 = 3*t(n) ==> t(n)=3^1/3 = 1 = 3^0. But this breaks down with x=0. t(n-1)=0^1=0 ==> 0 = 0*t(n) ==> t(n)=0/0=0^0. (undef.)
He already debunked this argument at the very beginning of the video. Obviously, if you try to divide by 0, then you will obtain nonsense, but no one told you that you need to divide by 0. Also, there is nothing arbitrary in introducing 1·3^0. It is a fact of reality that 1·x = x is true for every x. If 0^0 has a value, then it will not produce any contradictions if you multiply by 1. Your reframing of his argument is also wrong. There is no division to be done here. What he is doing is not a recursion, but rather, he is using exponents as a way of counting the number of the base factor in the product, which is, you know, actually the definition of exponentiation in the first place. If there are 0 copies of the base factor 0 in the product, then the product cannot be equal to 0. In fact, since there are no other factors either, the product is empty, hence equal to 1. By the way, the argument he used is also the same argument that proves that 0! = 1.
@@angelmendez-rivera351 Factorial is defined as follows; T(n)=n*T(n-1), T(1)=1! = 1*T(0) ==> T(0) = 0! = 1!/1 = 1. The argument used to prove 0!=1 is precisely my argument that 0^0=1.
@@BiscuitZombies You don't need to divide by 1 to show that 0! = 1. You can use the empty product argument that Angel Mendez-Rivera mentions as well. With the empty product, 0! = 1 is _immediate_ from the definition of factorial. Just because you've seen a justification for something in the past doesn't mean it's the one true way to do things. The empty product is a _far_ cleaner and more useful approach than what you've seen in the past.
@@BiscuitZombies I know what you are trying to get at. Division is the intuitive introduction to how we do empty products, but it is not how the idea is made formal in mathematics, which is really my point here. The video managed to do a good job of emulating the concept without throwing a rigor bomb at the audience, but also without carelessly appealing to division where it does not apply.
The pattern you used in the end also leverages limits, as essentially you are looking at 0 / 0 each time you lower the power. 0 / 0 isn't defined, but the limit of x / x as x approaches 0 is certainly 1. 0^0 isn't equal to 1.
No, the pattern he used at the end does not leverage any limits, and it does not even use division. The pattern he used at the end is what any reasonable person on planet Earth would call "counting." Yes, this is something that even a kindergarten child can do. You can count how many copies n of the base z you have in the product, and then you can denote that product as z^n. In fact, this IS the *definition* of exponentiation, and I would hope that you know this. So z^0 is nothing but the product with 0 copies of the factor z. Since you have 0 copies of the factor, the product has 0 factors, and the product of 0 factors is equal to 1. Therefore, z^0 = 1, for any z... including z = 0. Did I use division at any point in my demonstration? No, and neither did Bri.
@@angelmendez-rivera351 At 5:18 he quite visibly uses the notation for division to show what he is doing at each step down. Also how would fractional exponents work with this line of thinking? We know that they are perfectly well defined and 0^0 would have to fit that definition as well.
@@theimmux3034 You are ignoring that, almost immediately after that, from 5:24 to 5:32, he explicitly states: "it's not that we are dividing by 3 each time, it's that we're multiplying by one less 3 each time." He did begin by mentioning division, only to clarify afterwards that division is not necessary to actually get there, division is merely the first intuition people recur to get there, but it does not accurately capture the idea of counting. *Also, how would fractional exponents work with this line of thinking. We know that they are perfectly well-defined, and 0^0 would have to fit that definition as well.* The definition for fractional exponents is an extension of the definition for natural exponents, so you are correct that 0^0 = 1 would have to fit such a definition as well, and it does fit it. Every rational number can be expressed as a product t·u, where t is an element of Z, and u is a unit fraction. Given that exponentiation to the power of t would already be defined prior to defining it for fractional exponents, all we need to do is find an appropriate definition for z^u, and then z^(t·u) := (z^u)^t. It should be important to note, though, that (z^u)^t may not necessarily be equal to (z^t)^u, so the order absolutely matters. Now, every unit fraction is equal to u = 1/m =: m^(-1), where m is an element of Z. Given that we want to preserve z^n·z^m = z^(n + m) whenever z^n, z^m, and z^(n + m) are well-defined, an appropriate way of defining it would be to say z^u = z^(1/m) := root(m, z), where root(m, z) is the mth root function evaluated at z. In fact, this is the only way of defining z^q for rational q I have ever seen in my life. 0^0 = 1 still satisfies this definition. If you want to go further and define it for real exponents, then you can run into problems using the base 0 if you are not careful, but in general, you actually run into problems with multiple bases. While there are ways to define exponentiation for real exponents, they all do involve some ad hoc post rationalization when the base is not a positive real number. So ultimately, you may not even want to use power notation to work with real numbers. This is why in analysis, nearly everything is done in terms of the exp function and the ln functions instead, but even then, this is not proof. For example, defining exponentiation x^y as exp[y·log(x)] does not work for x = 0, not even if y > 0 because log(0) is undefined. So you have a few alternatives: piecewise define x^y so that you can define 0^y separately, but if you do not want to rely on the definition for rational y, then defining 0^y for any y at all is arbitrary; or you use a limit argument that 0^y for y > 0 should be 0 because lim exp[y·log(x)] (x -> 0) = 0 for y > 0. This same argument also results in 0^0 = 1, because lim exp[0·log(x)] (x -> 0) = lim exp(0) (x -> 0) = lim 1 (x -> 0) = 1. However, this basically requires you that you switch your definition to being simply x^y := lim exp[y·log(t)] (t -> x), which many take to be unsatisfactory. So defining exponentiation for real and complex exponents can be done unambiguously, but whether those definitions are satisfying is up to debate. Many mathematicians opt to simply not use powers when they are working with nonrational quantities. However, these definitions all result in 0^0 = 1. I have yet to see a definition that, if correctly stated and applied, does not result in 0^0 = 1.
That is also something I noticed. It is still limits--yes, even if it's talking about removing a multiplication. It still goes from a number above 0 down to 0 to find the value of a function at 0. That doesn't make it wrong, because, as I argue above, using limits is a standard way to try and find a value for something. It is however, due to using limits, a definition rather than a proof. Just like x^0 = 1 is a definition, not a proof. It's an extension of the meaning of exponents. It is a natural definition, but it is a definition all the same, just like analytic continuation. The main thing is that 0^0 = 1 is useful in sequences, hence it makes sense to define it that way. The funny thing is, the natural limit definition also works in that context, too.
The meaning of a power is x^a*x^b=x^(a+b) and x^1=x. So x^a*x^0=x^a and 0^0*0^1=0^1. So x^0=1 and 0^0*0=0. So 0^0 = any number, because any number times 0 = 0.
For the power definition, couldn’t you technically multiply 0 by any coefficient, because anything multiplied by 0 is 0? Wouldn’t this mean that 0^0 could equal any number and therefore is undefined?
You can, but you still get 0⁰ = 1. For example, let’s use the coefficient 4. First we’ll do threes because that’s what’s in the video.
4 × 3³ = 4 × 3 × 3 × 3
4 × 3² = 4 × 3 × 3
4 × 3¹ = 4 × 3
4 × 3⁰ = 4
We can solve this last equation by dividing both sides by four to get 3⁰ = 1. Now let’s try with 0.
4 × 0³ = 4 × 0 × 0 × 0
4 × 0² = 4 × 0 × 0
4 × 0¹ = 4 × 0
4 × 0⁰ = 4
We can solve this last equation by dividing both sides by four to get 0⁰ = 1.
@@TheBasikShow That assumes that 0^0 is 1 to begin with. If you remove the 4 * on the left side, the right side still gives the correct answer. If the left is 0^0 in this case, the right side would say 4. Nothing has been proven.
@@noahali-origamiandmore2050 Of course this isn’t a proof, it’s a pattern that gives a justification for the definition that 0⁰ = 1. If you want a /proof/ then you first need to explicitly define what exponentiation is, and when you do-surprise, surprise!-you always get 0⁰ = 1. I’m confused about what you said with removing the 4 from the left side; if you remove the four it becomes a different equation.
@@TheBasikShow No you do not get that 0^0=1 because no such "proof" proves that 0^0=1. 0^4=4*(0*0*0*0) 0^3=4*(0*0*0) 0^2=4*(0*0) 0^1=4*(0) 0^0=4 I don't need to multiply the left side by four because you can see that all lines before the 0^0 line are valid. You can replace 4 with any number and end up proving that 0^0 can be any number. "Taking away" factors is not valid in this case. But you're probably thinking that it's only valid to use 1 and not 4 because any number times 1 is itself. However, going from 0^1 to 0^0 involved this "taking away" a factor of zero. That's absurd because that's the same thing as dividing by 0, which is invalid. 0^2=0 not because "taking away" a factor of 0^3 gives you 0^2 but because 0^2=0*0. The only reason that you can "take away" factors from nonzero bases is because dividing by nonzero numbers is okay, but to get from 0^1 to 0^0, a division by zero was involved.
3:10 Another problem with the limits argument is that the limit of 0^x doesn't actually exist. Sure, from the right it equals 0, but 0 to any negative power is of the form 1/(0^n), i.e. 1/0. There is no limit from the left because 0^x is undefined to the left of 0. The 2nd limit in this argument doesn't even exist.
The limit is an idea. In the same way a line can be discontinuous at certain points, both limits that he drew are approaching different things, but both share the point 1 on the y axis
@@kanewilliams1653 Consider g(x, y) = x^y (assuming we know what is being meant by x^y in the first place, because this is a discussion that needs to be had). It can be proven that lim g(x, 0) (x -> 0) = 1, and that lim g(0, y) (y -> 0+) = 0. For this reason, you can safely say that lim g(x, y) (x -> 0+, y -> 0+) does not exist. This is the argument being used for declaring that 0^0 must be undefined. But what the video is saying is that lim g(x, y) (x -> 0+, y -> 0+) not existing does not actually imply 0^0 is undefined. So the argument that #TeamUndefined is resting on is completely invalid. In fact, there is no contradiction in saying that g(0, 0) = 0^0 = 1 and lim g(x, g) (x -> 0+, y -> 0+) not existing. Seriously, there is no contradiction. Because the first equation is about what g is *exactly at* (0, 0), while the limit is about that g is *very close to* (0, 0). Do you see how they refer to different things? Look at this way. You are familiar with the floor function, right? The floor function, also known as the greatest integer function, is defined for every real number, and what it does is output the greatest integer that is smaller or equal to x. If you want a formal symbolic definition, then floor(x) = n n =< x < n + 1, n is an integer. So having established this, you may now ask yourself, what is lim floor(x) (x -> 1)? If you look at the graph on Wikipedia, what you will see is that lim floor(x) (x -> 1-) = 0, but lim floor(x) (x -> 1+) = 1. So clearly, lim floor(x) (x -> 1) does not exist. Now I ask you: should the nonexistence of lim floor(x) (x -> 1) lead you to conclude that floor(1) does not exist? Because I just gave you the definition of the floor function, and if you substitute x = 1 into said definition, then floor(1) = 1. In fact, this is mathematically correct, and no one on planet Earth disputes that this is correct: everyone knows that the greatest integer that is smaller or equal to 1 is 1 itself. But if you then say that lim floor(x) (x -> 1) implies floor(1) is undefined, then you are saying 1 is undefined. Do you see the problem with this logic? What needs to be understood here is that limits, given how they are defined, are about how functions behave *near* a point, not *at* a point, so you cannot use any information about lim floor(x) (x -> 1) to make any conclusions about floor(1). The only thing that lim floor(x) (x -> 1) not existing tells you is that the floor function is discontinuous at 1 (in fact, discontinuous at every integer, if you look at the graph), but it being discontinuous at 1 does not make it undefined at 1. To actually know what floor(1) is, you need to use the definition of the floor function, not limits, in the same way that if you want to calculate 2·3 or sqrt(9) or sin(π), you use the definition of multiplication, the square root function, and the sine function, respectively, you do not use limits. Arithmetic, algebra, and computation exist for this precise reason. Similarly, if you want to determine what g(x, y) = x^y is when x = y = 0, then you need to look at the definition of the symbol x^y itself, not use limits. Limits can tell you something about whether g is continuous at (0, 0), but they cannot tell you whether it is defined at g, and what its value is, if any. So you need to ask yourself, what is the definition of x^y? What is exponentiation? These two questions have clear answers, and those answers do entail that 0^0 = 1. 
The video explained very briefly near the end why 0^0 = 1 is entailed by the definition, but it did not go into a whole lot of detail about it.
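For anyone who wants to poke at the floor example above, a quick numeric check (my addition): the one-sided limits at 1 disagree, yet floor(1) itself is perfectly well defined.

```python
import math

print(math.floor(0.999999))  # 0  (approaching 1 from the left)
print(math.floor(1.000001))  # 1  (approaching 1 from the right)
print(math.floor(1))         # 1  (the value AT 1 comes from the definition, not from limits)
```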
@@angelmendez-rivera351 Thanks for the detailed reply. I understand the idea that limits approach a function's point and are not actually representative of a function at a certain point. Your floor function shows this well. What I am still confused about is why the definition of 0^0 = 1 appears out of an everywhere-else continuous function where all other values are 0 (in the case of 0^x, as x varies). The floor function is DEFINED to be such that the piecewise graph is how it is. But exponentiation has no analogous definition. Instead, the arguments given in the video appear to be that i) It is just convention, because it simplifies some equations we use in higher math (which is itself a circular argument, because were we to use alternative definitions of 0^0 we might have alternative definitions of these further higher-level equations, and ii) because it follows from the "dividing by 3" property which holds naturally for all integers excluding zero, in which case 0^3 = 0, 0^2 = 0, 0^1 = 0, 0^0 = 1 (!), and 0^(-1)=0, and so on. Do you see my confusion? Hope that makes sense
@@kanewilliams1653 *What I am still confused about is why the definition of 0^0 = 1 appears out of an everywhere-else continuous function where all other values are 0* I understand what you are getting at, but 0^x is not everywhere else continuous. It is not even defined everywhere. For example, I think you would agree that 0^(-1) is undefined, because 0^(-1) would have to be equal to 1/0, but 1/0 is undefined. However, what you can say is that 0^x is continuous everywhere in the positive real axis, yet right-discontinuous at x = 0 if we accept that 0^0 = 1. And yes, I can see how that can be a little puzzling at first glance, but I think it still makes perfect sense. *The floor function is DEFINED to be such that the piecewise graph is how it is. But exponentiation has no analogous definition.* This is true, but this is also because the floor function is a unary function: it takes a single real number as input. Exponentiation is a binary function: rather than taking a single real number as input, it takes a pair of real numbers (x, y) as input, and it gives you an output that we denote x^y. This makes exponentiation more complicated and more prone to having discontinuities, especially because the definition of x^y for real numbers is ultimately an extension of the definition of natural numbers too. For a second, let us just think about x^n looks like, when is a natural number. How is this defined? Well, the exponent is there to *count* how many "copies" of the factor x we want in the product. So if I write x^4, this means that I am denoting a product with 4 copies of the factor x, x·x·x·x. There is a rigorous way of formulating this intuitive definition, but I do not really have a mathematical keyboard so it would be sort of tedious to do so. Anyway, you should be familiar with this idea already, since this is how polynomials work. So when thinking about the case n = 0, you are denoting the product with 0 copies of rhe factor x, a.k.a, a product with 0 factors. And this may seem to be counterintuitive, and it could even sound nonsensical. What even is a product of 0 factors? Well, imagine this. You can always multiply x^n by c to get c·x^n. Again, you probably have seen this, since this is idea polynomials are based on. The product c·x^n can easily be seen to be the product where c is multiplied by x exactly n times. So now, the exponent *counts* how many times you apply the action of multiplying by x to the "input" c. Done like this, it is now very easy to justify why x^1 = 1 and x^0 = 1 have to be true for every x. c·x^1 denotes the product where c is multiplied by x one time, so it is just c·x. Since this holds for arbitrary c, this means x^1 = x. Similarly, c·x^0 denotes the product where you multiply c by x zero times. If you do the multiplication 0 times, that means you just do nothing to c, leaving it unchanged. So c·x^0 = c. Since this holds for arbitrary c, this implies x^0 = 1. But notice that x itself was arbitrary too, so this has to hold for every complex number x. Notice how this includes x = 0. So this implies that 0^0 = 1, and that makes sense given the way exponentiation is usually defined. So this is how you end up with 0^0 = 1 but 0^1 = 0. Of course, for a variety of reasons, it is useful to extend this to rational exponents or real exponents, and not just work with natural exponents. This is tricky, but it can be done. 
The idea is that we would like for x^(y1)·x^(y2) = x^(y1 + y^2) to always be true whenever the individual parts can be defined, and we would obvious like our definition of x^y to simplify to the definition above when y happens to be a natural number. You may notice that x^n = lim exp[n·ln(t)] (t -> x) is true for every n and every x, and you realize that if you define x^y := lim exp[y·ln(t)] (t -> x) for real y, it does satisfy the properties we want it to satisfy. You may wonder why there is limit in there, and that is because without the limit being there, not only would 0^0 be undefined, but actually, 0^n would always be undefined for any n, since ln(0) is undefined as well. The limit solves this problem, but some people choose to omit it and take as an implied-by-context notation to simply write exp[y·ln(x)] instead. This is still somewhat sloppy notation, but okay. *Instead, the arguments given in the video appear to be that i) It is just convention, because it simplifies some equations we use in higher math ... and ii) because it follows from the dividing by 3 property...* I completely agree with you on point i). Convenient notation is itself not a proof that 0^0 = 1 is true, it is only an argument that says that 0^0 = 1 is a useful convention, though that does not make it a definition. And in fact, I agree that ultimately, we could just change the notation that is used in the binomial theorem and in the Taylor series theorem and not have to use 0^0 = 1 at all. As for it being a common convention, this is accurate if taken to be just an oversimplification of the debate. There is some nuance and historical context regarding how mathematicians view 0^0 and why it is taken "as a convention," and not as, say, a theorem. But that nuance also involves a debate concerning notation in mathematics too. As for point ii), I definitely see where you are coming from. As Brian clarifed in the comment thread started by Muffins, Brian wanted to essentially emulate the idea of the empty product, but without actually having to rigorously make use of that, and instead appeal to the concept by trying to get the viewer to develop the intuition for themselves with his argument. But the way he presented that argument was a little confusing, and it may just have been better if he had chosen to explain it using an arbitrary constant c instead of using 1 specifically, because then it makes people think that the argument has something to do with 1 multiplied by 0 is 0, which is superficially undermined by what some people brought up: that 0 multiplied by any number is 0 too. But if he had done it with an arbitrary constant, then the logic he is trying to use would be more clear.
Awesome video! I'd like to give my opinion on the argument you gave at 3:50. You mentioned that the binomial formula and the infinite sum of e^x require 0^0 to have a definition to prevent any restrictions like x cannot equal 0. However, I believe that the binomial formula and Taylor series (infinite sum thingy) would work just as well if 0^0 isn't defined as a value. The reason is that the zero in the exponent is defined as an integer. Since it is an integer, it can't get arbitrarily close to zero (it can equal zero, but you cannot take a limit as it approaches zero). Thus, if you plug x=0 into any of the formulas, you can use a limit to give it a value, namely the limit as x→0 of x^0, which equals 1 (reminder, the reason why I can write a zero in the exponent is because it is an integer). If you disagree with my reasoning, feel free to leave a reply and discuss.
Your argument is fallacious. What you are essentially saying is that, in the binomial formula, (0 + 0)^n should be replaced with lim (x + y)^n (x -> 0, y -> 0). Doing this misses the point of what limits are. By definition, limits only tell you about the value of a function *near* a point, not *at* a point. At this point, I am repeating myself, because I have had to say this throughout many other comment threads to this video. So limits cannot answer the question of what is (0 + 0)^n, and replacing this with lim (x + y)^n (x -> 0, y -> 0) is not actually valid.
@@davidmelo9862 No, because to show that it is continuous everywhere for n = 0, it necessarily has to be the case that 0^0 is defined in the first place.
If you want the binomial theorem to hold for every natural n, and every complex number x and y, then you necessarily need to have 0^0 = 1. Using limits is not going to work, because then it will not hold for every complex number x and y, and it would not be the binomial theorem if you had to use limits.
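A small sketch of this point (my addition; the helper name is mine): the binomial sum written verbatim as the sum over k of C(n,k)·x^k·y^(n-k) only agrees with (x+y)^n at x = 0 because the k = 0 term uses 0^0 = 1.

```python
from math import comb

def binomial_expand(x, y, n):
    return sum(comb(n, k) * x**k * y**(n - k) for k in range(n + 1))

print(binomial_expand(2, 3, 4), (2 + 3) ** 4)  # 625 625
print(binomial_expand(0, 3, 4), (0 + 3) ** 4)  # 81 81  (k = 0 term: comb(4,0) * 0**0 * 3**4)
```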
@@angelmendez-rivera351 For n=0, the binomial theorem will state that (x+y)^0 = x^0y^0, in which case what I said has to define 0^0 of course as it is the way it is defined in a combinatoric setting. I agree that 0^0=1, but the argument that "we have to change our whole notation if we say it's undefined" is kinda not true because one can get (0+y)^n = lim(x+y)^n (x->0) = lim(x->0) x^0 y^n = y^n Which is the case when these have continuity in any meaningful form. (n!=0) Of course there are problems with this since the binomial theorem is not really about numbers as it is commutative structures.
It's not controversial: it is undefined. That's what real mathematicians say when there are two (or more) different arguments that lead to differing results. "Controversy" => "undefined". Please now produce some real maths instead of nonsense
I wrote 0^0 as e^ln(0^0); properties of logarithms state that this is e^(0·ln(0)). What if 0 = 2*0? Then we have e^(0·(ln(2)+ln(0))), which is different from our original but still the same. We also have an ln(0) here, which doesn't help its definition.
as a 10th grader, I see this as an absolute win
I’m a 5th grader I think it’s 0
Back to the first video. The 0 sq. root of X, I think, should be not undefined but all numbers up to and including infinity.
Someone: "It should be 0"
Someone else: "It should be 1"
Someone else else: "Let's split the difference and call it 1/2"
**laughs in Ramanujan**
Ah! A Numberphile fan I see!
@@georgecaplin9075 Wait, no, well kind of. I know about Numberphile, but I just came up with this on the fly.
@@krillinslosingstreak OK, its just that they’re infamous for “proving” that all the integers add up to -1/12 and part of their “proof” was a graph with a sine wave shape which oscillated from 0 to 1. This, according to them, was the same as a straight line at y = 1/2. It most definitely isn’t. In calculus possibly, but not the way they were trying to use it. (Sorry for the long explanation.)
@@georgecaplin9075 It's fine. :) I must have watched that a while ago, I don't remember much of it, but I do remember the -1/12 thing.
Every 'controversial' number seems to have something to do with 0 and 1 colliding in some way, it's beautiful. Both are such iconic numbers.
That comes rather naturally as these two are the neutral element for multiplication and addition. This is for example also the reason empty products and empty sums equal 1 resp. 0.
@@E942-h2d Also, technically these are the only numbers required to build an entire number system.
0 and 1, partners in crime.
Proof that we're living in a simulation. Everything goes back to 0s and 1s.
@@spectralumbra1568 If we were in a simulation, then everyone would follow some sort of rudimentary logic. Let me assure you friend, some people most certainly do not
0, what should be the simplest number in math, always subverting our expectations. Great work as always
Ikr, its value is literally nothing
But it can create the most conflicts and confusions
@@mathgeek5698 exactly
Exactly!
@@BriTheMathGuy Yeah!
New drinking game: Take a sip every time he says ‘zero’.
🏴☠️
Your chances of survival are what's gonna cause your death: "zero"
zero is said 56 times in this video
1 minute in and I already finished 1 bottle
Say zero enough times and it turns into another number?
"We can't have 2 results" square roots would like a word with you.
No, even with the square root function, you still get 1 result, hence why it is called a *function.* I do not care if Wikipedia thinks otherwise in its opening paragraph, as Wikipedia also contradicts itself throughout that article.
@@angelmendez-rivera351 The Wikipedia article separates the "square root function" from the concept of square roots, as it should. That function is a mapped for convenience with geometry. And yeah, that is the same thing he did with 0^0 to make it determinant, but I think it's interesting that square roots are never referred to as "indeterminant" despite being similar.
@@biggerdoofus This is wrong in two ways.
1. Wikipedia is still somewhat wrong for the simple reason that is just abusing some of the language. The name "square root" has conventionally always referred to the sqrt function, and as for the "square rootS," this is where the abuse of language kicks in. Wikipedia should instead be talking about the roots of the polynomial x^2 - y, rather than "the square rootS of y," as this just leads to conflation, and as I said, is abuse of language. Wikipedia is usually a pretty good source when it comes to mundane aspects of a topic, but this particular thing is far from mundane, and it likely is in Wikipedia for the same reason that teachers actually teach this in schools, being incorrect and all. This is why most people nowadays are under the impression that sqrt(9) = +/-3 as opposed to simply sqrt(9) = 3.
2. The word "indeterminate" in mathematics has always referred to limiting forms, specifically. In fact, saying "0^0 is indeterminate" is also an abuse of notation. It would be more correct to say lim f(t)^g(t) (f -> 0+, g -> 0+, t -> c) is an indeterminate form, and if you wanted to use a shorter notation, you should write that (-> 0+)^(-> 0+) is indeterminate instead.
This seems incredibly pedantic, but notation in mathematics is a super important thing, and every "controversial" topic in mathematics that does not involve an unsolved conjecture can always just be simplified down to a matter of misunderstanding and misusing notation, and in particular, to problems with the education system, rather than problems with the actual mathematics that involve these notations. No mathematician ever woukd dispute that the empty product is equal to 0. So the claim 0^0 = 1 should not even be considered a problem at all. But because the notation "0^0" is frequently used to instead refer to "(-> 0+)^(-> 0+)" rather using notation the way it should be used (particularly among teachers), people get confused and feel that 0^0 = 1 therefore has to be wrong. It's no different with the issue of 0.9... = 1. This equation is irrefutable. I can prove it rigorously. And the mathematics themselves are not controversial to anyone. What is controversial is the fact that most people interpret "0.9..." to mean something completely different than what it actually is defined as, hence creating confusion and, yes, controversy. This is why, rather than enforcing abuse of notation and of language that has been commonly practiced by people for decades or even longer than a century, I am rather militant about the way notation is used. Using notation incorrectly just makes learning mathematics much harder than it needs to be, and it makes the pedantry feel that much more infuriating to people.
@@angelmendez-rivera351 Ah. In that case, Wikipedia is also wrong to have a bunch of articles about the "algebraic" meaning for "indeterminate". I don't have any resources for the history of the term though, so I can only assume those articles are being written and verified by people who are math-adjacent rather than just mathematicians. I'm a hobbyist game dev and usually use multiple languages at once, so I may be a little too used to notation being arbitrary and functions being allowed to return weird results or sets of results.
@@biggerdoofus Wikipedia is somewhat unique among social platforms in that its users (i.e. editors) are not allowed to cite personal expertise/experience in their contributions, because that isn't verifiable. Instead, since Wikipedia insists on third-party citations (especially when there's doubt regarding particular facts), the defense against incorrect information is that the editor without (good) sources cannot defend against the editor with them. The only facts immune from the citation requirement are facts that are so incontrovertible that even absolute laypeople don't doubt them (I'm talking about like, "April 4, 2000 was a Tuesday" kinds of facts here).
At any given time, most articles on Wikipedia are in a state of "consensus," meaning nobody is challenging the integrity (correctness, completeness, reliability, or clarity) of the article. While it could just be that it's a tiny article that very few people come across, for a more popular article a consensus indicates the tacit approval of the vast majority of its readers. And since all it takes is one person to challenge the status quo, if there aren't any recent major edits, a [citation needed] or [dubious] tag, or a discussion on the talk page regarding the particular information you're reading, it's a fair cop that nobody (not even the biggest experts) reading the article disagrees -- at least, not enough for the article to be worth changing.
But when in doubt -- and especially when things of importance ride on correctness -- use the citations. Wikipedia works a lot better as a way of finding trusted citations than as a source itself. :D
My math teacher casually threw a 0^0 at me and got mad at me for being "wrong". Now I'm pissed that half the mathematical community would have said I was right.
Your maths teacher said that 0^0 should be 0?!
@@MrCmon113 I'm assuming they said it was undefined and expected them to show its limit instead
To be fair, 0^0 *is* an indeterminate form when the 0s are limits, and it doesn't have to be 1
@Michael Darrow lim x->0+ 0^x is 0
@@gamerdio2503 To be fair, the limit of a function at a point does not necessarily equal the function at that point. That's sort of the point of limits: they specifically describe the behavior of a function around a point, but have no bearing whatsoever on that point itself.
I generally just use the idea that an empty sum = 0 and an empty product = 1: the identity corresponding to the operation.
Nice!
@Garrett Hager nice
@@Marchclouds It has been a while since I learned calculus, but what I remember about indeterminate forms is that when you run into one it doesn't mean "the correct answer is 'indeterminate'" but rather that you need to find a way to calculate it that avoids the indeterminate form.
Hmm... My uncertainty in saying that bothers me; perhaps I should relearn calculus. Anyway, the important part is that you can calculate the determinate answer to an equation even when there is a path where an indeterminate form gets involved.
@@Marchclouds Your calculation is 0/0 = x, therefore 0 * x = 0, which is a really bad thing to do, because you have just multiplied an "equation" by 0, and that is not an equivalent reformulation of the "equation": multiplying both sides by zero makes it trivially true. I could claim that 2 = 1 and prove it by multiplying both sides by 0: 2*0 = 1*0, but that doesn't mean 2 = 1, does it?
@@CK-nh7sv Still, as 0/0 is by definition undefined, so should 0^0 be.
The meaning of 0^0 depends entirely on the context in which it appears.
In *calculus and limits,* 0^0 is considered an *indeterminate form.* This happens when you’re dealing with functions like f(x) and g(x) where both approach 0 as x approaches some value. In such cases, the value of f(x)^g(x) depends on how the functions behave, which is why 0^0 in this context doesn’t have a fixed value-it requires more information to evaluate.
In *algebra and combinatorics*, 0^0 is typically defined as 1. This is done for practical reasons, like keeping formulas consistent. For example, in the binomial theorem or when calculating the number of ways to choose zero items from zero options, defining 0^0 as 1 avoids unnecessary exceptions and makes things work smoothly.
The confusion arises because 0^0 can mean different things depending on the situation. In some cases, it’s left undefined to avoid ambiguity, while in others, it’s explicitly set to 1 to simplify calculations.
To clarify further, sources like Wolfram state that x^0 = 1 for any x ≠ 0. For x = 0, they treat 0^0 as indeterminate unless it’s in a context, like algebra, where defining it as 1 makes sense.
*Conclusion:*
- In limits, 0^0 is indeterminate.
- In algebra or combinatorics, it’s often defined as 1 for consistency.
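To make the algebra/combinatorics point above concrete: if you expand (x + y)^n term by term and plug in x = 0, the k = 0 term contains 0^0, so the sum only collapses back to y^n if 0^0 = 1. A minimal Python sketch (the helper name binomial_expand is hypothetical; Python's integer ** already returns 1 for 0**0):

from math import comb

def binomial_expand(x, y, n):
    # Sum of C(n, k) * x^k * y^(n-k) for k = 0..n
    return sum(comb(n, k) * x**k * y**(n - k) for k in range(n + 1))

print(binomial_expand(0, 5, 3))  # 125
print((0 + 5)**3)                # 125 -- matches only because the k = 0 term uses 0**0 == 1

If 0^0 were taken as 0 here, the expansion at x = 0 would collapse to 0 instead of y^n.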
Maybe I don't really understand what you mean, but what you mentioned is also mentioned in the video. It said that 0^0 is not a limit.
If the limits of f(x) and g(x) as x -> a are both equal to 0, then
lim_{x->a} f(x)^g(x) seems to be indeterminate because it has the form 0^0.
But the most important thing is that you shouldn't treat it as a limit of the form 0^0. If every limit is written out with the epsilon-delta definition, then there's no need to remember that 0/0, ∞/∞, 0^0... are indeterminate. They teach us this only because most of the functions we see in real life are continuous; if we know the form 0^0 is indeterminate, then we can deal with it by some other method. But if you use the epsilon-delta definition, you will not see any 0^0 in the proof, so it "perfectly" avoids any 0^0. So with limits you can do everything as usual even if you don't know what "indeterminate" means.
The problem is that 0^0 can't be CALCULATED if we don't define it. As you said, Wolfram says that x^0 = 1 when x is not equal to 0. But Wolfram is just a calculator. It doesn't even tell you why. If anyone created another calculator and said 0^0 = 3.14, wouldn't you doubt it?
0^0 is only controversial; there's no proof saying it should be undefined. It is undefined only because we don't define it. Maybe undefined is a good choice for its definition, but I trust that defining it to be 1 will do more good than harm.
At least half agree the undefined has won.
Did you watch the video? All of this was said already.
Fun Fact: if you take the limit of x^x with x approaching 0, it will at first appear to be getting closer to 0, but will actually increase, and approach 1.
The turning point is at exactly x = 1/e
But only from the right (positive numbers).
@@joseftrogl6565 Works with negative values because the real part goes to 1 and the imaginary part goes to 0.
@@txorimorea3869 There is no imaginary part for this limit; lim x->0^- x^x = -1
@VincentNN lim x→0⁻ x^x = lim x→0⁻ e^(x·ln(x)) = lim x→0⁻ e^(ln(x)/(1/x)) =
e^(lim x→0⁻ ln(x)/(1/x))
From the indeterminate form -∞/-∞ = ∞/∞ we can apply L'Hôpital:
e^(lim x→0⁻ (1/x)/(-1/x²)) =
e^(lim x→0⁻ -x²/x) =
e^(lim x→0⁻ -x) =
e^(-(-0)) =
e^0 = 1
Where did the imaginary numbers come in?
If I'm not mistaken, I think they're used to check that lim x→0⁻ of ln(x) = -∞.
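For anyone who would rather check the fun fact above numerically than wrestle with L'Hôpital, here is a small sketch (plain Python, sample points chosen arbitrarily) showing the dip to a minimum at x = 1/e and the climb back toward 1 as x approaches 0 from the right:

import math

for x in [0.9, 1 / math.e, 0.1, 0.01, 0.001]:
    print(f"x = {x:.4f}   x**x = {x**x:.6f}")
# x = 1/e gives the minimum value (1/e)**(1/e) ≈ 0.6922;
# as x shrinks toward 0 from the right, x**x climbs back toward 1.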
zero doesn't even sound like a word anymore
The entire video be like
zir
I like how, every time he writes, he looks disappointed
😂
I make a similar face, but when I'm looking at my math grades.
@@Forbidd3n19 😂 Get some help buddy..
I also like that
@@Forbidd3n19 he probably writes it normally but mirrors it in editing
this was the first argument i've ever seen that actually convinced me. it finally clicked! great channel!
yo what's up, what are you doin here
@@noscolludimus ya mom-
@@OrionYTP bro lost all his intellectual aura😭
I originally accepted 0^0 = 1 based on the Desmos plot of y = x^x.
But when you brought in the binomial theorem and taylor series expansion of e^x, it made more sense.
Great!
Actually, Desmos doesn't have an exact graph.
Desmos says 0^0 is 1 not because it's inaccurate, but because that's just how many common programming languages implement exponentiation.
Last time I checked x^x was undefined at x = 0 on Desmos, but if you look at the limit (so take x = 0.01), it's pretty obvious that 0^0 -> 1. If you put in additional mathematical evidence like the binomial theorem, then it's a strong case that it equals 1.
@@henrytang2203 It's just a limit, not a function output. We must not say 0^0=1 before it is defined.
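For what it's worth, you can check what a typical language returns without Desmos at all. This is just the convention most implementations follow (shown here in Python), not a mathematical argument:

import math

print(0 ** 0)              # 1   (integer exponentiation)
print(0.0 ** 0.0)          # 1.0 (float exponentiation)
print(math.pow(0.0, 0.0))  # 1.0 (the C-library pow convention: pow(0, 0) returns 1)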
I actually was kind of convinced 0^0 should be equal to 1 earlier, but now it makes more sense thanks to you
You bet!
Using the real numbers only and the limit of x^x where x approaches 0, 0^0 is one.
@@legendgames128 0^0 IS equal to 1, but the limit of x^x as x goes to 0 has nothing to do with it; you can only prove it from definitions
*0⁰ = 1 Proof*
(This is going to ignite an argument...)
*TL;DR: Using 0 as an exponent is an empty product. Empty products always output 1, regardless of the input.*
Using a capital pi to define integer exponents in both directions (±), x⁰ always results in an empty product (upper bound less than lower bound), which evaluates to 1.
Here are basic formulas for integer exponents (positive/negative integers) using capital Pi.
bⁿ = Πₖ₌₁ⁿ b
b⁻ⁿ = Πₖ₌₁ⁿ (1÷b)
When n is 0, this results in an empty product for both equations.
b⁰ = Πₖ₌₁⁰ b = 1
b⁰ = Πₖ₌₁⁰ (1÷b) = 1
*This even holds true when b and n are both 0.* (Undefined values are overridden by the empty product.)
*0⁰ = Πₖ₌₁⁰ 0 = 1*
*0⁰ = Πₖ₌₁⁰ (1÷0) = 1*
The same thing happens with factorials of natural numbers and 0!.
n! = Πₖ₌₁ⁿ k
0! = Πₖ₌₁⁰ k = 1
(Posted as a regular Comment on the video as well)
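The Πₖ₌₁⁰ idea is easy to mirror in code: a loop whose upper bound is below its lower bound never runs, so the running product stays at its starting value of 1. A rough Python sketch (the helper name int_pow is hypothetical):

def int_pow(b, n):
    """Product of n copies of b, built up from the multiplicative identity."""
    result = 1              # empty product
    for _ in range(n):      # range(0) is empty, so the body never runs when n == 0
        result *= b
    return result

print(int_pow(2, 3))  # 8
print(int_pow(5, 0))  # 1
print(int_pow(0, 0))  # 1 -- the loop never executes, so the base 0 is never even touched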
after hearing "zero" so many times, it doesnt even sound like a word anymore
The name for that phenomenon is "semantic satiation". You're welcome.
@@nick012000 "phenomenon" not that word again
zero
@@nick012000 thank you fellow lifeform
jamais vu
Knuth's opinion is that if the exponent is viewed as an integer then 0^0 = 1 (because we don't have to worry about a bunch of annoying special cases as you noted) but if the exponent is viewed as a real number 0^0 is undefined (because the function x^y has an essential discontinuity at x = y = 0).
i've never heard of numbers being controversial. and yet, here we are.
If I remember correctly, there was some Greek guy who was killed for saying the square root of 2 was not a rational number.
Many things in mathematics are controversial (mostly with teachers and students, and not actually with mathematicians). For example, you will find that too many people disagree with the equation 0.(9) = 1, despite the fact that the equation is irrefutable. Indeed, you may even be one of those people yourself. But this is not controversial with mathematicians. Mathematicians all agree unanimously that the above equation is true. Teachers and students are the ones who disagree.
666
Ask a group if they believe 0.9999~ = 1.
@@SerunaXI Oh lord.
I've never heard 0^0 being undefined. In school, I was taught 0^0=1 and my teacher even gave us the explanation you gave at the end. 0^0=1 easily makes the most sense to me lol.
My case was the opposite, I was taught it was undefined at school, and then when studying Engineering I was told it was equal to 1 (for practical purposes).
No, it really depends on the case, just as with infinity. For example, let's consider the following divergent product: 2*2*2*… = p
Then p must be 2*p, right?
p = 2p | solving for p
p = 0
Oops, something happened here. Never play with infinity and 0, they're not good numbers.
@@ガアラ-h3h Aah! I didn't understand it. If 2*2*2*... = p, then 2^(infinity) = p, not 2*p = p
@@mathsphysnexus No, what I wrote is correct, sir. p is the product, so two times p must satisfy 2p = p; since p is infinity, it doesn't change it.
But 0 also satisfies the equation
It is undefined because it can lead to various things which are uniquely bad
The more I watch, the more I’m convinced it’s 1/2
That's not how math works
bro is gonna get a nobel prize
That's not how math works
but 1+1-1+1-1+1-1...=1/2, so you can do that
@@jamesolatunji5 That is not legal in mathematics.
I think of 0^0 as multiplying by zero zeros. If I multiply any finite number by any non-zero quantity of zeros, I get zero. But if I multiply it by no zeros at all, I am not changing it. So the answer must be the multiplicative identity.
Regardless of how you feel about 0^0, I hope you enjoyed the video! To my understanding, there is not a 100% consensus on the definition of zero to the zero power and many (highly qualified) individuals will view 0^0 as undefined. This video is my view on what the logical and intuitive definition of 0^0 should be.
A lot of time and research went into making this. If you enjoyed it, I would really appreciate a 'like' on this video. If you didn't enjoy it, I would appreciate a 'dislike' and a comment with your critiques. Thank you all very much for watching!
While it is true that there are highly qualified mathematicians who will say that 0^0 is undefined, from what I can find in the literature, *most* mathematicians agree that 0^0 = 1, and there are plenty of mathematicians who would argue a consensus does exist. This, in itself, is indicative of there being a consensus, even if that consensus is not as strong as it could be. It should also be noted, though, that the consensus is stronger now than it has ever been.
@@angelmendez-rivera351 Yes, the consensus now is stronger than it ever has been. A lot of the "highly qualified" individuals who disagree are thinking in terms of the old arguments that this very video debunks. Why? Because they heard those arguments when they were mathematically naive, and then they never thought about the topic again.
I would argue that it is indeterminate. Indeterminate is not the same as undefined.
Indeterminate forms are both indeterminate and undefined (without context). Something like 5/0 on the other hand is undefined, but not indeterminate.
This is the definition I use. Graph the function y=x^x. The limit as x-> 0 is 1
@@FurryCombatWombat Yes. This means that x^x is actually right-continuous at 0.
In some cases it’s useful to consider 0^0 indeterminate, or rather, encountering 0^0 means that your current approach isn’t going to yield any useful information. If substitution gives you 0^0 it doesn’t necessarily mean your limit is 1.
Correct; it doesn't mean your _limit_ is 1. But limits are not the same thing as arithmetic. So this isn't a compelling argument to say that 0^0, as an arithmetic expression, should be undefined.
@@MuffinsAPlenty exactly! x^y is discontinuous at (0,0) so the limit at that point does not equal the function value there.
I have never seen so many people so passionate about math. It's nice to see people care so deeply about math.
Yeah, what were the odds?
Suppose x^x = 1
xln(x) = 0
x = 1 or x = 0
for x = 0 to be a root, then lim(x->0) xln(x) must equal 0
using L'Hospital's rule, lim(x->0) xln(x) = lim(x->0) -x = 0
Therefore, 0^0 = 1
>> for x = 0 to be a root, then lim(x->0) xln(x) must equal 0
No?? That's not how it works. And by the way, you just proved that x = 0 cannot be a root of xln(x) = 0, because ln(0) doesn't make sense. It doesn't matter that the function approaches some value
I remember in one of my math classes (not set theory but something similar enough) the definition of x^y was "the number of functions from a set of size y to a set of size x", and from that 0^0 is just how many functions there are from the empty set to itself, which is 1 (some kind of "empty function" with no output but also no input space)...
Shalom (hello)
Funny, but it's way more general than that. x doesn't have to be a number.
@@MrCmon113 of course
But for any two finite sets of the same size there's an equivalence (a bijection), so it's easier to talk about actual numbers...
@@roiepeles7831 Salaam (hello)
x doesn't have to be a number, because we don't have to work with sets and functions. This holds in any closed category with initial and terminal objects; we can show that the unique internal hom object [0,0] from the initial object to itself is equivalent to the terminal object 1. Neatly, the same argument shows that [x,x] will always be non-zero; there's always at least one such arrow, the identity arrow, inhabiting that hom-set.
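If you like the counting definition, it can be brute-forced: a function from an m-element set to an n-element set is a choice of one output per input, so enumerating all of them gives n^m, and the empty-to-empty case yields exactly one (empty) function. A small sketch in Python (count_functions is a hypothetical helper, with nothing category-theoretic about it):

from itertools import product

def count_functions(m, n):
    # Each function from {0..m-1} to {0..n-1} is an m-tuple of chosen outputs.
    return sum(1 for _ in product(range(n), repeat=m))

print(count_functions(2, 3))  # 9 = 3**2
print(count_functions(0, 3))  # 1 (the empty function into a 3-element set)
print(count_functions(0, 0))  # 1 (the single empty function from {} to {}), i.e. 0**0
print(count_functions(1, 0))  # 0 (no functions from a nonempty set into {}), i.e. 0**1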
.......... I’m 22 years old and this was the first time I ever felt like I truly understood exponents. Wow.
The education system is garbage.
Look at Khan Academy for more math topics explained in this style!!
@@the11382 Agreed
@@KaneYork no
this guy is ass at math
In competitive programming, I've mostly encountered problems where assuming 0^0 = 1 has worked in the formula (if my memory serves me right) and given me the correct answer, but I've also encountered one scenario where 0^0 = 1 would lead to the wrong answer, so I think it's best to keep it undefined.
This. 0^0 is undefined because sometimes 1 makes sense, and sometimes 0 does. It doesn't really matter which is more common.
What was the scenario?
No, this is completely false. You did not encounter a scenario where 0^0 = 1 led to the wrong answer. You encountered a scenario where someone made a mistake elsewhere, and instead of correcting the mistake, they incorrectly blamed the failure on 0^0 = 1.
@@gildedbear5355 No. There is literally no scenario in mathematics where 0^0 = 0 makes sense. Literally none. Stop spreading misinformation. I am tired of you people doing this nonsense.
@@gildedbear5355 We need actual instances where 0^0=0 makes sense, preferably a series that requires it in the same way that the exponential and binomial series in this video require 1
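For what it's worth, the standard binary-exponentiation routine used in competitive programming hands back 1 for 0^0 automatically, because the accumulator starts at the multiplicative identity. A rough sketch (plain Python; the optional modulus parameter is my own illustration, not something from the comments above):

def power(base, exp, mod=None):
    """Iterative binary exponentiation; returns base**exp, optionally modulo mod."""
    result = 1                      # multiplicative identity, i.e. the empty product
    while exp > 0:
        if exp & 1:
            result = result * base if mod is None else (result * base) % mod
        base = base * base if mod is None else (base * base) % mod
        exp >>= 1
    return result

print(power(3, 4))  # 81
print(power(0, 5))  # 0
print(power(0, 0))  # 1 -- the loop never runs, so the identity is returned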
An approximation I devised obeying the involution (-x + 1)/(2x + 1) for pairs of values in x, is x^x = (x^2 + x +1)/(-2x^2 + 4x + 1). It is the special case a = 2 in (-x + 1)/(ax + 1), respective ((a^2 - 1)x^2 + (3a +1 - a^2)x + 3)/((a-a^3)x^2 + (a^3 + 2a)x +3). For example it gives (1/4)^(1/4) = (1/2)^(1/2).
By the way, 0^0 must be defined if you want to include tetration in our number system. According to the arithmetic-geometric conversion, any 0's get converted to 1's, as 0 is the arithmetic identity number, and 1 is the geometric identity number. The operations get increased by 1 hierarchical order during an arithmetic-geometric conversion. Therefore, 0^0 is equal to 1 tetrated to 1. However, failing to define 0^0 causes tetration base 1, 0, and negative numbers to become undefined as well, including 1 tetrated to 1. However, 1 tetrated to 1 is just a power tower of 1's with 1 entry, which is just 1. Therefore, 1 is undefined. All integers can be multiplied by 1, and then all integers are undefined. All rational numbers are ratios of integers, and each integer can be multiplied by 1, and the rational numbers get lost in the black hole of undefinedness. Irrational numbers will eventually fall, first the square and cubic roots, then pi and e, and finally the complex numbers until the entire number system gets annihilated except 0. And finally, 0 will accept its fate of being undefined as being a product of 1 and 0, and the entire Mathsverse will collapse.
If 0^0=1, then 1 tetrated to anything is equal to 1, including fractional and irrational numbers. 0 tetrated to anything is 1 if the tetrating number is even, and 0 if the tetrating number is odd. Negative numbers tetrated to anything are:
Defined for all integer values if the negative number, written as a fraction, has both an odd numerator and denominator
Defined for integer values to 2 if the negative number, written as a fraction, has an even numerator and an odd denominator
Defined for integers to 1 if the negative number is either irrational or has an even denominator.
-1 tetrated to anything equals -1 if n is not 0 and is equal to 0 if n is -1, and 1 if n is 0.
I don’t.
@@perfectman3077 Even if you don't, tetration still exists.
@@nbooth Well, the limit as the functions f(x) and g(x) as x goes to 0 doesn't have to be 0.
bro wrote a thesis
Ok bruv, so you’re tellin me that if I multiply nothing by nothing no times, I get one thing?
Video starts at 4:33
You’re a prick
This argument relies on the axiom 1⋅x = x (multiplicative identity property). This applies to any real number x. Is 0^0 a real number? The axiom may not be applicable in the manner suggested.
A lot of math relies on 0^0 being 1, only some of which was brought up in this video. The reality is that the only math in our common axioms that suggests 0^0 should have a value, all agrees that the value should be 1.
This man has the magical ability to write backwards. Incredible. 1:09
Or you flip the video.
He's writing with his left hand so statistics say that's the answer ^^
@@maykstuff oooooooh
@@maykstuff No wait, isn't he writing with his right hand because the video is flipped? I feel nauseous...🤢🤯
@@maykstuff also his ring finger is on his right hand here
0:10 I think 0^0 is 0 or 1
Wrong, it’s infinity
@@nathancheese8645 Wrong, it's indeterminate
@@Mike_Rottchburns Wrong, it's an integer
@@zaviyargul Wrong, it's zero
It's zero
I always thought 1 was logical, but could never explain... thanks for this!
Happy to help!
The real way you start an argument with just a number is just by mentioning 0.999... and 1 in the same breath.
Video on this coming out next week!
@@BriTheMathGuy Have you met Neoicon Mint yet? If not, get ready to meet Neoicon Mint.
@@MuffinsAPlenty No I'm not familiar
Nobody who actually knows mathematics would find that controversial lol
@@OrchidAlloy And yet many teachers and college students disagree with 0.(9) = 1, much in the same vein that many teachers and students disagree with 0^0 = 1, even though no well-educated mathematician would find the latter controversial, provided that you give a definition for exponentiation.
Brilliant, need to show my students this!
I always approached it as letting f(x) = x^x, and testing ever-decreasing numbers to see from raw calculation the limit as x --> 0.
This is really good in how you've conceptualised indices with the "1 (multiplied by)" as an invisible initial operation.
I’ve been thinking about this for days but I can’t wrap my head around it: PLEASE TELL ME HOW YOU RECORD THESE VIDEOS😭🙏🏻
I literally can’t sleep thinking about it. Did you seriously learn how to mirror-write??
I'm sorry I caused you trouble!! I write on a piece of glass normally (not reversed). After I'm done recording, I flip the image horizontally in video editing software. You may now go to bed :)
@@BriTheMathGuy omg THANK YOU. Me and my father literally had a discussion about it and it was actually much easier than we had imagined: we like to make things harder than they actually are hahaha
It's called a light board, if you want to learn more or see other youtubers use them. They're great for combining hand-written lecture notes with projected powerpoint displays!
@@BriTheMathGuy please can you make a separate video for it, it's necessary
@@BriTheMathGuy this would also explain why you're writing with your left hand in the video, but are, presumably, right handed
It's fantastic how many advanced mathematical functions, like exponents and factorials (and maybe others), treat operations on 0, like 0^0 and 0!, as 1.
Truly fascinating, keep these vids up!
Me who failed Precal: I like your funny words, magic man
Hahah yeah, I'm not great with it AND didn't learn math in English, so this is a doozy for me :D
I've found that good teachers make all the difference. For something you'd be easily able to access, I think 3Blue1Brown's calculus series is fantastic and highly recommend it, as he does a great job using visual representations of concepts and not getting hung up on rigor.
Excellent explanation. I use the definition of multiplication which states that multiplication must be between two factors, and then you use parentheses and multiplicative zeros to make it work, but your explanation, although less stringent, is quicker to reach the message. Keep up the good work!
Even with limits most limits of consideration will lead to 1 rather than 0 (how useful is the function 0^x?), so it is perfectly natural for it to be defined to be 1 using limits as an intuition. The problem with defining 0/0 is that there are many different relevant limits and they can approach really any value, so a consistent definition doesn’t make sense in that case.
0 / 0 = Aleph-Null.
Type z=x^y into a 3D graphing program. At the point where x and y are 0 it's crazy.
This happens because the function is discontinuous at (0, 0). However, this does not imply it is not defined for (0, 0), as the video explains.
@@angelmendez-rivera351 Except that f:R×R→R, (x,y)→x^y isn't defined for (x,y)=(0,0)
@@jadegrace1312 *Except that f : R*R -> R*R, (x, y) -> x^y isn't defined for (x, y) = (0, 0).*
You are wrong on multiple grounds. 1. The function that you defined above is nonsensical, since x^y is necessarily a real number, not an element of R*R. 2. f : (R+)*(R+) -> R, (x, y) |-> x^y, where R+ = {z is real : z = 0 or z > 0}, is perfectly well-defined at (0, 0), and as the video discussed, f(0, 0) = 1. This is not debatable.
@@angelmendez-rivera351 You're right, I didn't mean to write R×R for the codomain. Regardless, f(0,0) is undefined. It does not equal 1.
@@jadegrace1312 No, you are wrong. It does equal 1. You can choose to believe otherwise, but it is demonstrably wrong. I can prove that it is wrong. Look
z^n is defined as the product where z appears as a factor n times. This is the definition everyone uses. This is how you get that 2^3 = 8, because by definition, 2^3 := 2·2·2, since this is the product where 2 appears as a factor 3 times. This is how you get that 3^1 = 3, because 3 appears in the product only as a factor 1 time, so the product is the factor. This is how you get that 5^0 = 1, because it is the product where 5 appears as a factor 0 times, and since it appears 0 times, the product is empty, and so it is equal to 1. This is also how you get 0^0 = 1, because this is the product where 0 appears as a factor 0 times, and since it appears 0 times, the product is empty, and so it is equal to 1. This can all be presented rigorously, but I do not have a keyboard for mathematical symbols.
If you want to extend the definition for real exponents, then you can, and you still get the same results in the special case that said real exponent is natural. Consider that z^n = lim exp[n·log(x)] (x -> z) is always true when z is real and nonnegative, given how I defined z^n. Then for real nonnegative z and real y, I define z^y = lim exp[y·log(x)] (x -> z) whenever this limit exists. Actually, this definition even works when z and y are complex. Anyway, 0^0 = lim exp[0·log(x)] (x -> 0) = lim exp(0) (x -> 0) = exp(0) = 1. The only time the definition will not work is when z = 0 and Re(y) < 0, or Re(y) = 0 and |Im(y)| > 0.
0:45 That little disclaimer "In most contexts" made me laugh, because this is a point that could definitely start an argument.
But the "in most contexts" thing is generally nonsense, because contrary to what people like to say, there literally does not exist an scenario where 0^0 = 1 is false. None. No mathematician has ever been able to present such an scenario without someone else showing they made a mistake.
There do exist contexts in which 0^0 != 1. These contexts usually aren't very useful and tend to exist for the sole purpose of arguing against 0^0 = 1. For all _practical_ contexts that I'm aware of, 0^0 = 1.
@@angeldude101 That is what I am saying, I think it is funny that it is anticipated someone in the comments is going to start an argument about how this is not the case for the sake of arguing it is not the case.
@jash21222 *lim (x -> 0-) 0^x fails to exist...*
Well, this assumes that 0^x exists for x < 0, which is not the case. Keep in mind: lim f(x) (x -> p, x in S) = L is defined by the proposition that for all real ε > 0, there exists some real δ > 0, such that for all x in dom(f), if 0 < |x - p| < δ and x is in S, then |f(x) - L| < ε. If you analyze this by letting dom(f) = [0, ∞), S = (-∞, 0), f(x) = 0^x, and p = 0, then you will notice that, for all x in [0, ∞), x is in (-∞, 0) is false. Therefore, the material implication is vacuously true for all real numbers L, which means that, in a matter of speaking, a limit _does_ exist, and is not unique. Every real number limit is a valid limit here.
*lim (x -> 0+) 0^x = 0 is reason enough to leave 0^0 undefined.*
No, it is not. At best, it proves f is discontinuous, which I never disputed.
a^b = e^( b(ln|a| + i*arg(a)) )
Substituting in 0, we get
0⁰ = e^( 0(ln|0| + i*arg(0)) )
ln|0| is -∞, and 0 * -∞ = Ø, so instead:
let 0⁰ = lim x→0 e^( 0(ln|x| + i*arg(0)) )
For a principal value, let -π
What
Proof by I said so lol
@@ToshimonsterRealm i said it's not a proof
I subbed to you mainly because of this, I don’t like it when people, especially high school teachers like mine, say “there is no reasonable definition for 0^0”. You gave a clear explanation.
Education in mathematics is a lot like an oral-written tradition: it is passed down and inherited essentially by word of mouth and word of book. Yes, mathematical consensus exists, and yes, peer-reviewed publications exist, but mathematics education completely ignores this and only relies on those very indirectly. In high school and undergraduate school in colleges, teachers do not discuss peer-reviewed publications in the classroom, and they do not even really mention the idea of consensus. They rely heavily on textbooks, some of which are good, some of which are not, and on the curricula that they themselves designed, and which are significantly affected by what they themselves were taught by their own teachers. The way education works today is just a very elaborate, obfuscated, convoluted word-of-mouth tradition accompanied by writings. This explains very well why there is such a disconnect between "what teachers told me is true" and "what mathematicians actually hold to be true (in context)."
Having said this, I suspect that the reason teachers keep telling their students that 0^0 is undefined, despite being demonstrably false, is that they are having a conceptual misunderstanding regarding how functions are defined and how limits are defined rigorously, and the distinction between evaluating a function at a point, and evaluating the limit of a function near a point. Studies do demonstrate that functions and limits are two of the most poorly understood mathematical topics by high school teachers and students, possibly the two worst understood. Personal experience also supports this hypothesis (although obviously to a rather small extent, since anecdotes are not statistically significant evidence), because I know of too many instances in which teachers have verbatim told students "if you want to find the limit of this function, as x -> c, then you should substitute the point x = c into it," which is literally and explicitly prohibited by the definition of limits. This, to me, indicates just how poorly understood limits are among high school teachers, and among some undergraduate college teachers. Even on YT, you can find plenty of videos of teachers doing this. The fact that they teach this only reinforces the student's already mediocre understanding of limits, and makes it worse, rather than better. Some of those students go on to become teachers, and then they teach their students whatever nonsense they ended up learning. Of course, this is not true of every high school teacher, but it is common enough to be a legitimate concern that needs to be addressed.
Anyway, because limits are so poorly understood in the education system, they probably confuse the idea of the indeterminate form (->0)^(->0) with the arithmetic operation 0^0. They often treat them as the same thing and use them interchangeably, so much so that they even use the notation 0^0 when referring to the indeterminate form, which is just strictly incorrect, because the indeterminate form is a limit expression, not an arithmetic operation, and so it should be denoted with a limit. The fact that this is an old controversy does not help the case, because all it does is make some people's incorrect understanding of the topic feel validated and legitimized by their ancestors who got it wrong too, like Cauchy, for example. All of these factors together are probably what explains why the idea that 0^0 is undefined, despite being contradicted by the mathematical consensus and the literature, and by the simple mathematical logic itself too, really, is so commonly taught in schools. This is why I always say that education for mathematics needs a significant overhaul. Division by 0, 0^0, square roots and nth roots, the order of operations, the definition of the domain of a function, calculus in general, and many other topics in mathematics have been taught so unbelievably poorly for so many decades now, that even the mathematics teachers are wrong, and these topics need to be approached differently, and be taught better.
@@angelmendez-rivera351 , by the way, what are problems with square root and nth root? And what can be taught poorly about function domain?
@@angelmendez-rivera351 I couldn't agree more. Teachers follow the rule that if explaining something a certain way will help the students solve questions, and help the students pass who don't have the ability (or simply don't try) to understand the complicated concepts, then that's how they'll teach it. They don't care enough to actually tell them the truth, and this perpetuates as you said, and now it has gotten to the point where many teachers themselves are misinformed
@@ВладДрезельс Teachers fail to tell their students that there is a conceptual difference between the roots of a polynomial, and the nth root functions evaluated at a number. This distinction is not harmless at all, because it confuses students and it leads them to believe that, for example, the symbol sqrt(9) is -3 and 3, as opposed to being just 3, with the latter being the correct answer. It causes problems when students have to solve exercises where they solve an equation or evaluate an expression when those contain radical symbols. If you follow different mathematics channels on YouTube and look at videos on the topic, you will see just how common this misunderstanding is. The vast majority of people will say that "9 has two square roots, -3 and 3, so sqrt(9) = +/-3," or if they accept that sqrt(9) = 3, then they will say "well, 9 has two square roots, but only the positive one is denoted by sqrt(9)." Even Wikipedia will make mistakes concerning this.
Teachers have this super bad habit of assigning classwork exercises or homework exercises where they ask things like "what is the domain of sqrt(4·x - 5) + 7?" or "what is the domain of 1/(x^2 - 5·x + 4)?" Expressions have no inherent domain, so the question is nonsensical. The domain is something that comes specified with the definition of a function; it is not a property of a function that you can figure out a posteriori. This also gets me to my next point: there are studies that show that many teachers do not actually understand what functions are. Teachers are able to give intuitions, they are able to give examples of functions (in a poorly stated manner), and they know about things like the "vertical line test." But they are unable to define what a function is correctly, and they are also unable to explain what the requirements are for a function to be invertible, or even what a surjection is. This explains why the concept is taught so poorly. A function f is a binary relation from set X to set Y such that, for every x in X, there exists exactly one y in Y such that (x, y) is in f. Notice how, in order to define a function f, you need to define the domain of f in its definition. This is crucial for understanding how inverse functions work.
@@angelmendez-rivera351 Ok, I see; yeah, at my school there were things similar to what you describe. However, I wanted to comment on this last phrase of yours about the square root:
“If they accept that sqrt(9)=3 they will say that 9 has two square roots, but only positive is denoted as sqrt(9).”
I agree this is terminologically incorrect, because the nth root is a function (or unary operation), so it is nonsense to say "there are two roots." But don't these people just mean "there are two numbers whose square is 9," and that we took only the "positive" part of the x^2 parabola to "invert" it? I mean, I'll repeat: correct terminology does not work like that, but if a person is speaking like this you just need to tell them: a number whose square is 9 is not necessarily the square root; it's a root of the equation x^2 = 9. The square root is just the positive number whose square is 9. It seems to me that this terminology confusion may not be so crucial and could be fixed by one remark.
And also about domains: again, this is not how serious adults write in papers, but a slight change of formulation makes the phrase "find the domain of «insert expression»" absolutely correct: just say "determine the set of values which, when substituted in for x, leave the expression defined" (this is just too long to say, I suspect).
I mean, some people may not see these subtle differences, but sometimes people just have a spoken and written jargon which shortens and optimizes communication. For example, in my country one never writes the square of the sine of 2x as (sin(2x))^2. We mostly write sin^2(2x) (in fact we omit the brackets as well; the power 2 looks small on paper but not on a keyboard😭). Yeah, this expression is not literally the sine squared of 2x, but in context everybody understands it perfectly (people with deep or superficial knowledge alike).
All I wanted to say is that sometimes you may confuse real misunderstanding with innocent jargon whose influence on understanding approaches 0.
5:07
*Vsauce music intesifies*
Or does it
Also, if you graph y=x^x and look at what happens when you go from the right towards 0, you can see that it seemingly goes down towards (0,0) but actually starts curving up to (0,1).
Yes, in the real-number plane, x^x indeed approaches 1 from both the positive and negative sides.
You might say, well then, it's 0^0 = 1, but no: in the complex plane this breaks down. So 0^0 ≠ 1 unless you're working with only real things.
Source: Numberphile
2:30
bro considering he's actually writing BACKWARDS in his perspective shows how much work he put in this
Great discussion. Could also have looked at x^x as x -> 0+ as well.
This also works from the negative too! I was thinking that this is a much simpler answer.
But 0^0 could actually be two *different* zeroes. For example, if you took the limit of (e^(-1/x))^x that's still 0^0 but you get 1/e.
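That example is easy to sanity-check numerically. A small sketch (plain Python; note that for very small x the inner exp underflows to an exact float 0 and the computation degenerates, which is why the sample points stop at 0.01):

import math

for x in [0.5, 0.1, 0.01]:
    val = (math.exp(-1.0 / x)) ** x   # a "0^0-shaped" limit as x -> 0+
    print(f"x = {x}:  {val:.6f}")     # each is ~0.367879, i.e. 1/e
# Two quantities both tending to 0 can combine into something other than 1,
# which is exactly why the *limiting form* is called indeterminate.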
I've always thought of taking something to the zero power as dividing it by itself. With 0, that would give us the indeterminate form (not undefined) as 0/0 which only has values that can be applied in specific contexts by evaluating limits. Personally, I think 0^0 only works in context
Yeah, that's also how I feel about it!
There are other comments here that say it depends on context, but weirdly none of them actually _name_ a context. They only cite personal feeling or some incomplete memory.
@@muskyoxes ... That's because it's random formulas. Context might be f(x) = (x+2)(x+7)/(x^2-4). Here, there's a hole at x = -2, and plugging in would give 0/0. But by analyzing the limit, you see the hole is at y = -5/4. So in this specific context, 0/0 = -5/4
@@AntonioDoukas Yeah, there are real problems with 0/0 and thus i agree it's undefined. I haven't seen any real problem with 0^0.
@@muskyoxesthe density of an object is given by the formula, D=m/v, as the mass and volume both approach 0, the density also approaches 0.
One of many examples
Glad I came across this video, when I asked my teacher in AP Calc (I think it was him) what 0^0 was, he asked me, "What's 1^0, what's 2^0, what's 3^0... so what do you think 0^0 is?"
And that's how I just accepted 0^0 = 1 as the answer.
Cool to see more information about this. Granted, when you mentioned series I died internally; I didn't get that part of calc- LOL, I'm not good at Calc.
That's not a good answer. Sadly, the teachers that teach in the K-12 system are often horrible mathematicians. 0/0 is both indeterminate and undefined as the value can be just about anything, depending upon how precisely you got that 0/0. Sadly, this video is missing some pretty basic knowledge which results in the argument not following in any reasonable way. 0! is most certainly not 0. It's 1.
You can just do the same thing for 0^0 = 0
"Whats 0^3, whats 0^2, whats 0^1... So what do you think 0^0 is?"
5:40 But if you are saying that we don't divide by 3 to continue the pattern, and instead we multiply 1 by 3 one fewer time at each step, then how would you define negative exponents? Your pattern will just stop at 3^0. So the pattern really does continue by dividing by 3, so that all the properties of exponents keep working for all kinds of exponents. So 0^0 is indeed undefined, but we use 1 in certain contexts because it just works fine there.
As a tutor, I would point out to my students that defining 'raising a number to a power' as 'multiplying *_that number_* times *itself,* that many times' -- which is how most people learn it in school -- is actually *not* a correct definition.
7² is not 7 multiplying *itself* twice: that would be 7 (× 7) (× 7), [where the action/operation of 'multiplying by n' is indicated here as (× n) ], which is 7 multiplied by 7 once, and then multiplied by 7 again to make 'twice', but obviously this is incorrect as there are a total of three 7s instead of just two!
Instead, the more correct way of saying it is 'multiplying *_one_* times the number, that many times', so 7² = 1 (× 7) (× 7) = 1 × 7 × 7, which = 7 × 7, since 1 multiplying any number is just that number itself. See, the 1 is usually omitted because it's easier to write fewer symbols, but it's still part of the definition: omitting the 1 is just a shortcut!
Therefore, whenever you are raising a number to a power, say nᵏ, ask yourself, "How many times do I multiply *_1_* (!! extra emphasis in your head to break old/wrong way of thinking) by the number n? Ah, k times!" Then the resulting expression is "1 (× n) (× n) ... (× n)" with "(× n)" written k times. Thus, if k = 0, then you write "(× n)" 0 times, i.e. you don't write it at all! So, arithmetically, n⁰ = 1 .... and that's it, you don't write any "(× n)"s, just n⁰ = 1, and you're done!
And thus, 0⁰ = 1 .... and that's it, you don't write any "(× 0)"s, just 0⁰ = 1, and you're done!
Bonus: This also explains the whole 'negative powers' thing: The symbol n⁻¹ (or, alternatively, as ⅟n) is actually *just a symbol,* and it means 'the multiplicative *inverse* of n, whatever that might be'. And all numbers in a field -- *except* for 0 -- automatically have a multiplicative inverse defined for them, such that n × n⁻¹ = 1, by definition of what a multiplicative inverse is (it's the thing that multiplies the number to make 1). But there isn't one for 0 (because no number multiplies 0 to make 1), i.e. it is literally 'not defined', aka 'undefined', i.e. "0⁻¹" is an undefined expression.
But when the 'multiplicative inverse' *does* exist, it is usually a *different number* than the original number (except that 1 is its own inverse). And so, when you 'raise a number to a *negative* power' what you're really doing is *just* raising the number's *inverse* (which, again, is its own number) to a *positive* power'. In other words, 7⁻² is not 'dividing 7 by itself twice' (which is just wrong, and further illustrates why the common 'definition' of exponentiation doesn't make sense), nor is it even 'dividing 1 by 7 twice' (although the result of that would be equivalent; it is *not* the definition); instead, it is 'multiplying 1 by the *inverse of 7,* namely 7⁻¹, twice'. Or, more succinctly, it is raising 7⁻¹ to the power of 2. Which = (7⁻¹)² = 1 (× 7⁻¹) (× 7⁻¹) = 1 × 7⁻¹ × 7⁻¹ = 7⁻¹ × 7⁻¹.
Technically, you need a (very simple) theorem to show that n⁻¹ = 1 / n (provided n ≠ 0). And *then,* once you've got that little theorem under your belt, it finally makes sense to talk about 'negative powers' as 'dividing by the number', so the above becomes 7⁻² ≡ (7⁻¹)² = 1 × 7⁻¹ × 7⁻¹ = 1 × (1/7) × (1/7) = 1 (/ 7) (/ 7), which now can be read as 'dividing *_1_* by 7 twice'. Which, by the way = 1 / (7 × 7) = 1 / (7²), which is where the exponent rules come from, like n⁻ᵏ = 1 / (nᵏ) ... provided *_n ≠ 0_* (!! mental emphasis!).
Incidentally, this is why (n⁻¹)⁰ = n⁽⁻¹ ˣ ⁰⁾ = n⁰ = 1, *except* when n = 0, *even though* 0⁰ = 1, because we *started* with n⁻¹, which would have been 0⁻¹, but 0⁻¹ is undefined, and so (0⁻¹)⁰ is also undefined! Order of operations matters!
Again, these are results from the earlier, more basic definitions involving the operation of repeated *multiplication* and the existence (except for 0) of multiplicative *inverses;* they are *not* stand-alone definitions of negative exponentiation themselves.
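Here is a small code sketch of that "start from 1" reading, using Python's Fraction so the negative-exponent case stays exact (the helper name int_power and the use of Fraction are my own illustration, not the tutor's):

from fractions import Fraction

def int_power(n, k):
    """1 multiplied |k| times by n (k >= 0) or by the inverse of n (k < 0)."""
    factor = Fraction(n) if k >= 0 else 1 / Fraction(n)  # 0 to a negative power fails right here
    result = Fraction(1)
    for _ in range(abs(k)):
        result *= factor
    return result

print(int_power(7, 2))   # 49
print(int_power(7, -2))  # 1/49
print(int_power(0, 0))   # 1 -- no factors are ever written down at all
# int_power(0, -1) raises ZeroDivisionError, matching "0 has no multiplicative inverse".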
Very nicely put. This is also why, when the exponent n is a natural number, I always use the English phrase "the product with n instances of the factor z" to define z^n, rather than saying "the product with z multiplied by itself n times." The latter is just a semantic mistake, and a lot more confusing to visualize, which is why people fail to understand that z^1 = z and z^0 = 1 are consequences of the definition itself, not separate definitions we impose afterwards.
Note 1: (-1) is its own multiplicative inverse as well. In fact, 1 and (-1) are the only real numbers that are their own multiplicative inverses.
Note 2: You don't need to say 7²=1x7x7 if you define the power of a number x to a natural number n as the product of n factors of x. x to the 0 would be the empty product which is the multiplicative identity, which is 1 in the real numbers.
Rest seems to be just fine.
So my loosely knit definition of 7^2 is: 2 sevens in the equation and a couple of arithmetic signs, so 7^2 = 7x7, with 2 sevens in the equation. It's correct, right?
However, is there any proof that ‘one times the number, that many times’ is a real thing? If it isn’t, but it seems like the better definition, would using that definition complicate other formulas, math problems, etc.?
@@jaydenaleung
It pretty much lies in the definition of multiplication. Basically, multiplication has a neutral element that won't change the outcome when multiplied with any other object. In the field of the real numbers this neutral multiplicative element is 1. So not only 1xa=a for any real number a, but also 1x1xa=a and 1x1x...x1xa=a.
Furthermore an empty operation will always return the neutral element. So an empty product will always return 1.
Me who got a C in high school math: “ok”
If I type "0^0" into my phone's calculator I get "undefined, or 1". Talk about controversial lol.
On iPhone calculator I get “Not a number” 😂
Interestingly, doing this again three years later I get "0⁰ is ambiguous"
Regarding series, it's a useful convention that 0^0=1 in the context of series, because it allows formulas to be written more compactly, not needing to split off the k=0 term from the rest of the series. But this doesn't necessarily mean that 0^0=1. Similarly, it's a useful convention that an empty product is equal to 1, as it allows you to avoid stating separate cases for when sets are empty.
Ultimately, I'd say that any value you give to 0^0 is purely definition, and can be useful or not depending on context. It's not something that is proven using other definitions.
I just mentioned the empty product proof on R/learnmath, and got the response below. My Calculus Teacher agrees with the response.
"Indeed, in most contexts.
When doing modern algebra or discrete math, 0^0 = 1 is the best choice. Especially for polynomials, since they can be thought of as objects of the form ∑ (a\_n)x^(n), which yields 0^0 when evaluating at x=0.
When doing more continuous math (e.g. real analysis, topology, differential geometry) though, it's safer to say 0^0 is undefined rather than creating special cases everywhere (e.g. in theorems like l'Hopital's Rule applied to exponentials)"
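To make the polynomial remark above concrete, here is a minimal Python sketch (poly_eval is a hypothetical helper): evaluating a polynomial ∑ aₖxᵏ at x = 0 should return the constant term, and that only works term by term because the k = 0 term is a₀·0^0:

def poly_eval(coeffs, x):
    # coeffs[k] is the coefficient of x**k
    return sum(a * x**k for k, a in enumerate(coeffs))

print(poly_eval([7, 3, 2], 0))  # 7  -- the constant term; the k = 0 term is 7 * 0**0
print(poly_eval([7, 3, 2], 2))  # 21 -- 7 + 3*2 + 2*4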
Man, these math nerd videos always lose me at 3:14
I'm a tauist, therefore I quit at 6:28
The number of times i have heard 0 in this video... it has made zero not sound like a word now🤣
😂😂
The Laplace Transform of f(t)=t^n is F(s)=n!/s^(n+1). Our concern is 0^0, so t=n=0. So, F(s)=0!/s^1=1/s. Inverse Laplace of 1/s is 1. Also, initial value theorem states that lim f(t) as t goes to 0 is equal to lim s*F(s) as s goes to infinity. Well, s*F(s) = s*1/s, which is just 1, so lim 1 as s goes to infinity is still 1. Therefore, 0^0=1 for both methods.
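The n = 0 case of that transform pair is easy to sanity-check numerically: the integral of t^0 · e^(-st) from 0 to infinity should come out to 0!/s = 1/s. A rough sketch assuming SciPy is available, with s = 2 chosen arbitrarily:

import numpy as np
from scipy.integrate import quad

s = 2.0
value, _err = quad(lambda t: t**0 * np.exp(-s * t), 0, np.inf)
print(value)  # ~0.5, i.e. 1/s, consistent with F(s) = 0!/s**1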
I'm not entirely convinced there are no similarly natural cases which benefit from defining 0^0 = 0.
You are correct. There are no cases where 0^0 = 0 would actually turn out to be beneficial. Also, the definition of exponentiation itself contradicts 0^0 = 0.
@@angelmendez-rivera351 I actually meant there sure are such cases. Anyway, there's a whole Wikipedia article on the topic which I think provides a more complete view of it:
en.wikipedia.org/wiki/Zero_to_the_power_of_zero
In general I think it only makes sense to define 0^0 = 1 where it helps. There is a whole family of functions where it makes sense to define 0^0 as some arbitrary number to make the function continuous, so I don't think there's a point in taking 0^0 = 1 as a universal definition.
@@kasuha *I actually meant there sure are such cases.*
If that is what you meant, then you are wrong. Such cases do not exist.
*Anyway, there's a whole Wikipedia article on the topic which I think provides a more complete view of it.*
Throughout my education, I have read this article more than 10 times at different points in time. One thing that you absolutely did not notice about this article is that it explicitly states a few times that mathematicians always choose to either define 0^0 as 1, or leave it undefined, which actually proves my point. There genuinely is no situation where mathematicians choose to define 0^0 to anything that is not 1. In fact, the same article also states at one point that... there is consensus about there being a mathematical consensus that 0^0 = 1. It makes my point even better.
*In general, I think it only makes sense to define 0^0 = 1 where it helps.*
There is not a single situation where it does not help.
*There is a whole family of functions where it makes sense to define 0^0 as some arbitrary number to make the function continuous*
No, it does not make sense, because several theorems in analysis demonstrate that those functions would still be discontinuous at that point even if it was defined at that point. For example, 0^x would still be discontinuous at 0 if 0^0 = 0, because 0^x for x < 0 is undefined.
Also, the fact that you are making this argument demonstrates to me that you completely missed the point of the video, and Bri said in the video rather explicitly that using limits as an argument for undefining or defining 0^0 is completely invalid and fallacious.
@@angelmendez-rivera351 "There is not a single situation where it does not help."
Sorry, but you're wrong about it. For the simplest example, consider the function f(x) = 0^x
@@kasuha I literally just refuted that case. 0^x would still be discontinuous at 0 because 0^x for x < 0 is undefined.
This is a very nice video. Probably the best video on 0^0 currently on YouTube!
I wonder if, in your research, you came across the topic of the empty product but didn't feel your audience was "ready" for it? Your "multiply both sides by 1" argument seems to be emulating the empty product without going all the way.
I 100% agree with you, both on this being probably the best video on YT on the topic, and on his argument emulating the empty product idea without fully going all the way.
Thanks very much!
I did come across the empty product argument but I wanted to give what I thought was the most simple/intuitive way to get to the result.
Thanks for watching and commenting!
@@BriTheMathGuy I agree that the empty product can be a bit of work to motivate. But I think it's the best way to have this argument. Personal opinion, though!
The rest of this post will be long, and it will be my take on the whole 0^0 thing from a very broad perspective. I know plenty of people won't read this, but I would love to say it anyway. Maybe it will help others.
I don't think the argument for 0^0 = 1 used in this video quite hits the point (though I think this video points out the common flaws people use against defining 0^0 well). I consider the best argument to be a much more general one - 0^0 = 1 because it is an instance of the empty product. Empty operations are _incredibly_ useful. It can clean so many things up, and is a nice unifying theory which explains a lot of seemingly strange conventions, actually making them results, rather than conventions.
When we allow for empty operations, 0^0 = 1 is true for the same reason that 0! = 1, which is true for the same reason that x^0 = 1, which is true for the same reason that 0*x = 0, which is true for the same reason that the empty set is the basis for the 0 vector space, which is true for the same reason that units aren't primes in rings (and why 1 isn't a prime number), which is true for the same reason that the 0 ring isn't an integral domain, which is true for the same reason that a topology must include the empty set and the set itself as open sets, which is true for the same reason that the degree 0 component of the tensor algebra of an R-module is the ring R itself, and so much more. Now, you're free to disagree with any of these things as being "true". You could change the definitions if you want. But these are things we have found to work very nicely, and they can all be explained/motivated with the same basic reasoning - the associative property is _morally_ about extending a binary operation to an operation on finite sequences, and that if empty operations are to be consistent with associativity, then the empty operation is the identity of that operation. So if you set up your mathematical framework to allow for empty operations which are consistent with associativity, then none of these things are "special conventions" anymore - they follow from the basic definitions of things like exponentiation, multiplication, addition, union, intersection, tensor product, etc.
Empty operations also make things so much clearer. With allowing arbitrary finite products, the Fundamental Theorem of Arithmetic can be cleaned up from, "Every integer greater than 1 is either prime or can be written uniquely as a product of primes, up to order of the factors" to "Every positive integer can be written uniquely as a product of primes, up to order of the factors". It really boils the statement down to the heart of the matter.
So there's an extremely general, useful principle, and this principle implies 0^0 = 1. This principle also explains why we know 0^0 = 1 will always give the "right" answer when 0^0 crops up in discrete formulas (including polynomials and power series and the binomial theorem) - because these formulas rely on the associative property of multiplication, which is the same thing the empty product relies on. (The associative property is also the _critical_ property that gives us the exponential rules!)
Now, analysts may say, "0^0 doesn't work well with continuity", and sure, that's a true statement. So analysts are free to have 0^0 undefined in contexts where they care about continuity, if they please. But this shouldn't be seen as the "general situation". Undefining 0^0 because of continuity should be seen as an exception that is made out of convenience. Because it is a matter of convenience, not necessity. And Bri explains this well when explaining why "0^0" being indeterminate as a limiting form is perfectly consistent with 0^0 = 1. So why should this one particular instance of convenience be seen as the general rule when, in every other context, 0^0 = 1 works perfectly and has a very nice theoretical framework?
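As a small illustration of that cleaned-up Fundamental Theorem of Arithmetic statement, here is a sketch assuming sympy is available: factor a positive integer and multiply the prime powers back together; for 1 the factorization is empty, so the rebuild is the empty product.

from math import prod
from sympy import factorint

def rebuild(n):
    # Multiply the prime-power factors back together; for n = 1 this is the empty product.
    return prod(p**e for p, e in factorint(n).items())

print(sum([]), prod([]))           # 0 1 (empty sum and empty product)
print(factorint(12), rebuild(12))  # {2: 2, 3: 1} 12
print(factorint(1), rebuild(1))    # {} 1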
@@MuffinsAPlenty Well, honestly, to be fair, while I agree with your point, properly motivating the idea of nullary functions would take a whole series of videos, which would be well outside of what this video can encompass.
This is actually one of the reasons I think the video ISN'T very good. He completely ignores the logical result being an empty product set, to insert the *1 option, when you could similarly do just about anything (+0 if you wanted the other result). Then further proceeds to say the "better" calculators are handling it this way.
To claim those calculators are "better" is completely false. In mathematics, the empty product does often become 1, and that would lend credence to his theory, but in programming, the actual answer to an empty product is neither 1 nor 0, but null. The reason these programs spit out 1 or 0, or even undef (shorthand for undefined), for 0^0 is that it is an easier form of error handling, as a fetch command involving zero would cause an error.
Unfortunately this video assumes the conclusion for much of its premise.
I came up with this solution (not about 0^0, just talking about how exponents work in general) while in class talking about exponents, and I'm so glad it actually holds in fact and makes sense :)
tbh reindexing the power series or adding restrictions to the binomial theorem actually makes way more sense than arbitrarily changing the definition of exponentiation
Great video! The end result sounds like an explanation that a 6th grader might come up with because of how simple and straightforward it is.
3:46 why does he look so sad while writing math though 😭
Here's the problem though: x^x has no limit as x approaches 0, because the positive side converges to one and the negative side converges towards negative one (when defined).
???
(0⁻)^(0⁻) = 1. Try (-0.001)^(-0.001) in a calculator; maybe you forgot to include the leading minus sign in parentheses.
@@PalladinPoker this is undefined still
@@joeym5243 But we need to define it. "Undefined" is just a code word for saying, "Screw this challenge, I'm turning back." This is very bad, as it states that you are fearful and afraid of challenges. This is the exact opposite of the goal of humanity. Humans are meant to break away from nature using self-awareness, conscience, willpower, and imagination. This is why mankind managed to establish a civilization that sets us apart from all animals. We 21st-century humans must thank our long-gone ancestors by breaking away even more to make them proud. Einstein left in his will saying the first person that uses his theory of relativity to invent time travel must travel back to April 17th, 1955, to make him proud.
"Undefined" is basically stating we are not used to those numbers, so let's just not use them. It all depends on context. If we were living in Minecraft, a world without circles, and all of a sudden a circle randomly appeared out of the blue, we would call it "undefined", but since in our world we have polar coordinates, the premium package with the spherical bundle, we are accustomed to seeing circles, and we won't call them "undefined". Also, a long time ago, people worshipped the moon like a god at an "undefined" distance away from us. They believed the sky's the limit, and that everything they saw in the night sky was a pure celestial sphere of light at an "undefined" distance away from us, with the Earth as the point where those "undefined" distances converged, but we managed to reach the moon and even send space probes outside our solar system, even attempting to reach the end of the universe, making such distances not "undefined" anymore.
Finally, infinities are everywhere. Without them, the Big Bang wouldn't have happened, and every time you move, infinities are required to make it happen. Infinities created us; don't disrespect them by calling them "undefined". Divide by 0, spread your wings, learn how to fly, and do the impossible. We need infinities to make our dreams of time travel and superpowers come true.
Since a certain subset of nerds just love defining operations with no practical relevance, I've decided to inject some logic into the process. Any value of x that is raised to the power of n is expressed in the context of multiplication n times. So x squared is "x times x", x to the power of 1 is "x", and x to the power of 0 would therefore be expressed 0 times in the context of multiplication and thus would be "" (null). It really doesn't matter what x is.
You got me before mentioning the 0^1=0^(2-1). I immediately knew you were going to say that, even while you were still writing 0^(1-1). I think the empty product is what got me in the end.
Still, what happens with 0^z for complex z? I'm especially concerned with z being negative real or close to a negative real.
0^0 = 1, 0^z = 0 if Re(z) > 0, 0^z is undefined otherwise. This is easy to argue for, too. 0^z = 0^[Re(z) + Im(z)·i] = 0^Re(z)·0^[Im(z)·i]. If Re(z) > 0, then 0^Re(z) = 0, and 0·0^[Im(z)·i] = 0 regardless of what you may expect 0^[Im(z)·i] to be. If Re(z) < 0, then the expression is trivially undefined, because you have to divide by 0. If Re(z) = 0 but |Im(z)| > 0, then 0^Re(z)·0^[Im(z)·i] = 0^0·0^[Im(z)·i] = 1·0^[Im(z)·i] = 0^[Im(z)·i]. 0^[Im(z)·i] is considered to be undefined, since the only real way to deal with imaginary exponents is via b^(i·t) = exp[i·t·log(b)], but log(0) is undefined.
Hey there, Lorè.
@@gian2kk yo dawg.
There are infinitely many places where x^y is undefined. The issue here is that 0^0 is not necessarily one of them.
Because 0^0=1, 0^-1 (if defined) would need to be a solution to 0x=1, but this equation has no solutions in real or complex numbers, therefore 0^-1 must remain undefined.
(Note that to keep this argument valid, I use only multiplication, not division. That is because division by zero is not allowed in a valid argument. Because it would make the entire argument itself invalid, division by zero can't be a step in any argument meant to establish that a value is undefined. We have to use other methods, such as multiplication, instead.)
@@waynemv Well, there is a solution of 0^-1, and that is infinity.
Just because 0^0 is notationally convenient for series doesn't make it defined. All those sums, like in the binomial theorem, that have 0^0 can perfectly well be written so that the term containing 0^0 is not part of the sum and is considered separately. Like you said, you only need to redefine the sum. You don't really need 0^0 for the concept to work. It's just that the formula is not so pretty without it.
The issue is that if you redefine that one series, you break entire families of related series. The series is defined the way it is for a very good reason...
The Taylor series of the exp function around 0 has an infinite radius of convergence, and is continuous everywhere. That's something you can prove without ever plugging in any values...
If you plug in 0 you need to get 1, because that's what every sequence exp(z_n), with z_n in C approaching 0, converges to.
If you consider the first term separately and leave 0⁰ undefined, you break almost every Taylor series, and you break the entire concept of continuous functions. That's not something you want to do.
That is not the same thing. Analyzing the origin of the x^0 function is not the same as saying the number 0^0 is undefined.
You can look at the limit of the f(x) = x^0 from the left and right and have a well defined value for f(0). That is completely different than saying you can have a well defined value for the expression 0^0.
For mathematical rigor, what you should write for the first term f(0) of the Maclaurin series of e^x is the limit as x->0 of x^0/0!
That limit is well defined as 1.
But if the function was 0^x, the limit would be 0.
What you are talking about has to do with the subject of removable discontinuities. I admit my real analysis is quite rusty and I'm only an engineer, but that is how I learned things in calculus 101.
@@Alkis05 the issue is the following:
You can determine if a function is continuous based on some quite technical characteristics of said function; it's entirely overkill, but it works and it's rigorous.
If it is continuous, then its limits tell you what the function evaluates to at any given point; if it isn't, they don't.
Now, when you have a continuous function that evaluates to 0⁰ at any point, the limit will always be 1, and never anything else. There are plenty of discontinuous functions where that isn't the case, but they do not matter, because their limits don't necessarily allow you to deduce anything about what the function actually evaluates to at any point (limits do not commute with discontinuous functions).
So unless you want to break a lot of math, you do not want to have 0⁰ be anything other than well defined and equal to 1.
@@msq7041 Are you saying that 0^x is not a continuous function? Is sqrt x a continuous function?
@@Alkis05 It depends on the sets you define them over: if you define sqrt x over C, it's not continuous;
if you define them over R+, then sqrt x is continuous but 0^x isn't.
Thank you for your videos :) Another simple example of 0^0 = 1 is polynomials - the constant term is c = c x^0, regardless of what x is
6:00
That would work also with other numbers like 5:
5*3^3=5*3*3*3
5*3^2=5*3*3
5*3^1=5*3
5*3^0=5
so 3^0=1 (divide both sides by 5)
2:22
'...back in calculus...'
Ugh....I'm going to lock my door and take off my 'Big Boy' pants :- /
Yes, 0^0 can be 1 in some cases but not always. You can get functions in calculus where the limit of a function approaches 0^0, but if you try to rewrite the function, usually by using L'Hôpital's rule, you can get just about any other number. So, 0^0 is actually an indeterminate form in calculus, just like infinity-infinity, 0/0 or infinity/infinity. Regardless, you can actually still define the super-square of 0 to be 1 because the limit of the function x^x as x approaches 0 is actually 1, so it might be correct in that sense.
*Yes, 0^0 can be 1 in some cases, but not always.*
No. Stop spreading misinformation. There is no situation in which 0^0 = 1 is false. None. Every example you can think of is an example where you made a mistake elsewhere and you just did not realize it.
*You can get functions in calculus where the limit of a function approaches 0^0, but if you try to rewrite the function, usually by using L'Hôpital's rule, you can get just about any other number.*
No, no such scenarios exist. The problem is that you are failing to understand that if lim a = 0 and lim b = 0, this does not imply lim exp[ln(b)·a] = 1. This has nothing to do with the arithmetic expression 0^0, which appears nowhere in the evaluation of this limit, not if you evaluate it correctly. The only reason it appears in this limit is because people incorrectly write exp[ln(b)·a] as b^a, and because they then incorrectly say lim exp[ln(b)·a] = 0^0, which is false. Consider this: suppose for a minute that b^a = exp[ln(b)·a] really is true (and it is not true, but more on that later). If lim a = lim b = 0, then lim exp[ln(b)·a] = exp[lim ln(b)·a], since exp is a continuous function. However, it is false that lim ln(b)·a = lim ln(b)·lim a, and the reason this is false is because if lim b = 0, then lim ln(b) does not exist: ln(b) is a diverging sequence. So you cannot say exp[lim ln(b)·a] = exp[lim ln(b)·lim a] = exp[ln(lim b)·lim a] = (lim b)^(lim a) = 0^0, which is what you are doing here. But you also cannot say b^a = exp[ln(b)·a] to start with, because b^a is defined for arbitrary real b and integer a, while exp[ln(b)·a] is defined for positive real b and arbitrary real a. These are different expressions. If b is positive real and a is an integer, then the two expressions happen to be equal, but they are defined on incompatible domains, and the domain in which we are evaluating those limits is only compatible with the domain of exp[ln(b)·a], not the domain of b^a: you cannot freely vary a to be a real number, since a is an integer, and the integers are isolated points of the real numbers.
*So, 0^0 is actually an indeterminate form in calculus just like infinity-infinity, 0/0, or infinity/infinity.*
Cursed be Cauchy for spreading the myth/hoax of indeterminate forms, and cursed be the education system for mathematics, for not correcting this large mistake. There is no such a thing as an "indeterminate form." This is nonsense. If you consider the function f : R*(R>0) -> R defined by f(x, y) = exp[ln(y)·x], then you can say that (0, 0) is a non-removable singularity of f. If you consider the function g : R*R -> R defined by g(x, y) = y - x, then you can say that (♾, ♾) is a non-removable singularity of g. If you consider the function h : (R\{0})*(R\{0}) -> R defined by h(x, y) = x/y, then you can say that (0, 0) and (♾, ♾) are non-removable singularities of h. You can say these things, but these things have absolutely nothing to do with, and have no relationship to, the associated arithmetic expressions 0^0, ♾ - ♾, 0/0, and ♾/♾, that people mistakenly attribute to these singularities. 0^0 = 1. Period. End of discussion. ♾ - ♾ and ♾/♾ are undefined, since there exists no possible elementary algebraic structure you can assign to Union(R, {-♾, ♾}), and 0/0 is undefined, because 0 has no multiplicative inverse. That is all. There is no such a thing as an indeterminate form or an indeterminate algebraic expression. You will never see a mathematician even pretending that these are real things in an academic written work, and I honestly have no clue why the education system chooses to continue insisting on spreading that particular myth around. The calculus curriculum, at a worldwide level, clearly needs some urgent overhaul.
*Regardless, you can actually still define the super-square of 0 to be 1, because the limit of the function x^x as x approaches 0 is actually 1, so it might be correct in that sense.*
No, this would be incorrect. The function j : R>0 -> R defined by j(x) = exp[x·ln(x)] does indeed satisfy lim j(x) (x -> 0) = 1. But that does not imply 0^0 = 1. What it does imply is that if we continuously extend j to [0, ♾), then j(0) = 1, but we have no reason to think that the continuous extension of j to 0 should be given by 0^0. This is absurd, and it betrays a fundamental misunderstanding of how limits and continuity and functions in general work. Simply put, the expressions f(p) and lim f(x) (x -> p) should not be treated as the same expression, ever. There is no context in which this is allowed, except as a literal abuse of notation, or except when you have already proven a priori that f is continuous at p, but this requires that f be defined at p and specified at p, which you cannot use a limit to do, unless f is already defined as the limit of some other function. They are different expressions, in general, with completely different definitions.
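A quick numerical sketch of the distinction being drawn here (the path exp(-1/t) is just one illustrative choice, and t is kept moderate to avoid floating-point underflow):
```python
import math

for t in (0.1, 0.05, 0.01):
    # Along the path (base, exponent) = (t, t), the values head toward 1.
    # Along (exp(-1/t), t), both base and exponent still go to 0,
    # but the values sit at exp(-1) ≈ 0.3679 the whole way.
    print(t**t, math.exp(-1/t)**t)

print(0**0)  # 1 -- the value *at* the point, which is a separate question from any limit
```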
@@angelmendez-rivera351 I tried to follow along but got stuck at why exp[ln(b)*a] cannot be rewritten as b^a, the other stuff I could (loosely) follow
@@PhiDXVODs Okay. Let me make this as simple as possible. I think you and I can agree that (-1)^2 is well-defined, and is equal to 1. But exp[2·ln(-1)] is undefined. So that is the first example that b^a is not actually equal to exp[a·ln(b)]. I think you also can agree that 0^2 is well defined, and is equal to 0, yet exp[2·ln(0)] is undefined. So that is the second example. In conclusion, the equality b^a = exp[a·ln(b)] is false.
@@angelmendez-rivera351 "There is no such a thing as an "indeterminate form."" Um... isn't _literally all of calculus_ just solving different versions of 0/0 and 0*∞? The definition of the derivative is lim d->0 (f(x+d) - f(x))/d. If you pretend indeterminate forms don't exist and just substitute, you get (f(x) - f(x))/0 = 0/0. You need to pretend that d isn't 0 until the very end in order to actually solve it.
The only way to get around this is to abandon real numbers and use infinitesimals. Then you'd have Re((f(x + 𝛆) - f(x))/𝛆). Tada! No indeterminate form! And all you had to do was claim that there exists a number infinitely close to 0 but not 0.
You are correct however in that these aren't equivalent to 0^0.
@@angeldude101 *...isn't literally all of calculus just solving different versions of 0/0 and 0·∞?*
No. To the contrary, there is never a situation where you must evaluate 0/0 or 0·∞. Calculus is all about evaluating limits. Limits are rigorously well-defined in the context of real analysis, and things like division by 0 or ∞ never appear in problem-solving.
*The definition of the derivative is lim d->0 (f(x+d) - f(x))/d. If you pretend indeterminate forms don't exist and just substitute, you get (f(x) - f(x))/0 = 0/0.*
Well, yes. Letting d = 0 is _not_ the same as letting d -> 0. I never said they are the same thing. Letting d -> 0 does not require division by 0 here. There is no 0/0 to consider.
*You need to pretend that d isn't 0 until the very end in order to actually solve it.*
Correct. So there is never a point where you actually encounter 0/0. You are proving my thesis here.
*The only way to get around this is to abandon real numbers and use infinitesimals. Then you'd have Re((f(x + 𝛆) - f(x))/𝛆). Tada! No indeterminate form! And all you had to do was claim that there exists a number infinitely close to 0 but not 0.*
No. Nonstandard analysis is a perfectly valid way of handling this, but it is not more valid nor superior to real analysis. Your dismissal of real analysis here is ignorant.
2:48 my man is creating his own gender here
It's 1. The definition of x^y where y is a whole number is: 1 multiplied by x, y times. Multiplying 1 by anything 0 times leaves you with 1.
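That definition translates directly into code; a minimal sketch (the function name is just illustrative):
```python
def power(x, n):
    """x**n for a natural number n: start from 1 and multiply by x, n times."""
    result = 1
    for _ in range(n):
        result *= x
    return result

print(power(3, 3))  # 27
print(power(0, 2))  # 0
print(power(0, 0))  # 1 -- the loop body never runs, so the initial 1 is returned unchanged
```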
You absolutely deserve at least million subs!
The best math teacher I've come across so far!
Wow thanks so much!
@@BriTheMathGuy yupp!
Can i finally tell this to my math teacher? Lmao
Yes. Please do.
Don't get me in trouble 😬
I was very resistant to "defining" 0^0 but after watching this video setting it to one makes a lot more sense
We always define empty operations as the neutral element of that operation.
@@MrCmon113 yo your picture is amazing
@@MrCmon113 so, empty sum=0
Empty product=1
Empty exponentiation doesn't exist because there's no neutral element that works for both sides.
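One way to see why the neutral element is the natural value of an empty operation: a fold over an empty list just returns its starting value, and that starting value has to be the identity for the fold to behave consistently. A minimal sketch using Python's functools.reduce:
```python
from functools import reduce
import operator

print(reduce(operator.add, [], 0))  # 0 -- the empty sum is the additive identity
print(reduce(operator.mul, [], 1))  # 1 -- the empty product is the multiplicative identity

# There is no two-sided identity for exponentiation to use as a starting value,
# which matches the point above: "empty exponentiation" has no sensible result.
```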
For 5:40 you say that 1*3^3 = 1 * 3 * 3 * 3
And so on.
You also say that to go an exponent lower, you divide by said number.
If you apply this to zero, you would be dividing by 0.
And it also wouldn't make sense for 0^2 to be lower than 0^0 since 0 is less than 2. So they have to be the same.
Also, in the combinatorial interpretation, x^y is the number of lists of length y of elements taken from a set of size x. Regardless of the set and hence the value of x, we have one list of length 0, namely the empty list.
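This can be checked by brute force; a small sketch using itertools.product to enumerate the lists (the helper name is just for illustration):
```python
from itertools import product

def count_lists(elements, length):
    """Number of lists of the given length whose entries come from `elements`."""
    return len(list(product(elements, repeat=length)))

print(count_lists({'a', 'b', 'c'}, 2))  # 9 == 3**2
print(count_lists(set(), 2))            # 0 == 0**2
print(count_lists(set(), 0))            # 1 == 0**0: the empty list is the one list of length 0
```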
This popular thought that 0^0 is "indeterminate" or "undefined", and how people use limits to jump to apparent conclusions about fixed expressions that are unrelated, like 0^0, is a good example of how limits, or more generally, fundamental concepts in analysis are mistaught. It is apparent that many teachers lack a good understanding of these subjects and fail to communicate the ideas properly, leading to confusion among students. As confusing as these subjects are to a new learner, I believe these misunderstandings could be prevented if it weren't for the current education system (which teaches you more about how to answer exam questions than actually giving you a good foundation for the subjects, generally speaking)
Amen! This is exactly what I have been saying in some of these threads.
The neutral element with regards to multiplication is 1. Meaning that if there are no factors (which means ^0), you get 1.
likewise:
The neutral element with regards to addition is 0. Meaning that if there are no summands (which means *0), you get 0.
Exactly!
Also, evaluate: lim x->0 x^(e^(1/x))
From the left and the right (they differ)
This is going to 0^0
0 is the type of weed that most mathematicians crave for ;p
The set theory explanation seems simpler and more intuitive. There is always exactly one way to put nothing into a set. Raising anything to the zeroth power asks that set-counting question, and putting nothing into the set counts as exactly one way, hence x^0 = 1, even for 0^0, because an empty set is still a set, and there is still exactly one way to fill it with nothing.
I agree, but this is not a definition of exponentiation that is taught in schools, and writing the definition formally requires concepts that are not taught in school either.
@@angelmendez-rivera351 it should be
@@TheZenytram I agree. This is one of many reasons why the education system needs an overhaul.
There is no good reason to not define 0^0 as 1. It just makes sense. It's the empty product of elements that are all 0. Empty products are 1. That being said, 0^0=1 is not strictly necessary for the purpose of series expansion. Technically, a polynomial ring R[X] over a commutative ring R is defined by extending R by an element X, i.e. we add X and all elements necessary to make the new set a ring itself. That way we obtain all the powers X^n of X and the linear combinations of the resulting elements. At this stage X is not a number or representative of one, even though our intuition obviously tells us that a primary intent is for it to represent a number. This means that we can easily define series by working in the polynomial ring R[X] over the appropriate ring R. In that case, X^0=1. To evaluate a polynomial, we just fix the condition that we evaluate polynomials in their "reduced" form, and so we never get 0^0, because that would only occur when we have the power X^0 which "reduces" to 1 before being evaluated. We never technically need 0^0.
And this isn't as clumsy as my explanation suggests; there's nothing fundamentally hard or wrong about it. But the question is: why bother? What are we preserving that is of so much value by not defining 0^0=1 and forcing all these ways to make it work without it? Just make 0^0=1. It makes sense, it works, it's intuitive, and more importantly it's probably simpler for people who already struggle at math to have concrete answers on these things rather than yet another "it's not defined" moment that can be very confusing.
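For what it's worth, this "evaluate the reduced form" idea is also how polynomial evaluation is usually implemented in practice; a sketch using Horner's rule, in which no power of x is ever formed, so 0^0 never arises even at x = 0:
```python
def eval_poly(coeffs, x):
    """Evaluate coeffs[0] + coeffs[1]*x + coeffs[2]*x**2 + ... by Horner's rule."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

print(eval_poly([5, 2, 3], 0))  # 5 -- the constant term survives without computing 0**0
print(eval_poly([5, 2, 3], 2))  # 21 == 5 + 2*2 + 3*4
```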
Using the definition of the "one less 3 every time" instead of dividing by 3 every time we reduce its power by 1, how would that work with negative exponents? Doesn't this kind of break that logic?
Not a criticism, genuinely interested.
Negative whole numbers are not counting numbers. They are defined by way of the notion of "additive inverses." 0 and the positive whole numbers, on the other hand, are natural numbers, and so they can count how many elements there are in a collection. It makes perfect sense to talk about there being 0 copies of an object, or about there being 0 houses in a specified region, but it does not make sense to talk about having -1 houses. z^n, for natural n and z being an element of a multiplicative monoid, is defined as being equal to the product with n copies of the factor z. I can talk about there being 0 copies of the factor z (including z = 0) and it makes perfect sense, and this can be made rigorous using multisets, but it does not make any sense to talk about there being -1 copies of the factor z in the product.
So does this break the logic? Yes, it does, but this is not a problem. Why is this not a problem? Because when you extend this definition so that z^n is well-defined for every whole number n, as opposed to only every natural number n, this is, we are now working with Z, instead of N, then no matter how you choose to extend this definition, the extension MUST include the previous definition as a special case. Otherwise, it is not actually an extension of the definition in question. So regardless of which route I take to define z^n for negative n, this route has to simplify to z^0 = 1, z^1 = z, etc, when n is nonnegative. For natural numbers n and m, the equation z^(n + m) = z^n·z^m is always satisfied. We would like this to be true for every whole number as well, not just natural numbers, so that I can have, for example, z^0 = z^1·z^(-1). Since z^0 = 1 and z^1 = z, it follows that z·z^(-1) = 1. In other words, given how we have chosen to define exponentiation for arbitrary n, z^(-1) has to represent the multiplicative inverse of z, and this is a consequence of how the definition simplifies as a special case when n is nonnegative. This also explains very nicely why 0^(-1) is undefined: because 0·0^(-1) = 1 must be true, in order to satisfy the fact that 0^(n + m) = 0^n·0^m, in this case having n = -m = 1. However, as we know that the equation 0·x = 1 has no solutions in the field of complex numbers, it follows that 0^(-1) is undefined. In light of this, it must be clarified that z^(n + m) = z^n·z^m is true whenever z^(n + m), z^n, z^m are all well-defined.
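A small sketch of that extension (a hypothetical helper, using exact fractions): the nonnegative case is the counting definition, and the negative case is forced to be the multiplicative inverse, which is exactly where base 0 fails:
```python
from fractions import Fraction

def int_pow(z, n):
    """z**n for any integer n, extending the natural-number (counting) definition."""
    if n >= 0:
        result = Fraction(1)
        for _ in range(n):
            result *= z
        return result
    # For n < 0, consistency with z^(n+m) = z^n * z^m forces the multiplicative inverse.
    return 1 / int_pow(z, -n)  # raises ZeroDivisionError when z == 0

print(int_pow(3, -2))  # 1/9
print(int_pow(0, 0))   # 1
# int_pow(0, -1)       # ZeroDivisionError -- 0 has no multiplicative inverse
```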
@@WallCarBoatHead Yes, you are dividing by 3, but the point to understand here is that this needs to be motivated properly. We use division for negative exponents, because it does not make sense to talk about multiplying a quantity by -1 repetitions of the factor 3, so talking about one less repetition is also nonsensical in that case. However, it does make sense for 0 and other natural numbers, which is the point the video tried to get at.
3^0/3 = 3^-1. Not sure what the problem is.
@@innocentsmith6091 This misses the point of the post entirely.
@@innocentsmith6091 that’s circular logic
Isn't the introduction of 1 * 3^0 kind of arbitrary?
What you really have here is a geometric sequence.
For some x^n, we have that t(n-1)=x*t(n).
Suppose we know that some t(n-1)=3^1, x=3, then we simply have 3^1 = 3*t(n) ==> t(n)=3^1/3 = 1 = 3^0. But this breaks down with x=0. t(n-1)=0^1=0 ==> 0 = 0*t(n) ==> t(n)=0/0=0^0. (undef.)
He already debunked this argument at the very beginning of the video. Obviously, if you try to divide by 0, then you will obtain nonsense, but no one told you that you need to divide by 0. Also, there is nothing arbitrary in introducing 1·3^0. It is a fact of reality that 1·x = x is true for every x. If 0^0 has a value, then it will not produce any contradictions if you multiply by 1.
Your reframing of his argument is also wrong. There is no division to be done here. What he is doing is not a recursion, but rather, he is using exponents as a way of counting the number of copies of the base factor in the product, which is, you know, actually the definition of exponentiation in the first place. If there are 0 copies of the base factor 0 in the product, then the product cannot be equal to 0. In fact, since there are no other factors either, the product is empty, hence equal to 1.
By the way, the argument he used is also the same argument that proves that 0! = 1.
@@angelmendez-rivera351
Factorial is defined as follows:
T(n)=n*T(n-1),
T(1)=1! = 1*T(0) ==> T(0) = 0! = 1!/1 = 1.
The argument used to prove 0!=1 is precisely my argument that 0^0=1.
@@BiscuitZombies You don't need to divide by 1 to show that 0! = 1. You can use the empty product argument that Angel Mendez-Rivera mentions as well. With the empty product, 0! = 1 is _immediate_ from the definition of factorial.
Just because you've seen a justification for something in the past doesn't mean it's the one true way to do things. The empty product is a _far_ cleaner and more useful approach than what you've seen in the past.
@@BiscuitZombies I know what you are trying to get at. Division is the intuitive introduction to how we do empty products, but it is not how the idea is made formal in mathematics, which is really my point here.
The video managed to do a good job of emulating the concept without throwing a rigor bomb at the audience, but also without carelessly appealing to division where it does not apply.
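As a concrete check of the empty-product route to 0! = 1 (no division anywhere), here is a short sketch:
```python
import math

def factorial(n):
    """n! as the product 1 * 2 * ... * n; for n = 0 the product is empty, hence 1."""
    return math.prod(range(1, n + 1))

print(factorial(4))       # 24
print(factorial(0))       # 1 -- straight from the empty product, no division needed
print(math.factorial(0))  # 1 -- agrees with the standard library
```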
@@angelmendez-rivera351 My question though is how does his concept lend itself to being the *correct* method of calculating the value of 0^0?
The pattern you used in the end also leverages limits as essentially you are looking at 0 / 0 each time you lower the power. 0 / 0 isn't defined but the limit of x / x as x approaches 0 certainly approaches 1. 0^0 isn't equal to 1.
No, the pattern he used at the end does not leverage any limits, and it does not even use division. The pattern he used at the end is what any reasonable person on planet Earth would call "counting." Yes, this is something that even a kindergarten child can do. You can count how many copies n of the base z you have in the product, and then you can denote that product as z^n. In fact, this IS the *definition* of exponentiation, and I would hope that you know this. So z^0 is nothing but the product with 0 copies of the factor z. Since you have 0 copies of the factor, the product has 0 factors, and the product of 0 factors is equal to 1. Therefore, z^0 = 1, for any z... including z = 0. Did I use division at any point in my demonstration? No, and neither did Bri.
@@angelmendez-rivera351 At 5:18 he quite visibly uses the notation for division to show what he is doing at each step down. Also how would fractional exponents work with this line of thinking? We know that they are perfectly well defined and 0^0 would have to fit that definition as well.
@@theimmux3034 You are ignoring that, almost immediately after that, from 5:24 to 5:32, he explicitly states: "it's not that we are dividing by 3 each time, it's that we're multiplying by one less 3 each time." He did begin by mentioning division, only to clarify afterwards that division is not necessary to actually get there; division is merely the first intuition people resort to, but it does not accurately capture the idea of counting.
*Also, how would fractional exponents work with this line of thinking. We know that they are perfectly well-defined, and 0^0 would have to fit that definition as well.*
The definition for fractional exponents is an extension of the definition for natural exponents, so you are correct that 0^0 = 1 would have to fit such a definition as well, and it does fit it. Every rational number can be expressed as a product t·u, where t is an element of Z, and u is a unit fraction. Given that exponentiation to the power of t would already be defined prior to defining it for fractional exponents, all we need to do is find an appropriate definition for z^u, and then z^(t·u) := (z^u)^t. It should be important to note, though, that (z^u)^t may not necessarily be equal to (z^t)^u, so the order absolutely matters. Now, every unit fraction is equal to u = 1/m =: m^(-1), where m is an element of Z. Given that we want to preserve z^n·z^m = z^(n + m) whenever z^n, z^m, and z^(n + m) are well-defined, an appropriate way of defining it would be to say z^u = z^(1/m) := root(m, z), where root(m, z) is the mth root function evaluated at z. In fact, this is the only way of defining z^q for rational q I have ever seen in my life. 0^0 = 1 still satisfies this definition.
If you want to go further and define it for real exponents, then you can run into problems using the base 0 if you are not careful, but in general, you actually run into problems with multiple bases. While there are ways to define exponentiation for real exponents, they all do involve some ad hoc post rationalization when the base is not a positive real number. So ultimately, you may not even want to use power notation to work with real numbers. This is why in analysis, nearly everything is done in terms of the exp function and the ln functions instead, but even then, this is not proof. For example, defining exponentiation x^y as exp[y·log(x)] does not work for x = 0, not even if y > 0 because log(0) is undefined. So you have a few alternatives: piecewise define x^y so that you can define 0^y separately, but if you do not want to rely on the definition for rational y, then defining 0^y for any y at all is arbitrary; or you use a limit argument that 0^y for y > 0 should be 0 because lim exp[y·log(x)] (x -> 0) = 0 for y > 0. This same argument also results in 0^0 = 1, because lim exp[0·log(x)] (x -> 0) = lim exp(0) (x -> 0) = lim 1 (x -> 0) = 1. However, this basically requires you that you switch your definition to being simply x^y := lim exp[y·log(t)] (t -> x), which many take to be unsatisfactory. So defining exponentiation for real and complex exponents can be done unambiguously, but whether those definitions are satisfying is up to debate. Many mathematicians opt to simply not use powers when they are working with nonrational quantities. However, these definitions all result in 0^0 = 1. I have yet to see a definition that, if correctly stated and applied, does not result in 0^0 = 1.
That is also something I noticed. It is still limits--yes, even if it's talking about removing a multiplication. It still goes from a number above 0 down to 0 to find the value of a function at 0.
That doesn't make it wrong, because, as I argue above, using limits is a standard way to try and find a value for something.
It is, however, due to using limits, a definition rather than a proof. Just like x^0 = 1 is a definition, not a proof. It's an extension of the meaning of exponents. It is a natural definition, but it is a definition all the same, just like analytic continuation.
The main thing is that 0^0 = 1 is useful in sequences, hence it makes sense to define it that way. The funny thing is, the natural limit definition also works in that context, too.
@@ZipplyZane I am on the side of limits too but at the start of the video he threw limits into the trash.
The meaning of power is x^a*x^b=x^(a+b) and x^1=x.
So x^a*x^0=x^a and 0^0*0^1=0^1.
So x^0=1 and 0^0*0=0.
So 0^0=all numbers because any number times 0=0.
0:20
0° = 1 (I don't know how to keyboard powers so I used the degree symbol as exponential 0).
Now to watch the video :P
Here is a ⁰
Here: ⁸¹³³⁷⁴²⁰⁶⁹⁵
For the power definition, couldn’t you technically multiply by any coefficient, since anything times 0 is still 0? Wouldn’t this mean that 0^0 could equal any number and therefore is undefined?
Yes this is completely correct
You can, but you still get 0⁰ = 1. For example, let’s use the coefficient 4.
First we’ll do threes because that’s what’s in the video.
4 × 3³ = 4 × 3 × 3 × 3
4 × 3² = 4 × 3 × 3
4 × 3¹ = 4 × 3
4 × 3⁰ = 4
We can solve this last equation by dividing both sides by four to get 3⁰ = 1.
Now let’s try with 0.
4 × 0³ = 4 × 0 × 0 × 0
4 × 0² = 4 × 0 × 0
4 × 0¹ = 4 × 0
4 × 0⁰ = 4
We can solve this last equation by dividing both sides by four to get 0⁰ = 1.
@@TheBasikShow That assumes that 0^0 is 1 to begin with. If you remove the 4 * on the left side, the right side still gives the correct answer. If the left is 0^0 in this case, the right side would say 4. Nothing has been proven.
@@noahali-origamiandmore2050 Of course this isn’t a proof, it’s a pattern that gives a justification for the definition that 0⁰ = 1. If you want a /proof/ then you first need to explicitly define what exponentiation is, and when you do-surprise, surprise!-you always get 0⁰ = 1.
I’m confused about what you said with removing the 4 from the left side; if you remove the four it becomes a different equation.
@@TheBasikShow No you do not get that 0^0=1 because no such "proof" proves that 0^0=1.
0^4=4*(0*0*0*0)
0^3=4*(0*0*0)
0^2=4*(0*0)
0^1=4*(0)
0^0=4
I don't need to multiply the left side by four because you can see that all lines before the 0^0 line are valid. You can replace 4 with any number and end up proving that 0^0 can be any number. "Taking away" factors is not valid in this case.
But you're probably thinking that it's only valid to use 1 and not 4 because any number times 1 is itself. However, going from 0^1 to 0^0 involved this "taking away" a factor of zero. That's absurd because that's the same thing as dividing by 0, which is invalid.
0^2=0 not because "taking away" a factor of 0^3 gives you 0^2 but because 0^2=0*0. The only reason that you can "take away" factors from nonzero bases is because dividing by nonzero numbers is okay, but to get from 0^1 to 0^0, a division by zero was involved.
WE NEED MORE BRI FACIAL EXPRESSIONS 😍
Great video, btw.
I'll continue to do my best! 🤨😒🤔
Thanks so much!
3:10
Another problem with the limits argument is that the limit of 0^x doesn't actually exist
Sure, from the right it equals 0
But 0 to any negative power -n is 1/(0^n), i.e., 1/0. There is no limit from the left because 0^x is undefined to the left of 0. The 2nd limit in this argument doesn't even exist.
I still don't see how this gets around the limit issue however, that seems genuinely more perplexing?
The limit is an idea. In the same way a line can be discontinuous at certain points, both limits that he drew are approaching different things, but both share the point 1 on the y axis
@@halfcadence1417 Do they really share 1 at the y axis? I thought he showed that one approached 0 and the other approach 1..
@@kanewilliams1653 Consider g(x, y) = x^y (assuming we know what is being meant by x^y in the first place, because this is a discussion that needs to be had). It can be proven that lim g(x, 0) (x -> 0) = 1, and that lim g(0, y) (y -> 0+) = 0. For this reason, you can safely say that lim g(x, y) (x -> 0+, y -> 0+) does not exist. This is the argument being used for declaring that 0^0 must be undefined. But what the video is saying is that lim g(x, y) (x -> 0+, y -> 0+) not existing does not actually imply 0^0 is undefined. So the argument that #TeamUndefined is resting on is completely invalid. In fact, there is no contradiction in saying that g(0, 0) = 0^0 = 1 and lim g(x, y) (x -> 0+, y -> 0+) not existing. Seriously, there is no contradiction. Because the first equation is about what g is *exactly at* (0, 0), while the limit is about what g is *very close to* (0, 0). Do you see how they refer to different things?
Look at it this way. You are familiar with the floor function, right? The floor function, also known as the greatest integer function, is defined for every real number, and what it does is output the greatest integer that is smaller or equal to x. If you want a formal symbolic definition, then floor(x) = n ⟺ n ≤ x < n + 1, where n is an integer. So having established this, you may now ask yourself, what is lim floor(x) (x -> 1)? If you look at the graph on Wikipedia, what you will see is that lim floor(x) (x -> 1-) = 0, but lim floor(x) (x -> 1+) = 1. So clearly, lim floor(x) (x -> 1) does not exist. Now I ask you: should the nonexistence of lim floor(x) (x -> 1) lead you to conclude that floor(1) does not exist? Because I just gave you the definition of the floor function, and if you substitute x = 1 into said definition, then floor(1) = 1. In fact, this is mathematically correct, and no one on planet Earth disputes that this is correct: everyone knows that the greatest integer that is smaller or equal to 1 is 1 itself. But if you then say that lim floor(x) (x -> 1) not existing implies floor(1) is undefined, then you are saying 1 is undefined. Do you see the problem with this logic? What needs to be understood here is that limits, given how they are defined, are about how functions behave *near* a point, not *at* a point, so you cannot use any information about lim floor(x) (x -> 1) to make any conclusions about floor(1). The only thing that lim floor(x) (x -> 1) not existing tells you is that the floor function is discontinuous at 1 (in fact, discontinuous at every integer, if you look at the graph), but it being discontinuous at 1 does not make it undefined at 1. To actually know what floor(1) is, you need to use the definition of the floor function, not limits, in the same way that if you want to calculate 2·3 or sqrt(9) or sin(π), you use the definition of multiplication, the square root function, and the sine function, respectively; you do not use limits. Arithmetic, algebra, and computation exist for this precise reason.
Similarly, if you want to determine what g(x, y) = x^y is when x = y = 0, then you need to look at the definition of the symbol x^y itself, not use limits. Limits can tell you something about whether g is continuous at (0, 0), but they cannot tell you whether it is defined there, and what its value is, if any. So you need to ask yourself, what is the definition of x^y? What is exponentiation? These two questions have clear answers, and those answers do entail that 0^0 = 1. The video explained very briefly near the end why 0^0 = 1 is entailed by the definition, but it did not go into a whole lot of detail about it.
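The floor example is easy to poke at numerically; a tiny sketch of the value-versus-limit distinction:
```python
import math

print(math.floor(1))      # 1 -- the value *at* the point is perfectly well defined
print(math.floor(0.999))  # 0 -- values just to the left sit at 0
print(math.floor(1.001))  # 1 -- values just to the right sit at 1
# The one-sided limits disagree, so lim floor(x) as x -> 1 does not exist,
# yet floor(1) = 1 regardless. The same distinction applies to 0**0.
```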
@@angelmendez-rivera351 Thanks for the detailed reply. I understand the idea that limits approach a function's point and are not actually representative of a function at a certain point. Your floor function shows this well. What I am still confused about is why the definition of 0^0 = 1 appears out of an everywhere-else continuous function where all other values are 0 (in the case of 0^x, as x varies).
The floor function is DEFINED to be such that the piecewise graph is how it is. But exponentiation has no analogous definition.
Instead, the arguments given in the video appear to be that i) It is just convention, because it simplifies some equations we use in higher math (which is itself a circular argument, because were we to use alternative definitions of 0^0 we might have alternative definitions of these further higher-level equations, and ii) because it follows from the "dividing by 3" property which holds naturally for all integers excluding zero, in which case 0^3 = 0, 0^2 = 0, 0^1 = 0, 0^0 = 1 (!), and 0^(-1)=0, and so on.
Do you see my confusion? Hope that makes sense
@@kanewilliams1653 *What I am still confused about is why the definition of 0^0 = 1 appears out of an everywhere-else continuous function where all other values are 0*
I understand what you are getting at, but 0^x is not everywhere else continuous. It is not even defined everywhere. For example, I think you would agree that 0^(-1) is undefined, because 0^(-1) would have to be equal to 1/0, but 1/0 is undefined. However, what you can say is that 0^x is continuous everywhere in the positive real axis, yet right-discontinuous at x = 0 if we accept that 0^0 = 1. And yes, I can see how that can be a little puzzling at first glance, but I think it still makes perfect sense.
*The floor function is DEFINED to be such that the piecewise graph is how it is. But exponentiation has no analogous definition.*
This is true, but this is also because the floor function is a unary function: it takes a single real number as input. Exponentiation is a binary function: rather than taking a single real number as input, it takes a pair of real numbers (x, y) as input, and it gives you an output that we denote x^y. This makes exponentiation more complicated and more prone to having discontinuities, especially because the definition of x^y for real numbers is ultimately an extension of the definition of natural numbers too.
For a second, let us just think about what x^n looks like, when n is a natural number. How is this defined? Well, the exponent is there to *count* how many "copies" of the factor x we want in the product. So if I write x^4, this means that I am denoting a product with 4 copies of the factor x, x·x·x·x. There is a rigorous way of formulating this intuitive definition, but I do not really have a mathematical keyboard so it would be sort of tedious to do so. Anyway, you should be familiar with this idea already, since this is how polynomials work. So when thinking about the case n = 0, you are denoting the product with 0 copies of the factor x, a.k.a., a product with 0 factors. And this may seem to be counterintuitive, and it could even sound nonsensical. What even is a product of 0 factors? Well, imagine this. You can always multiply x^n by c to get c·x^n. Again, you probably have seen this, since this is the idea polynomials are based on. The product c·x^n can easily be seen to be the product where c is multiplied by x exactly n times. So now, the exponent *counts* how many times you apply the action of multiplying by x to the "input" c. Done like this, it is now very easy to justify why x^1 = x and x^0 = 1 have to be true for every x. c·x^1 denotes the product where c is multiplied by x one time, so it is just c·x. Since this holds for arbitrary c, this means x^1 = x. Similarly, c·x^0 denotes the product where you multiply c by x zero times. If you do the multiplication 0 times, that means you just do nothing to c, leaving it unchanged. So c·x^0 = c. Since this holds for arbitrary c, this implies x^0 = 1. But notice that x itself was arbitrary too, so this has to hold for every complex number x. Notice how this includes x = 0. So this implies that 0^0 = 1, and that makes sense given the way exponentiation is usually defined. So this is how you end up with 0^0 = 1 but 0^1 = 0.
Of course, for a variety of reasons, it is useful to extend this to rational exponents or real exponents, and not just work with natural exponents. This is tricky, but it can be done. The idea is that we would like for x^(y1)·x^(y2) = x^(y1 + y2) to always be true whenever the individual parts can be defined, and we would obviously like our definition of x^y to simplify to the definition above when y happens to be a natural number. You may notice that x^n = lim exp[n·ln(t)] (t -> x) is true for every n and every x, and you realize that if you define x^y := lim exp[y·ln(t)] (t -> x) for real y, it does satisfy the properties we want it to satisfy. You may wonder why there is a limit in there, and that is because without the limit being there, not only would 0^0 be undefined, but actually, 0^n would always be undefined for any n, since ln(0) is undefined as well. The limit solves this problem, but some people choose to omit it and take it as an implied-by-context notation to simply write exp[y·ln(x)] instead. This is still somewhat sloppy notation, but okay.
*Instead, the arguments given in the video appear to be that i) It is just convention, because it simplifies some equations we use in higher math ... and ii) because it follows from the dividing by 3 property...*
I completely agree with you on point i). Convenient notation is itself not a proof that 0^0 = 1 is true, it is only an argument that says that 0^0 = 1 is a useful convention, though that does not make it a definition. And in fact, I agree that ultimately, we could just change the notation that is used in the binomial theorem and in the Taylor series theorem and not have to use 0^0 = 1 at all. As for it being a common convention, this is accurate if taken to be just an oversimplification of the debate. There is some nuance and historical context regarding how mathematicians view 0^0 and why it is taken "as a convention," and not as, say, a theorem. But that nuance also involves a debate concerning notation in mathematics too.
As for point ii), I definitely see where you are coming from. As Brian clarified in the comment thread started by Muffins, Brian wanted to essentially emulate the idea of the empty product, but without actually having to rigorously make use of that, and instead appeal to the concept by trying to get the viewer to develop the intuition for themselves with his argument. But the way he presented that argument was a little confusing, and it may just have been better if he had chosen to explain it using an arbitrary constant c instead of using 1 specifically, because using 1 makes people think that the argument has something to do with 1 multiplied by 0 is 0, which is superficially undermined by what some people brought up: that 0 multiplied by any number is 0 too. But if he had done it with an arbitrary constant, then the logic he is trying to use would be more clear.
Awesome video!
I'd like to give my opinion on the argument you gave at 3:50. You mentioned that the binomial formula and the infinite sum of e^x require 0^0 to have a definition to prevent any restrictions like x cannot equal 0. However, I believe that the binomial formula and Taylor series (infinite sum thingy) would work just as well if 0^0 isn't defined as a value. The reason is that the zero in the exponent is defined as an integer. Since it is an integer, it can't get arbitrarily close to zero (it can equal zero, but you cannot take a limit as it approaches zero). Thus, if you fill in x=0 into any of the formulas, you can use a limit to give it a value, namely the limit as x→0 of x^0, which equals 1 (reminder, the reason why I can write a zero in the exponent is because it is an integer).
If you disagree with my reasoning, feel free to leave a reply and discuss.
Your argument is fallacious. What you are essentially saying is that, in the binomial formula, (0 + 0)^n should be replaced with lim (x + y)^n (x -> 0, y -> 0). Doing this misses the point of what limits are. By definition, limits only tell you about the value of a function *near* a point, not *at* a point. At this point, I am repeating myself, because I have had to say this throughout many other comment threads to this video. So limits cannot answer the question of what is (0 + 0)^n, and replacing this with lim (x + y)^n (x -> 0, y -> 0) is not actually valid.
@@angelmendez-rivera351 One can easily show this is continuous everywhere and therefore the limit is actually the value of the function.
@@davidmelo9862 No, because to show that it is continuous everywhere for n = 0, it necessarily has to be the case that 0^0 is defined in the first place.
If you want the binomial theorem to hold for every natural n, and every complex number x and y, then you necessarily need to have 0^0 = 1. Using limits is not going to work, because then it will not hold for every complex number x and y, and it would not be binomial theorem if you had to use limits.
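A quick check of that claim, leaning on the fact that Python evaluates 0**0 as 1 (the helper name is just illustrative):
```python
from math import comb

def binomial_expansion(x, y, n):
    """Right-hand side of the binomial theorem, with no special-cased terms."""
    return sum(comb(n, k) * x**k * y**(n - k) for k in range(n + 1))

for n in range(5):
    # The k = 0 term is comb(n, 0) * 0**0 * y**n; it only comes out right because 0**0 == 1.
    assert binomial_expansion(0, 2, n) == (0 + 2)**n

print("binomial theorem holds at x = 0 for n = 0..4")
```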
@@angelmendez-rivera351 For n=0, the binomial theorem will state that (x+y)^0 = x^0y^0, in which case what I said has to define 0^0 of course as it is the way it is defined in a combinatoric setting.
I agree that 0^0=1, but the argument that "we have to change our whole notation if we say it's undefined" is kinda not true because one can get (0+y)^n = lim(x+y)^n (x->0) = lim(x->0) x^0 y^n = y^n
Which is the case when these have continuity in any meaningful form (n ≠ 0).
Of course there are problems with this, since the binomial theorem is not so much about numbers as it is about commutative structures.
It's not controversial: it is undefined.
That's what real mathematicians say when there are two (or more) different arguments that lead to differing results.
"Controversy" => "undefined".
Please now produce some real maths instead of nonsense
I wrote 0^0 as e^ln(0^0); properties of logarithms state that this is e^(0·ln(0)). What if 0 = 2·0? Then we have e^(0·(ln(2)+ln(0))), which is different from our original but still the same. We also have an ln(0) here, which doesn't help its definition.