@@DrTrefor I'd expect the definition of decreasing to be just as ambiguous as the definition of increasing. Actually, checking Wolfram MathWorld, it makes the same distinction between decreasing and strictly decreasing. So according to Wolfram, f(x)=1 is both increasing and decreasing, and "non-decreasing" would be incorrect
"Non-decreasing" and "strictly increasing" feel like the "increasing function" equivalent of using "non-negative integers" and "positive integers" instead of "natural numbers"
The number n^m counts the number of functions from an m-element set to an n-element set, so you could argue that 0^0 counts the number of functions from a 0-element set to a 0-element set (i.e. from the empty set to the empty set). Any such function would have an empty domain (and range), so the empty set is the only candidate. It also fits all the criteria: it is a relation, since there is no element in it that isn't an ordered pair, and it is functional, since there are no two ordered pairs (x,y) and (x,z) with y =/= z as elements. So there is exactly one such function, which gives another context where 0^0 = 1 is the sensible choice.
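The counting argument above can be checked concretely. Here is a small Python sketch (function name is my own) that counts functions from a domain to a codomain by enumerating tuples of images; `itertools.product` with `repeat=0` yields exactly one empty tuple, which is the empty function:

```python
from itertools import product

def count_functions(domain, codomain):
    """Count functions from `domain` to `codomain` by enumerating them.

    A function is determined by choosing an image in `codomain` for each
    element of `domain`, i.e. by a tuple in codomain^|domain|.
    """
    return sum(1 for _ in product(codomain, repeat=len(domain)))

print(count_functions([1, 2], ['a', 'b', 'c']))  # 3^2 = 9
print(count_functions([], ['a', 'b', 'c']))      # 3^0 = 1 (the empty function)
print(count_functions([], []))                   # 0^0 = 1: exactly one function, the empty one
```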
I firmly believe that we can rewrite a^b as a*a*...*a (b times), assuming b is a positive integer. With that in mind, without anything else, if we send 0 of those a's into the multiplication, we would get an undefined/indeterminate value, but we can just multiply everything by 1:
1*a*a = 1*a^2 = a^2
1*a = 1*a^1 = a
1 = 1*a^0 = 1
I know it is a bit naïve, but hey, 1's are powerful in math
Yeah, it depends on your definition of algebraic priority (is multiplication by juxtaposition prioritized over explicit division?) and how you interpret the awful ÷ sign. And please, use parentheses properly... or use fractions 😀. Overall it's a very badly written question.
Put me in that camp as well, it's the correct answer. I can't think of many greater losses for mathematics than order of operations becoming such a large part of the general public's concept of what math is. There's a reason no one serious uses the division sign, or even, in most cases, the forward slash. Hell, this is also the reason we killed the multiplication sign.
8:00 The interesting part about this graph too is that in wheel theory, where 0 and absolute infinity are considered multiplicative inverses and 1/0 can be defined as infinity, and both positive and negative infinity join at the same point, this graph actually CAN be defined to be continuous even by including 0 in the domain. The simple fact is, what is and isn't allowed in math is defined by the rules you are working with, and it's entirely possible to make up NEW rules any time you wish and perform math with them. Math simply doesn't care about what you do with it, it is merely systems of inputs and outputs defined by rules that you impose on them, like creating an algorithm and executing that algorithm. You want there to be a square root for negative 1? Define it. You want there to be zero divisors? Define them. Do you want there to be a new integer between 3 and 4? Go ahead, as long as you're being consistent with your logic, you can do math with it, your only responsibility is to make it clear what those rules ARE so others can reproduce your results.
Careful, Wheel theory, if I recall correctly, defines division differently than _"a multiplied by b^−1"_ which means that 1/0 does not represent the multiplicative inverse of 0. Also, I don't think that 1/0 is the same thing as the infinity in the projectively extended real line and positive and negative infinities in the affinely extended real line.
@ Inaccuracies of the specific example aside, that misses the point of the entire second paragraph, which is that math is a system of objects and rules. Define the objects you’re working with and the rules that apply to them, and anything you can do with those objects within those rules is possible. If I wished to, I could absolutely define absolute infinity to be the multiplicative inverse of 0 and ignore the concept of positive or negative infinity, and in doing so I could draw the exact same graph but instead of saying the point at 0 is undefined or indeterminate, I could just call it absolute infinity with the inclusion of my rule to define it. What’s important is being able to replicate the steps, which means people only need to know how I did it. Sure, if you’re following someone else’s rules then you’re bound by them, but that doesn’t mean it’s the only set of rules.
@@GameJam230 Oh, I am sorry that I made you write all of this. I am actually fine with the 2nd paragraph; I have nothing against it. I just wanted to point out some minor details, that's all. Nevertheless, I will read your response as I am sure it will be interesting. *After the read:* OK, it is basically a reiteration of the main comment, I just have one disagreement with it. _"The multiplicative inverse"_ is an agreed-upon term that has a very precise definition, which I am sure you know. You can't just arbitrarily assign the term to the new element 1/0 in wheel algebra, because that means you have basically disregarded the definition of the term. Remember that definitions and rules also need to be consistent, as you said in your main comment, so they aren't completely arbitrary. Some definitions and rules are incompatible with each other: for example, you can't have a ring whose 0 element is multiplicatively invertible without it being the trivial ring, which basically means that in a non-trivial ring we should accept that 0 can't have a multiplicative inverse.
@ I want to be clear that in my second comment I wasn’t implying 1/0 WAS the multiplicative inverse of infinity in any existing system, just that somebody COULD define it to be and then perform math with it. If I were to leave all existing rules of math required to graph y = 1/X alone and ONLY added a single new rule that said 1/0 equaled “absolute infinity”, the graph would look exactly the same but I’d be saying there is a value it is equal to instead of saying the value cannot be determined. I’m not saying any existing system DOES work that way anymore, just that if one created such a definition, doing math with it would be entirely possible, same as zero divisors and square roots of negatives.
@DrTrefor This comes up a lot in Electrical Engineering, especially in signals, where a linear system must preserve the frequencies present in the input signal without adding any new ones. If you had an input V(t) = sin(t), it has one frequency component; if your system maps x to x + 1, the output is now sin(t) + 1. The 1 has added a DC bias to the signal, shifting it up; this is an added frequency component (0 Hz). So the nuance between linear and affine is very important in my field :)
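The DC-bias point above can be demonstrated numerically. This is a minimal illustration of my own (not from the comment), using only the 0 Hz bin of the discrete Fourier transform, which is just the sum of the samples:

```python
import math

def dft_bin0(samples):
    """DC component (0 Hz bin) of the DFT: X[0] = sum_n x[n] * e^0."""
    return sum(samples)

N = 64
t = [2 * math.pi * n / N for n in range(N)]
linear_out = [math.sin(x) for x in t]        # linear system output: no DC component
affine_out = [math.sin(x) + 1 for x in t]    # the "+1" (affine part) adds a 0 Hz component

print(abs(dft_bin0(linear_out)))   # ~0: no DC bias
print(abs(dft_bin0(affine_out)))   # 64 = N: a new 0 Hz component appeared
```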
I guess words mean different things in different contexts. A bounded linear functional need not be bounded (in the function sense, even though a functional is a function).
I only feel strongly about two of them: 0 is a natural number (it's the type of the answers to questions like "how many cows do you own?"); and 0^0 = 1 (the product of no things is 1, just like the sum of no things is 0). People who are thinking about limits should know that the function x^y is not continuous at x=0, y=0. There's nothing wrong with non-continuous functions existing.
I say 0 isn't a natural number, also because of counting. N is the basis for countable infinity: if something is countably infinite, we can biject it with N. Equivalently, we can define a first element, bijected with 1, a second element, bijected with 2, and so on. And for x^y, any value could work just as well as 1, because none would make it continuous. I'm curious though. Would you also consider ∞⁰ to be 1? With ∞ the extended real number, or possibly the element of our wheel or the Riemann sphere.
@@xinpingdonohoe3978 Computer programmers have discovered that we should think of "first" as mapping to the number 0, "second" as 1, etc. We do the same with age, which starts at 0 when you are born. This is a good thing. The reasons for 0^0 being 1 have nothing to do with continuity. How do you multiply a collection of things? You start with a 1 and you multiply each of the things into it. If you are multiplying an empty collection of things, the answer is 1. This is the case even if the things you don't have any of were zeros.
@alonamaloh indeed. That's why we call them computer programmers, and not mathematicians. Also, not everywhere follows that convention with age. The empty product doesn't really matter for the multiplicative absorber. For other stuff, it's necessary that a^0 = 1: we can use
a^0 × a^b = a^(0+b) = a^b
which tells us we should expect a^0 to be 1, as a^b is an arbitrary number. But for 0?
0^0 × 0^b = 0^(0+b) = 0^b
If b > 0, then 0^b = 0, so any value of 0^0 works. If b = 0 and we assume it's defined, then 0^0 could be either 0 or 1, as both satisfy x = x², and the empty product argument can't single out a valid candidate because both are perfectly valid. For all other complex b, 0^b is unambiguously undefined, so why are we even trying it?
@@xinpingdonohoe3978 Computer programmers have found better conventions, because getting the conventions right matters more in programming than in math. We would do well to follow their wisdom.
@alonamaloh not at all. Their field is a lot less rigorous. They accommodate mistakes all the time. We can contemplate taking them seriously when their field can compute 0.1 + 0.2 correctly.
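For anyone who hasn't seen the 0.1 + 0.2 jab before, it refers to binary floating point rounding, which a quick sketch makes visible (exact rational arithmetic, by contrast, has no such problem):

```python
from fractions import Fraction

# Binary floating point cannot represent 0.1 or 0.2 exactly,
# so their sum picks up rounding error:
print(0.1 + 0.2)            # 0.30000000000000004
print(0.1 + 0.2 == 0.3)     # False

# Exact rational arithmetic gets it right:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```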
Good point, though I guess it's not really all-encompassing. Consider the series
Σ[n=0,∞] n^(1/ln(n)) / n!
As n→0⁺, n→0 and 1/ln(n)→0, so the first term takes the form 0⁰/0!. And yet the series would sum to e², because for each n ≥ 2 we'd have
n^(1/ln(n)) = e^(ln(n)/ln(n)) = e
The limiting value is what gets used, so in this case we find 0⁰ = e.
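A quick numeric check of the identity used above (my own sketch; note the n = 0 and n = 1 terms are undefined as written and are taken at their limiting value e, which is exactly the commenter's point):

```python
import math

# For n >= 2, n^(1/ln n) = e^(ln n / ln n) = e, so each term of the
# series is e/n!; taking the n=0 and n=1 terms at their limiting value e,
# the sum is e * sum(1/n!) = e * e = e^2.
for n in range(2, 10):
    term = n ** (1 / math.log(n))
    assert abs(term - math.e) < 1e-12

partial = sum(math.e / math.factorial(n) for n in range(0, 30))
print(partial)  # ≈ e^2 ≈ 7.389056...
```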
@@xinpingdonohoe3978 Yeah, this would not be possible if 0⁰ were anything but 1. Since e⁰ is 1, 0⁰ = 1. If you were to say that this is false, you would have to make the sum start from 1, not 0
@@xinpingdonohoe3978 maybe I confused this one with the series definition of e^x: if you plug in x = 0, the first term will be 0⁰/0! and all other terms are 0, but e⁰ is 1, since x⁰ for all real numbers is 1, so the term 0⁰/0! must be 1 and thus 0⁰ is 1. Otherwise this series would have to be re-indexed so the sum starts from 1. I'm not sure how this works in your sum, but we will run into the 0⁰ power when we plug in 0
I would be curious what the background of the majority here was. A lot of these answers seem to come from the perspective of an applied mathematician rather than a pure mathematician. I might be wrong, but it makes me wonder. It would be interesting to see the breakdown of answers based on the specific field of study
I suspect most respondents were at best recreational mathematicians, and probably a fair number were students. I personally find it hard to believe a survey with a sizable number of professional mathematicians would have "1/x is not continuous" as a majority. The linked survey in the description mentions it was sent to various places. It also states "This project was supposed to be a joke, so I did not have high standards of scientific rigor in mind when conducting this survey."
@@pixel-hy4jx I checked a bit, and it isn't quite that. From what I understood of the methodology portion, the author checked that each participant was associated with a university, so it's not entirely random people. But indeed, it doesn't give much insight into what most of these participants specialise in
@@methatis3013 the author might’ve checked if the person gave an association with a university, but they say that the chart at the beginning only shows the breakdown of those who gave it. That association to a university also doesn’t mean “professor” or even “mathematician”, it just means they are currently at a university in some capacity (ie most likely a bunch of undergrads)
80% of the people said numbers are real so it's likely mostly mathematicians. I doubt anyone applying math as an engineer or physicist or chemist or computer scientist would answer that numbers are real. Pure math is really just another way of saying "Idealism" math.
I think the best explanation/argument for why 0^0 (in most cases, not all of course, as we saw with limits) should be defined to be 1 comes from our understanding of the empty product.

For example, when we write something like 5 * 2, we can think of this as starting at 5 and then adding 5 again to get 10, but a more helpful way of thinking of it is starting at the additive identity, zero, and then adding 5 to zero two times. 5*2 is adding 5 to zero 2 times, 3*6 is adding 3 to zero 6 times, etc. Understanding this explains why 0 * x = 0 for all x: we add x to zero 0 times, so we stay at zero.

The same process works for exponentiation. People will explain that x^0 = 1 makes sense because 1 = x/x = x^(1-1) = x^0 for nonzero x, and they'll also use this to justify why 0^0 = 1 cannot be true, since you would be dividing by zero. First of all, notice that this logic would imply that no powers of 0 can work, since for example 0^2 = 0^(3-1) = 0^3/0, but of course we know 0^2 = 0; so this method doesn't make sense for zero and shouldn't be used in the 0^0 case either.

Next, the empty product can be used to understand why x^0 = 1, so we don't have to use the division method. Just like in the multiplication case, we start at the multiplicative identity when doing exponentiation, which is 1 in this case, so we interpret 5^3, for example, as multiplying 1 by 5 three times, or 1*5*5*5, so 5^0 = 1 since we multiply 1 by 5 zero times. This is no different for 0. 0^4 would be multiplying 1 by 0 four times, or 1*0*0*0*0, so 0^0 = 1 since we multiply 1 by zero 0 times.

The example you gave with the binomial theorem and Pascal's triangle is good since it shows why the definition 0^0 = 1 is useful, as in other cases like Taylor series. But I think this explanation is the best because it explains why 0^0 = 1 makes sense fundamentally, rather than why it's useful in certain contexts.
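The "start at the multiplicative identity" idea above translates directly into code. A minimal sketch (function name is my own); the n = 0 case is the empty product, and the loop simply never runs:

```python
def power(x, n):
    """x^n for a non-negative integer n, built as a product starting from
    the multiplicative identity 1 (so n = 0 gives the empty product)."""
    result = 1
    for _ in range(n):
        result *= x
    return result

print(power(5, 3))  # 1*5*5*5 = 125
print(power(5, 0))  # empty product = 1
print(power(0, 4))  # 1*0*0*0*0 = 0
print(power(0, 0))  # empty product = 1, even though the base is 0
```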
you can even use 0^0=1 in calculus. But then keep in mind that (0,0) is a singularity of f(x,y)=x^y. It's not differentiable there and that's a shame for an analytic function 😂
@@kruksog It still goes back to 0^x vs. x^0 which was the problem from the start. As a hobby I'm developing a theory that x^0 is 1+ an infinitesimal multiple of log(x). This infinitesimal term can be neglected anywhere except x=0. Needless to say it's not going very fast.
@@Milan_Openfeint You can define x^0 in an arbitrary monoid, though; not so for 0^x. And the monoid definition leaves us no option other than 1. We _can_ try to make piecemeal definitions for idempotent x taken from a semigroup and end up with a collision, but I hope it's pretty evident why that's a bad way to generalize.
I think the distinction between whole number and natural number in that way makes a lot of sense, but unfortunately there isn't really a popularized "blackboard bold" symbol to denote the whole numbers like there is for the naturals
@@yfidalv there are enough symbols. They can frequently use N and Z+ for {1,2,...} and W or N_0 or Z+_0 for {0,1,...}. Alternatively, we could just use {1,2,...} and {0,1,...}. We could even use either Z∩[1,∞) or Z∩(0,∞), and then Z∩[0,∞).
@@IsaacDickinson-tf8sf The word "integer" literally means "whole number", so saying that half the integers are not whole numbers (handwaving what "half" means for an infinite set) makes a mockery of the etymology. Which admittedly isn't the most compelling argument, considering how many words are used to mean things other than what their derivation would suggest. There are plenty of people who use "whole numbers" to mean "numbers without a fractional part". And then there are "counting numbers" which often start with 1, but some people like to be able to count a null set too. Some people like to define counting numbers as excluding zero while natural numbers include it, others like to define the two as synonymous terms, and there are even some who include 0 as a counting, but not a natural number.
It should be noted that 0^0 = 1 is the only non-contradictory value for 0^0 that agrees with the usual rules of powers (a^(n+m) = a^n * a^m, and the all so mighty a^0 = 1). The only reason 0^0 is sometimes called undefined is that the function x^y is discontinuous at (0,0); however, many well-defined functions are discontinuous at some points :)
Also, if we assume 0^0 = 1, the discrete Fourier transform of a grid-aligned cosine will give exactly 0^x for the whole spectrum, which makes the whole DSP convention (sinc(0) = 1 and the rest of normalized trigonometry) consistent, unlike other values. So yeah, imho 0^0 = 1 is by far the most elegant convention in most use cases.
@@awesomedavid2012 Do you have a problem with 0^2=0? Look: 0^2 = 0^(3-1) = 0^3 * 0^(-1) = 0^3 / 0^1 = 0/0. If you multiply and divide by zero, you'll get nonsense. But this doesn't mean there is a problem with the thing you started with.
I really like this video because it doesn't sensationalize about the topic - mathematicians aren't at odds about any of these, they're just trivial edge cases in which each discipline has chosen a convention that works for them. One of the great things about math is that it's both discovered and invented, and these examples illustrate that perfectly.
Fwiw, it's not always because of specific disciplines... some conventions were made for cultural or political reasons (e.g. New Math redefining whole numbers, and I think whether 0 is natural or not).
For the 6 / 2(1+2) thing, I’ve noticed I have a really weird way of ordering. The order is “do whatever is written closer together first”. Which means, “6/2 (1+2)” and “6 / 2(1+2)” are interpreted differently for me. I think it’s because when doing math work with pen and paper, often I just skip a bracket here or two if the process is extremely obvious. In these cases, the way how close I write everything together usually implies priority of things. This is super evident with “6 / 2x”, where it’s super clear that it’s dividing by “2x”, because there is just never a reason to write down “6/2” times “x” as a step in a solution. So the bracket around “2x” often just becomes implied. This happens a lot when the paper doesn’t have enough space to write a horizontal division line. So it’s also important to draw the diagonal division line reeeeeally long to indicate we do it last.
Not sure I get your point. This problem is all about conventions that were made to simplify writing, and especially printing, because it was so much easier and thus cheaper to print inline expressions than to deal with a horizontal line every time there's a division, and leaving out some brackets made things shorter and less cluttered, at the cost of every reader now having to remember some additional notation rules. But! No matter how you explain each convention, the key thing for this viral problem is that these conventions didn't spread to the whole math world, so different people interpret this expression differently, depending on how they were taught in school
🟠🟠 The calculation of 6 / 2(1+2) is logically done by computing 2(1+2) first. 2 is a symbol, and 'a' is a symbol (numbers are included among symbols). Why do we calculate 6/2 first when 'a' is replaced with 2, but calculate a(1+2) first when 2 is replaced with 'a'? That's illogical, because it performs different actions in identical situations. We use 'a' instead of 2 either because we don't know the value of 'a' yet, or because we do know it but use the letter for computational convenience. Therefore the same formula should be solved using the same method, whether it is written with 'a' or with 2.
People are being taught that 2(x + y) = 2 * (x + y). It doesn't. It actually means (2 * x + 2 * y), or ((x + y) + (x + y)). So 2(1 + 2) means (2 * 1 + 2 * 2), which is 6. The expression is now 6/6, which is 1.

Here is a simple explanation of how people get it wrong. If x = y, then x/x = 1, x/y = 1, y/x = 1 and y/y = 1. So let's say that x = 2(5) and y = 5(2). Obviously x / x = 2(5) / 2(5) = 1, but people rewrite it as 2 * 5 / 2 * 5 and end up with 25. x/y is rewritten as 2 * 5 / 5 * 2 and ends up with 4 :/

Likewise, a/bc means a / (b * c), not a / b * c. Interestingly, Mathematica is often used by some people to "prove" that a/bc is a / b * c, but they neglected to read the documentation, which states that while a/bc does indeed mean a / (b * c), they _chose_ to ignore this and implement it as a / b * c.

The problem is that people believe PEMDAS is "the order of operations"... it's not. It is "the order of BASIC operations", i.e. + - / * ONLY. 2(1 + 2) is essentially a function application, which is NOT covered by PEMDAS.
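Worth noting that programming languages sidestep the juxtaposition debate entirely by requiring an explicit operator; a quick sketch (in Python, juxtaposition like `2(1+2)` isn't multiplication at all, it parses as a function call):

```python
# With an explicit operator there is no ambiguity: division and
# multiplication have equal precedence and associate left to right.
print(6 / 2 * (1 + 2))    # 9.0
print(6 / (2 * (1 + 2)))  # 1.0

# "Multiplication by juxtaposition" is not an operator in Python:
# 2(1 + 2) parses as "call the object 2 with argument 3" and fails.
try:
    eval("2(1 + 2)")
except TypeError as e:
    print(e)  # 'int' object is not callable
```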
@@Kyrelel > Obviously x / x = 2(5) / 2(5) = 1, and then 2 * 5 / 2 * 5. I would instead write (2 * 5) / (2 * 5), since I am unpacking the "/". Does this circumvent all possible problems, or do you have another example?
Great video! One more that deserves to be here is what ab means if a and b are permutations: for analysts, it means do b then a (i.e. go from right to left); for algebraists it means do a then b (i.e. go from left to right).
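The two composition conventions mentioned above really do give different answers for non-commuting permutations; a small sketch of my own (permutations as tuples, where p[i] is the image of i):

```python
def compose_rl(p, q):
    """The analysts' reading of "pq": apply q first, then p (right to left)."""
    return tuple(p[q[i]] for i in range(len(q)))

def compose_lr(p, q):
    """The algebraists' reading of "pq": apply p first, then q (left to right)."""
    return tuple(q[p[i]] for i in range(len(p)))

p = (1, 0, 2)  # swap 0 and 1
q = (0, 2, 1)  # swap 1 and 2

print(compose_rl(p, q))  # (1, 2, 0)
print(compose_lr(p, q))  # (2, 0, 1) — the two conventions genuinely differ
```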
0:10 At source, "mathematician" appears to be very loose: basically anyone who has an interest in math entertainment (e.g. many responses were recruited from social media and math entertainers' fan groups) - not simply professional mathematicians.
I'm one of the mathematicians who filled out the survey and said it was discontinuous. Granted, I was not thinking precisely about the question, considering there are 100 questions to get through. But I don't think you can expect people whose main interest is algebra, combinatorics, logic, or literally any other subject besides theory-heavy analysis to remember the exact definition of continuity. And besides, in some areas of study, people consider functions up to an equivalence class (agreeing almost everywhere, i.e. other than on a set of measure zero), and in that sense 1/x is not continuous. I probably would say it is continuous now, after remembering the topological definition involving preimages of open sets, but again, there are 100 questions, and I'm not treating it as an exam where each question is very serious. Please do not discount the author's work or the survey. I know numerous people in my department and advanced undergrads/grad students from other universities who completed the survey. The math Discord servers that the surveys are shared in are pretty elite.
For the linearity property, I've seen some schools use the term proportional instead. Generally speaking, saying y is proportional to x means that y = mx, where m is the constant of proportionality, which can be interpreted as the slope when graphing. Since this interpretation has no y-intercept, it fits the definition of linearity.
I am consistently part of that 1/3rd in the second half of the video and surprised that 2/3rds disagree. Especially baffling to me is that 2/3rds of people apparently think f(x) = 1/x is not continuous, based on a perceived discontinuity at a point at which it is not even defined! Really, part of the problem is that something is missing in the question statement; rather than just giving an expression like f(x) = 1/x, to define a function you should really also give the domain and codomain that it maps from and to. (This is part of the formal definition of a function in set theory.) Presumably in this case f(x) = 1/x is meant to be a map from R - {0} to R, and that map is very clearly continuous. Additionally, even though it is true that it can't be continuously extended to a map from R to R, it can be extended continuously to a homeomorphism of the extended real line, or of the Riemann sphere. I think the only way you might think it's not continuous is if you take the "a continuous function is one that you can draw without lifting your pen" intuition too literally. After how broad "continuous" becomes with topology, that intuition isn't always too helpful anyway.
Maybe the axiom of choice is like Euclid's parallel postulate. It's very specific and contrived and leads to a mathematical object that is very well-behaved and familiar, but without it there can be other objects that are useful and less restricted. Although, I've heard people say that AoC leads to paradoxical conclusions, and maybe Euclid's 5th postulate doesn't really do that. They are both oddly specific axioms though.
One point when it comes to the question of a continuous function: in real analysis we often use a topological definition of continuity: a function f is continuous if and only if the preimage (with respect to f) of any open set is an open set. Using this definition (which is standard in most real analysis books), if f(x) = 1/x, it is possible to show that the preimage of an open set will again be an open set (in the domain of f). So I don't think there is any discussion of whether 1/x is continuous. It simply is.
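The equivalent ε–δ statement can even be checked numerically at points of the domain. A sketch of my own: for f(x) = 1/x at a ≠ 0, the choice δ = min(|a|/2, εa²/2) works, since |x−a| < |a|/2 keeps |x| > |a|/2 and then |1/x − 1/a| = |x−a|/(|x||a|) < ε:

```python
import random

def delta_for(a, eps):
    """A delta witnessing continuity of f(x) = 1/x at a point a != 0 of its
    domain: |x - a| < delta implies |1/x - 1/a| < eps."""
    return min(abs(a) / 2, eps * a * a / 2)

random.seed(0)
for _ in range(10000):
    a = random.uniform(-10, 10)
    if a == 0:
        continue  # 0 is not in the domain, so no continuity claim is made there
    eps = random.uniform(1e-3, 1.0)
    x = a + 0.999 * random.uniform(-1, 1) * delta_for(a, eps)
    assert abs(1 / x - 1 / a) < eps

print("epsilon-delta check passed at every sampled domain point")
```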
Ya absolutely from a topological perspective this isn’t really controversial. But it sure surprises calculus students that it would ever be continuous!
@@DrTrefor I agree it may be counterintuitive. But it is no longer a question of "definition". It is now a question of what is actually correct and incorrect. The topological definition of continuity is the strongest one we have, and I think most people dealing with real analysis would agree. I think this again relates to my previous comment that is curious about the breakdown of the background of people who were a part of this research. It seems to me like the ones who would say f is not a continuous function would usually be applied mathematicians, while those who would call it continuous are usually pure mathematicians. This is, of course, just my conjecture, but I would be curious to see if my guess is right
@@methatis3013 It is still about definitions. The topological notion of continuity being the "strongest" in and of itself requires some agreed-upon idea about what "strongest" means. At the end of the day, _any choice of words or symbols is arbitrary_ and thus some form of "definition". There can be no notion of "correct" when it comes to _writing_ mathematics, there can only be convention.
@@methatis3013 I don't know how it is in other countries, but in Germany, all students who study anything with mathematics, be it electrical/structural engineering, even those studying at schools of applied sciences, which don't focus on theory that much, learn the formal definition of continuity, at least over the real numbers.
The way I visualize 0^0 = 1 comes from a conversation I had with a friend several years ago while wondering how negative exponents work. This only works for integer exponents, I think, but visualize n^x as a number line that starts at 1 for index 0; then you multiply the 1 by n x times for positive exponents, and divide it by n x times for negative exponents. So, for example, 2^3 is 1 * 2 * 2 * 2 = 8. 2^0 is just 1, with no step in any direction. 2^-2 is 1 / 2 / 2 = 0.25. As a result, this also means that 0^0 is 1 with no step in any direction as well.
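The "number line of steps" visualization above is easy to sketch in code (a toy of my own, for integer exponents only, exactly as the comment describes):

```python
def int_pow(n, x):
    """n^x for integer x, walking a number line that starts at 1:
    multiply by n for each step right, divide by n for each step left."""
    result = 1
    for _ in range(x):       # positive exponents: steps to the right
        result *= n
    for _ in range(-x):      # negative exponents: steps to the left
        result /= n
    return result

print(int_pow(2, 3))   # 1*2*2*2 = 8
print(int_pow(2, -2))  # 1/2/2 = 0.25
print(int_pow(2, 0))   # no steps: 1
print(int_pow(0, 0))   # no steps: 1, regardless of the base
```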
That is true. In France, they say that 0 is both positive and negative. If they want to say a number x is greater than 0, they say x is "strictly positive." Similarly, if we wish we can say f(x)=1 is an increasing function and that f(x)=e^x is strictly increasing.
I'm having a lot of trouble with the fact that f(x) = constant can be described as increasing. Surely the value of mathematics lies in its ability to describe and model the universe and our actions and experiences in the universe. I know a little bit about mathematics and a little bit about words. I thought I knew what the word "increasing" meant, but now... I imagine someone having the following experiences:

"Boss, remember when we talked last month you said that I'd get a pay increase, but I just checked and it's exactly the same." Boss: "No, no, no. We totally increased it. Go out to dinner and celebrate."

At dinner: "I ordered my steak well done, but this is undercooked." Server: "I could increase the cooking time." *picks up the plate and puts it back down on the table* Diner: "But it's the same!" Server: "No, I increased the cooking time."

Arriving home from dinner to find the house on fire: "Increase the water flowrate!!!" Fireman: "Yep. Done." Now, with no house to live in... *boards plane to go live with relatives in another city*

Copilot to pilot: "We're about to fly into the side of that mountain. Better increase our altitude." Pilot: "Done." Co-pilot: "But..." *Crashhhhh*

On the mountainside, surrounded by debris, paramedic 1 says: "Increase the pressure on that wound or he'll bleed out." Paramedic 2: "OK. Done it. But, he's..." 🤔🤷🏻‍♂️
The trouble is that if you don't count f(x)=constant as increasing, you also don't count a staircase as increasing because it has horizontal bits in. The constant is the weird corner case of allowing functions to still count as increasing even if they have flat bits in.
@rmsgrey I'm still not getting it. 🤔. I've certainly heard of stepwise functions or staircase functions. Surely they don't increase over the interval where they have a constant value. If they step up to a higher value then they've increased. I was talking about a function with a constant value for its entire domain of inputs, Because I thought that's what the video was referring to. I'm not sure how a client would explain to a builder that they want a staircase just made out of flat bits, but no risers, but they want it to reach to the next floor up.
@@russelleverson9915 If you try coming up with a simple mathematical definition of an "increasing" function, the simple ones are "a function f(x) such that whenever a < b, f(a) < f(b)" (strictly increasing) and "a function f(x) such that whenever a < b, f(a) ≤ f(b)" (weakly increasing, or non-decreasing). The strict version rules out any staircase function with flat bits; the weak version lets staircases in, at the cost of also counting constant functions as increasing.
Relations are usually assumed to be strict by default, yet for some reason the opposite is true here. Consider a function whose derivative is not constant; essentially, a function with more variation than an affine one. Then, were the function to generally increase, but plateau on some interval, would you prevent yourself from referring to it as increasing just because it does not strictly do so along its whole domain? Defining increasing as f(b) ≥ f(a) for all b ≥ a allows such a notion to hold, by describing the general tendency. This does have the side effect of considering a constant function to be both increasing and decreasing, although not strictly so. 1 ≥ 1 ≤ 1, so what? One ought to make use of the appropriate comparisons. I for one hold the belief that a shift to symbolic notation in place of ambiguous natural-language expressions would prevent such issues from arising.
I think I got it! A definition that solves all these issues. We consider staircase functions as increasing because they increase somewhere, even though they stay constant over some intervals. We should call a function *increasing* if it increases somewhere. This rules constant functions out of being increasing. The problem is, some functions are increasing over some intervals but also decreasing over other intervals. Should we call them increasing or not? I will just call them increasing, and use other terms for the functions that increase without ever decreasing. Next, call the functions such that whenever a ≤ b we have f(a) ≤ f(b) *non-decreasing*, and the functions such that whenever a < b we have f(a) < f(b) *strictly increasing*.
5:37 OMG. I was in a Comp Sci (Python) class and I was programming the Monty Hall game. In my program, I commented that "0 is not a natural number" because this helped my algorithm randomly select another curtain using mod. The professor took off 0.5 pts and sent me dissertations explaining why 0 is a natural number. I know my major was math, but in the back of my head I was like, "This is a doggone basic computer science class, not math!"
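A hypothetical reconstruction of why 0-based labels help there (my own sketch, not the commenter's actual code): with curtains labeled from 0, "move to another curtain" is a single mod expression, while 1-based labels force a shift in and out of mod arithmetic:

```python
# 0-based curtain labels {0, 1, 2}: "the next curtain" is one mod step.
def next_curtain_zero_based(c):
    return (c + 1) % 3

# 1-based labels {1, 2, 3}: the same idea needs a shift down and back up,
# because c % 3 would map curtain 3 to 0, which is not a valid label.
def next_curtain_one_based(c):
    return (c - 1 + 1) % 3 + 1

print([next_curtain_zero_based(c) for c in (0, 1, 2)])  # [1, 2, 0]
print([next_curtain_one_based(c) for c in (1, 2, 3)])   # [2, 3, 1]
```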
My background is in signal processing. I'm left confused by 1/x being called continuous. I was taught that a function f is continuous at a point a if the limit from the right at a equals the limit from the left; the definition does not require a to be in the domain of f. By this definition 1/x does not appear to be continuous at x=0.
Continuity also needs a third value, f(a), to equal those limits. Otherwise a function with a removable discontinuity (like a missing point) would count as continuous by your definition.
Continuous means continuous on its entire domain. The function 1/x has R\{0} as its domain, and it's continuous at those points. But it has no extension to R that keeps it continuous.
There is one more thing to add about 0^0. I agree that it can be used to represent an indeterminate form; however, the algebraic definition of x^n, where n is a non-negative integer, is x multiplied with itself n times. x^0 is therefore the empty product, which equals 1. I find it annoying that people focus in on the continuity of x^y as if this were the sole arbiter of function definitions. If we have a sensible definition like the one for x^n and it has a natural expansion to 0^0, then that should be the definition, unless we find that something else works better, like 1 not being prime. It is also funny that there are a lot of people insisting that it is undefined, only to then use it as 1 in the formulas that you refer to. Taylor series are probably one of the best examples, as x is clearly meant to be a variable and 0 is usually in the domain of the base number. I doubt many of these people write T(x) = f(a) + \sum_{n=1}^\infty ... That being said, I don't think this is an argument for 0^0 = 1 as much as it just confirms that the sensible definition is useful. If it had turned out that 0^0 = 0 worked best for most formulas, then that would be grounds for changing the definition.
Completely agree with you! I want to point out something tangentially related to your post. You mention 1 not being prime. It's kind of funny, because if you use the abstract algebra definition of prime, and then expand it from binary products to arbitrary finite products, the empty product allows us to see that 1, and units more generally, aren't prime without explicitly excluding them. Take this definition of prime element in a ring: Let p be a nonzero element of a ring R. Then p is prime if, whenever p divides a product of elements in R, then there exists a factor of that product which p divides. All units (and only units) divide the empty product; however, the empty product has no factors. So units can't be prime if we take the above definition of prime. It's astonishing to me how many of those situations where people give explanations like "Well, you would expect [insert math thing here] to be true, but _out of convenience,_ we say it isn't true in this case" can actually be explained by empty operations.
@@MuffinsAPlenty Nice! Yeah, most exceptions are a sign that a better definition is lurking out there. Not sure that I have seen this one before. There is a more crude one if we stick to the natural numbers: A number is prime if it has exactly two divisors. Still a bit awkward.
I agree. Also notice that every single time a math problem calls you to use 0^0, it works best when it's 1. You *only* run into issues when you are looking at x^y when x and y are close to 0 and assume it's close to 0^0.
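The empty-product view of x^0 argued above can be sanity-checked in Python, which exposes a product over an iterable and also happens to take the 0^0 = 1 stance in its built-in exponentiation; a minimal sketch:

```python
import math

# The empty product: multiplying together no factors at all leaves
# the multiplicative identity, 1.
assert math.prod([]) == 1

# Reading x^n as "the product of n copies of x" makes x^0 the empty
# product for every base, including 0:
assert math.prod([0] * 0) == 1   # 0^0
assert math.prod([0] * 3) == 0   # 0^3
assert math.prod([7] * 0) == 1   # 7^0

# Python's built-in exponentiation agrees:
assert 0 ** 0 == 1
```

Note that `math.prod` defaults its `start` value to 1, i.e. the library itself bakes in the empty-product convention.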
@@JahacMilfova OK. Then I'll say one thing: A calculator is programmed to give an answer but it doesn't tell you why it is true. Would a calculator that says 1÷0 = 0 be correct?
@@JahacMilfova But it just outputs an answer without giving any explanation as to why it is correct. And it is possible that it gives a wrong answer that can be proven wrong by simply going back to the definitions and analysing them.
we have a term that includes 0: whole numbers... I agree that "whole numbers" seems relatively confusing due to the concept of "whole"... whole numbers are nonnegative integers and natural numbers are positive integers (counting numbers)... this categorization works, imperfect with terminology as it is...
At 12:35, I hope I am understanding this correctly: essentially, assuming the axiom of choice would mean that a function f(x) = y with an infinite domain of objects has, for every input x, strictly one output y. Couldn't that be argued as an extension of transitivity? E.g., if the axiom of choice were not true, then f(x) = y, f(x) = z, and y != z would be a possible combination of equalities, since an input x would not necessarily need to output y; but this would contradict transitivity, since if f(x) = y and f(x) = z, then y = z.
Not exactly. It's saying that the Cartesian product of an arbitrary collection of non-empty sets is a non-empty set. This sounds natural enough, until you start to see how it can be abused to prove very strange things. Perhaps the most shocking one is the Banach-Tarski paradox. A favorite of mine is that the function from real numbers to real numbers f(x)=x is a sum of two periodic functions.
In the case of 0^0, it is instructive to remember that, in general, x^0 = x^(1−1) = (x^1)(x^(−1)) = x/x = 1 (x ≠ 0). 0^0, then, can be expanded the same way: 0^0 = 0^(1−1) = (0^1)(0^(−1)) = 0/0 = indeterminate. This should hopefully be a sufficiently convincing argument as to what value 0^0 objectively takes.
But this uses the rule x^(n−1) = x^n/x, which is not valid for x = 0. If you suppose it is valid even for x = 0, we can use the same reasoning to "prove" that any power of 0 is undefined: 0^(n−1) = 0^n/0 = 0/0. Another thing: "indeterminate" is just a term used to describe limit forms; we are dealing with an algebraic expression, not a limit. Either an expression is defined and has a value, or it isn't.
Interesting. In the school I attended, we rarely used the term "natural" numbers. Just whole numbers and counting numbers. And 0 is a whole number but not a counting number so that provides a difference. Though I have seen books call counting numbers and natural numbers the same thing
Richard Borcherds said it best: what is 0^0? Well, it is either indeterminate, or it is whatever you want it to be, as long as you define what your symbols mean. He further mentions that the above is not the right question to ask; he asks instead what the most useful definition of 0^0 is.
I think starting the natural numbers at 1 gives you the notational advantage of being able to just use N_0 in case you want to refer to {0,1,2,...}. All the profs I've seen using N for N_0 always have to add some sort of ≠0 condition or exclude it some other way, which just makes some exercises/statements/proofs a bit bulkier. But at the end of the day you just get used to the current context (even if slightly annoying at first). Notation will always differ in math. But the math behind it will stay the same :P
Learning pure mathematics, swapping axioms and definitions in and out comes pretty naturally. Think of the parallel postulate in geometry. You can take it to be true and get Euclidean geometry, or leave it out and get other equally valid and interesting types of geometry. The axiom of choice leads to the Banach-Tarski paradox. So if you especially want your higher-dimensional measure theory to be well behaved, it might be best to leave it at home. On the other hand, it is super powerful and useful in topology and other fields, so bring it along for those days.
I'm pretty confused on the axiom of choice and marbles from a bag thing. Is there a finite set of marbles in each bag? What's the question here, eventually? For this to be anything that causes any disagreement that makes sense, I have to guess it's a question along the lines of the probability of picking a duplicate marble in an ordered/ordinal set from an infinite cardinality of bags, or something like that. Am I on the right track here?
In the standard set theory ZF, when not adopting the axiom of choice, it's consistent for there to be a countable collection of two-element sets (think infinitely many pairs of socks), but no (choice) function which picks one element of each of them. Relatedly, ZF does not prove that a countable union of two-element sets is countable again. Adopting the axiom of countable choice resolves this, but then there are still similar examples that are not resolved. The situation becomes a bit more intuitive in the following scenario: Take the set R of real numbers. Consider the set S := P(R)\{{}}, i.e. the set of all non-empty subsets of R. In particular, by definition, S is a set holding only sets that each contain some element. However, without adopting further exotic axioms, it's not possible to name a function f that takes a member U of S and maps it to some value f(U) contained in U. One way of looking at this is: being able to write down a \forall symbol and establish that each set U contains a member does not help much in providing a function (or functional description) that "uniformly" (i.e. in some prescribed way) constitutes such an element picking. Said by analogy: everybody in the classroom having a name is not the same as there being a list at the teacher's desk with everybody's name.
Let X be the set of all non-empty subsets of the real numbers, and R the set of real numbers. Is there a function from X to R that takes every nonempty set of real numbers S to a real number that is an element of S? Since by definition every nonempty set of real numbers has a real number as an element, it seems like the answer should be yes. But if you try to define a specific rule for this function, you quickly find it's not so easy to find one. The axiom of choice allows us to assume there exists such a function, without having to give a rule for a specific function.
We could say that 0^0=1 and also 0^0 is an indeterminate form. Saying 0^0 is an indeterminate form does not mean it is undefined, rather it is a shorthand way of saying the function x^y is not continuous at (0,0).
So you are saying that we should distinguish between the algebraic expression 0^0 and the indeterminate limit form 0^0 ?? Well, that's also my stance. What I don't understand is why are you using the same symbols to denote them? They are not conceptually the same after all.
@@TH-cam_username_not_found That happens sometimes. We can use (2,5) to designate a point in the plane, or it can designate the open interval from 2 to 5. We have to determine which it means from the context. Perhaps there's another notation we could use for the indeterminate form 0^0, indicating the 0 in the expression is some sort of limit that is approaching 0. I can think of some ideas (that I don't know how to type into a youtube comment), but I don't know of a standardized way to do that.
@@roderictaylor What makes it worse is the fact that this abuse of notation confuses calculus students, making them come up with misconceptions like 1/0 = inf or 0^0 = undefined, and also obfuscates the fact that limits are conceptually different from function evaluations and are unrelated to each other. I have seen someone in a comment using the notation (→0)^(→0). It is sensible and intuitive. By the way, there is the French notation ]0,1[. I think it is better.
@@TH-cam_username_not_found That's a good notation. One can write things like (→1)/(→0+) = infinity. One can think of →0 as a sequence converging to 0 but being non-zero, and justify limits this way.
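The (→0)^(→0) idea discussed above is easy to see numerically; a small Python sketch comparing two approach paths toward (0, 0):

```python
# Along the diagonal x = y = t, t^t tends to 1 as t -> 0+:
diagonal = [t ** t for t in (0.1, 0.01, 0.001)]
assert all(abs(v - 1) < 0.25 for v in diagonal)
assert diagonal[0] < diagonal[1] < diagonal[2]  # creeping up toward 1

# But along the axis x = 0, y = t > 0, the value is 0 for every t:
assert 0 ** 0.5 == 0

# So the limit form (->0)^(->0) is indeterminate, even though the
# algebraic expression 0^0 can be defined as 1 (as Python does):
assert 0 ** 0 == 1
```

Two different paths, two different limits: that path dependence is exactly why x^y has no limit at (0, 0), independent of how the single point 0^0 is defined.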
Very interesting video, thanks for sharing it. It has got me thinking about whether these ambiguities in mathematics reveal anything about its nature. Do they point to gaps in formalism, and to the idea that math is invented rather than discovered?
In some sense I think the words from this video, "discovered" and "invented", are the ones that aren't particularly precise. A formal logical system is much more precise.
I think for 0^0 the right answer would be "underdefined": You *can* make sense of the value but it requires extra information about the context. If you take the function (f[x]-f[0])^(g[x]-g[0]) and perturb it a little bit around 0, you get 1 + Log[x f'[0]] g'[0] x + O(x²) So if x=0, *usually* this will just be 1, but if you manage to get two functions such that Log[x f'[0]] g'[0] blows up to counteract the x=0, you can get other answers.
There's no reason to invoke limits when defining 0^0. You can raise any element of a multiplicative group to a natural power, even if there is no notion of limit: Start with the multiplicative identity and multiply it by the element as many times as the exponent indicates; if the exponent is 0, multiply by the element zero times, which leaves you with the multiplicative identity. x^y is not continuous at 0^0, so you have to be careful when computing limits.
@DrTrefor Uhm, you just made a mistake regarding 6÷2(1+2). In your case this is always 9. 6/2(1+2) carries the implication of a fraction rather than a division, which would then be 1: the numerator would be 6 and the denominator would be 2(3), which equals 6, making the total 1. Using an obelus means 6÷2(3) cannot be read as a fraction bar over the implicit multiplication, and therefore can never be 1. However, an obelus can also mean subtraction, in which case simple PEMDAS gives 0, because 6−2(3) = 6−6 = 0.
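Programming languages sidestep the 6÷2(1+2) fight by refusing implicit multiplication entirely; in Python you are forced to write the parentheses that pick one of the two readings:

```python
# Strict left-to-right evaluation of division and multiplication: (6/2) * 3
assert 6 / 2 * (1 + 2) == 9

# The "implied fraction bar" reading: 6 over 2(1+2)
assert 6 / (2 * (1 + 2)) == 1
```

Writing `6/2(1+2)` is simply a syntax error in Python, which is arguably the most honest treatment of the expression.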
It would be interesting to see mathematicians' thoughts on pi vs tau, base 10 vs base 12 (vs base 16), the sum of all natural numbers (i.e. divergent series, or -1/12?), whether 0.999... is equal to 1 or infinitesimally away from 1, and whether mathematics was invented or discovered.
Couldn't you define increasing such that for any x you can choose both y > x such that f(y) > f(x) and z < x such that f(z) < f(x)? This would take care of the strictness issue but still define f(x) = 1 as not increasing.
5:07 The argument for 0^0 being 0 is that, when a > 0, the exponential function a^x is defined to be continuous. 2^(pi) is defined to be the value that makes 2^x continuous at pi. This makes a^x continuous on its domain. If you believe 0^x should also be continuous on its entire domain, then 0^0 should be 0. This is just an example of a limit, which is something you addressed. But the argument for this particular limit being the "correct" one to consider is that it aligns with the definition of the exponential function for strictly positive bases.
I wouldn't define a^x over the irrationals just because it extends a function continuously. I would just denote the outputs of the continuous extension with something else just like how one also denotes the output of the exponential function with exp(x) and I would keep the notation a^b just for the usual values. For example, the sinc function extends x↦sin(x)/x at 0 but no one is claiming that 0/0 = 1
My grade-school math textbooks made a big deal out of defining the 'natural' numbers and 'integers' and quizzing on which ones did and did not include 0, ad nauseam. There was a term for positive integers and another for nonnegative integers, one of which was 'natural'. I don't remember the other one, or which was which. It would be nice if grade-school math textbooks had to get signoff from actual mathematicians.
@@Lily-Carruthers My kids are now learning the same thing. However, mathematicians don't use the name "whole numbers", and things get really dicey if you aren't working in English, since both "integer" and "whole" often translate to the same word ("integer" is the Latin word for "whole").
What is a closed interval? Of course the interval [a,b] is closed. Is the interval [a,infinity) closed? I would prefer it if calculus texts used the term "compact" interval for [a,b], which is less ambiguous.
WRT 0^0=1 - there's another justification which I find much more compelling than "because it's convenient for these results". Powers are repeated multiplication. To multiply, you need something to start with that you'll multiply. For the number of multiplications to match the power, that's not the number you're raising - it's most logically the identity for multiplication, i.e. 1. So 0^x is zero only providing x is not zero - because you're starting with 1 and multiplying it by zero x times. But for 0^0, you start with 1 and multiply it by zero zero times - i.e. you don't do any multiplications, so the original 1 remains unchanged.

This is a logical way to approach powers that is consistent with what we see in algebra, whereas if you don't have that initial 1 to start from, then multiplying together zero instances of any number is equally logically undefined - you don't have any instances of that number to multiply together, or even a single instance as a starting point; the expression you're describing with the power notation is literally a blank space with no defined value. So reasoning from 0^x=0 for x non-zero to the case where x=0 is a fallacy - the repeated multiplication logic leaves that case undefined, and "here's a pattern we'd like to continue" isn't proof of anything.

Redefining positive integer powers using the identity for multiplication as a starting point for the multiplications extends the defined cases to cover cases that are otherwise only given any value by a fallacy, and the logic of the extension isn't arbitrary - it makes sense and is at least as reasonable as defining powers as "multiplying this many instances of a number together". As far as I can tell, 0^0=1 was the standard position in mathematics until limits and analysis were first developed, because there just wasn't anything that contradicted the idea that 0^0=1 before the idea of limits.
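The "start from the multiplicative identity" reading of powers described above translates directly into code; a minimal Python sketch (the function name `power` is just for illustration):

```python
def power(base, exponent):
    """base**exponent for non-negative integer exponents, read as:
    start from the multiplicative identity, then multiply by the
    base `exponent` times."""
    result = 1                    # the initial 1 the argument hinges on
    for _ in range(exponent):
        result *= base
    return result

assert power(0, 0) == 1   # zero multiplications: the initial 1 survives
assert power(0, 5) == 0   # any later multiplication by 0 gives 0
assert power(3, 4) == 81
```

The 0^0 = 1 case falls out of the loop simply never running, rather than being a special case bolted on afterwards.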
To me, that just means limits are not automatically the same thing as the value of the function you're taking a limit of. BTW - to get the limit of (ax)^(bx) as 0 with a and b constant, you have to approach x=0 from a very specific direction - a must be zero and b must be non-zero. Anything else and the limit is 1. You can also use a limit that can approach the point via curves - unless you approach along the straight line a=0 for at least some finite distance, again the limit is 1. It's not just two directions with one yielding the limit of 0 and the other the limit of 1 - you have infinite directions and approach curves yielding the limit of 1, because you have two dimensions-worth of directions and approach curves. However... once you have non-negative integer powers, you can use power identities to define negative and rational powers, but how do you define irrational powers? AFAICT, the only reasonable way is using limits. And having irrational powers be undefined or indeterminate is quite a price to pay for saying "the limit isn't the value", especially if you want to extend from the reals to the complex numbers and write complex exponentials based on angles in radians. And while math may be under no obligation to make limits be the actual values, when a particular kind of math is incomplete there's a simple answer - add more axioms, as long as the results are self-consistent that's a valid new kind of math (axiomatic system), and as long as it matches what you observe in reality, it makes sense there too. "The limit is the value when the limit is defined and unambiguous" seems at least a reasonable axiom for the real (and complex extension of real) numbers. I didn't know about the 1/x one - that's very interesting. I'd also say that one way you can define infinity is by analogy to projective geometry, effectively borrowing some projective geometry axioms. 
If you do that, you can define the value of 1/x where x=0 unambiguously because +infinity and -infinity are exactly the same thing - there's a whole horizon line of infinities, but infinity forwards and infinity backwards are identical. Whether that means the curve is continuous or not through that point could be a controversy in itself, but AFAICT there's no point where a co-ordinate system smoothly transitions from finite to infinite co-ordinates (no largest finite number) - that infinite point is arguably therefore a discontinuous point (disconnected from finite points) even though it's a well-defined point where the curve passes through infinity. On the other hand, infinity is also the limit as you approach that point from either direction - reaching the point you expect as the limit from either direction seems like arguably a kind of continuity.
I have a few points to share. First, we can define x^0 without "starting with 1" versus "starting from a blank space"; look up the empty product. Second, it is true that limits are not equal to the value of the function, but it goes deeper than that. Conceptually, limits and function evaluations are not the same: if we analyse their definitions, we find that limits describe the behaviour of the function near a point, not at the point, as function evaluations do. For this reason, I would rather leave irrational powers undefined. Third, even some rational powers are undefined, because sometimes there is no unique solution to x^n = a; why should you prefer one solution over the others when you define a^(1/n)? So I wouldn't refer to the solutions of the equation by a^(1/n), and would instead use a different notation. This is probably one of the reasons the radical symbol exists.
@@TH-cam_username_not_found WRT "look up the empty product" - it seems to say "it's one" anyway, but via a convention (i.e. assumption or axiom) rather than a logical justification. I mean, I'm basically introducing one axiom rather than another anyway (what precisely the power notation means, at least for positive integer powers), but this doesn't really seem to disagree; it just takes the empty product as something to axiomatize directly rather than "my" slightly more indirect version. By the scare quotes on "my" - I hope it's obvious I'm not claiming it as my idea. AFAIR I was taught this justification in pre-O-level (Brit version of early high school) algebra 40 years or more ago (and it seemed ridiculously obvious even then, which is part of why I remember it), and TBH it seems odd that it's not often mentioned these days AFAICT, as if maybe it's even a point people deliberately avoid mentioning. For 2 - that makes sense, but I'm not sure how irrational powers are normally defined except as a limit. There's infinite series, of course, but their values (when defined at all) are defined as a limit anyway. And of course the fact that commutativity seems to break down for some series - re-ordering the terms gives a different sum - also argues against the limit being the value, though you can equally argue that the proof of commutativity only applies to finite sums. You definitely need powers of irrational numbers - irrational complex powers of irrational numbers, even - for complex exponentials. Except maybe the "behaviour near the point" resolves that - when the behaviour is infinitesimally close to the point anyway, what difference does it make? For 3 - I'd argue that there's a difference between being completely undefined and being multi-valued. Yes, of course rational powers often have multi-valued roots - that's why the concept of a principal root exists. You can still usefully state the set of possible values.
And I'm aware that a set {0,1} of possible values seems valid in that context, but I'm fairly sure any logic leads to some dubiousness WRT 0^0, the real point is a wider understanding of the options and their implications, but I honestly don't remember if I was thinking that way or even thought about multivalued roots when writing my previous comment.
@@stevehorne5536 I was a little imprecise with my wording. The reason the empty product is 1 is similar to the reasoning you shared above, although we treat x^0 as Prod(()). Here it is: Prod(S)×Prod(T) = Prod(S↔T), where S and T represent tuples and ↔ represents the concatenation of tuples. Example: S = (a,b) and T = (c,d,e), so S↔T = (a,b,c,d,e). Of course, we can't define the generalised product function without the binary operation × being associative. Now let's apply this for T = () (the empty tuple): Prod(S)×Prod(()) = Prod(S↔()), but S↔() = S (the empty tuple is the neutral element of concatenation), so we get Prod(S)×Prod(()) = Prod(S). For this to be true for any tuple S, we must have Prod(()) = 1, the neutral element of multiplication. x^n is a product with n copies of x, and for n = 0 we get the empty product. Here, x^0 was never defined as 1 multiplied by x, 0 times. However, to derive this result, we made a huge assumption: that Prod(S)×Prod(T) = Prod(S↔T) is true for any tuple, even empty ones. But how could we check that without first defining Prod(())? The answer is: we can't define Prod(()) recursively like Prod((c,d,e)). We have to start from somewhere else, and we started from declaring Prod(S)×Prod(T) = Prod(S↔T) to be true even for (). We used the property as a definition. I honestly wouldn't write exp(x) as e^x; the fact that the function matches e^n at integers doesn't mean that we should extend e^n to all real numbers. >> *when the behaviour is infinitesimally close to the point anyway, what difference does it make?* There is a difference, and I explained it to you: limits describe the behaviour of the function near a point, not at the point, as function evaluations do. The floor function is an example; its behaviour near the integers is different from its behaviour at the integers.
>> *There's infinite series, of course, but their values (when defined at all) are defined as a limit anyway* If we want to be rigorous, an infinite series is not a sum of infinitely many terms but a limit of finite sums. >> *You can still usefully state the set of possible values* The problem with this is that we can't use operations on the set of values to do algebra, so better to keep it undefined; personally, I would rather use a different notation to refer to the set of values and keep algebraic notation for unique values.
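The Prod(S)×Prod(T) = Prod(S↔T) property from the exchange above can be checked mechanically; a small Python sketch using tuple concatenation for ↔ (the helper name `prod` is just for illustration):

```python
from functools import reduce
from operator import mul

def prod(t):
    # Generalised product of a tuple; reduce's initial value 1 is
    # exactly what makes prod(()) the empty product.
    return reduce(mul, t, 1)

S, T = (2, 3), (4, 5, 6)

# The defining property: Prod(S) * Prod(T) == Prod(S concatenated with T)
assert prod(S) * prod(T) == prod(S + T)

# Taking T = () forces the empty product to be the neutral element:
assert prod(S) * prod(()) == prod(S)
assert prod(()) == 1
```

As the comment notes, the empty case isn't derived from the recursion; it's the `1` handed to `reduce` as its starting value, i.e. the property taken as a definition.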
TBH, no one cares about these splits until someone starts deducting marks over them. Also, for the 0^0 case: since it's most likely to show up in multiplicative contexts, and the 0th term is likely to be a null case, it makes sense to declare the result to be the multiplicative identity rather than the multiplicative black hole.
For ℕ, I have seen text books counting them from 0 and other books counting them from 1. I have also seen one book defining the variant starting from 0 as ℕ₀ to differ from the normal ℕ starting from 1.
@nbooth I have been doubting my comment lately, after reading yours; so much so that I had to check whether I simply remembered wrong. Searching Google for 'natural numbers N N0', there is a section called 'People also ask' with the question 'What does N0 mean in math?'. Clicking on that and following the link to Wikipedia reveals a disambiguation page that suggests the following: 'ℕ₀, the natural numbers including zero'. So that is the second source where I have seen that, no matter how backwards it is. Most books define their conventions early on anyway, so you would know how that particular book interprets the concepts.
For the continuity of 1/x: saying it is continuous is odd, because continuity requires there to be no vertical asymptotes. (There are more requirements, but that is the only one relevant in this context.)
Hmmm, I'm starting to doubt that survey. f(x)=1/x is continuous by definition, and it would be VERY weird to call f(x)=3x+1 linear (affine, yes, but not "linear"). Those seem like errors from math teachers, not people with a degree in math.
@@matthieubrilman9407 They don't. Most people with a degree in math don't end up teaching, as it's not lucrative enough or they simply don't want to teach. As such, people without the degree have to fill the spots.
Agreeing on the basics is the hard part. It's literally the foundation of everything else; almost by definition it requires the most meticulous (borderline pedantic) discussion.
I am really surprised that so many mathematicians answer 1 for the viral problem. I would be curious to see results of the same study but where the subjects didn't learn PEMDAS but some other mnemonic device like "point before dash".
I think it's because we are so used to writing something like 1/2x on the blackboard to mean 1/(2x) where if we meant (1/2)x we would write x/2. That pattern of having implied brackets in handwritten mathematics is quite common, even if it isn't the type of thing that is great for a formal rule.
Basically, you do the same thing in math that you do in language. You don't go by the literal equation alone, but by what it applies to in the real world. That's why context always matters. You can have the exact same sentence mean a thousand things; you don't take things in isolation, but read them through their contexts, through what something actually means. I learned that in high school; I don't know how we forgot it. It's the same in math, I'd suppose: if you needed 0^0 to be indeterminate, 1, or undefined, that's what you'd do in that specific instance, based on the geometry of what you were using, as it relates to that specific shape.
I thought the axiom of choice was controversial because of the idea that it may introduce a contradiction that is not present in ZF. Then I found out that ZFC is consistent if and only if ZF is consistent and I no longer see what the issue is.
The thing is, with exponents we can always assume that everything is multiplied by 1: say 10 to the power 2 is 100, and 100·1·1 is still 100. We also know that anything multiplied by zero is zero, so zero to any power is zero, because zero is multiplied by itself. 0 to the power of anything is 0; 0 multiplied by anything (including 1) is zero. So 0 to the power zero is zero.
0^n is a product with n copies of 0. 0^0 is the empty product, which is equal to the neutral multiplicative element, aka 1 Same goes for 0!, which is less controversially known to be 1
I remember my math teacher stating that parallel lines cross at infinity. I said parallel lines can never intersect, even at infinity. That's why they are parallel, which IMHO doesn't change at infinity.
Here's how I see it: when the power of a number goes up by one, you multiply by the base (e.g. 3^2=9, 3^3=27, 3·9=27); ergo, to reduce a number's power by one, you divide by the base (same example as before, but backwards). To go from 0^1 to 0^0 you divide by zero, ergo the answer is whatever you think 0/0 is, be that 1, 0, undefined, or infinity.
But by the same reasoning, 0^n would be 0^(n−1)/0, which is undefined since we can't divide by 0, so any power of 0 would be undefined, which is absurd. The conclusion: you can't prove that something is undefined based on other undefined expressions.
If it was both of them then we have (1+i)/√2 = √i = −(1+i)/√2 which by transitivity of equality implies that (1+i)/√2 = −(1+i)/√2 which is a contradiction. The same can be said about any mathematical object. It can't equal to 2 non equal things.
@@DrR0BERT I am glad that you agree about this. I thought at 1st that you are unsure and looking for an answer. Well, while it can't equal both of them, one can assign to it one of the values. The question now becomes which one should you choose. We could make a choice based on restricting the domain of x^2 to a "nice set" for example. arcsine is the inverse of the restriction of sine to ]-pi/2,pi/2[. This interval was chosen specifically because it is symmetrical. Maybe there are other ways or criteria to favour a value over the other.
@@TH-cam_username_not_found I have gotten into too many arguments over the fact that once you write down the √ symbol, the choice has been made. Still I get hammered with this. The argument that usually stops it is if √25=±5 then why does the quadratic formula need to include the ±√(b^2-4ac)? The ± would be unnecessary. This is the whole theme of Dr. Bazett's video.
@@DrR0BERT That's not the point of my second response? Your argument is also cool, but I prefer mine, as it is more general and works for any expression. However, that's not what my second response is about. It is about how one makes the choice of _one unique_ value for, say, √25. One could ask: why was √25 chosen to be 5 and not −5? Of course, choosing positive values makes x↦√x a multiplicative function, which is nice, so this choice is better. My personal preference is to keep the √ symbol just for positive real numbers, so anything like √(1+i) is considered undefined; just use a different notation to denote it.
Ah, I wish they had asked about approaches that deny infinities (like real numbers) or the use of counterfactuals in proofs. That seems to be a fundamental divide that can potentially affect what we can prove, and it requires different formulations and proofs for a lot of theorems.
Consider the function f(n)=n^2, where the domain of f is all integers. Is f continuous at 0? In calculus we do not mention this. If we study an advanced real analysis textbook, the answer is yes: a function is always continuous at an isolated point of its domain. OK, so what is the limit of f(n) as n->0? The answer is undefined. So the definition we give students, that f is continuous at a if the limit of f(x) as x->a is equal to f(a), is, in general, wrong. And of course if the domain of f(x) is the closed interval [a,b], we have to modify the definition and say f is continuous at a if the limit of f(x) as x approaches a _from the right_ is f(a), which is highly ad hoc. I find these issues to be a headache when I'm teaching about continuity in calculus. The French have a different definition of the limit that fixes this problem. They define the limit of f(x) as x approaches a to be equal to L if and only if (1) for every delta>0 there exists at least one x in the domain of f satisfying |x-a|<delta, and (2) for every epsilon>0 there exists a delta>0 such that for every x in the domain of f satisfying |x-a|<delta, we have |f(x)-L|<epsilon.
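Spelled out in symbols, this French definition of the limit is:

```latex
\lim_{x \to a} f(x) = L \iff
\begin{cases}
\forall \delta > 0 \ \exists x \in D : |x - a| < \delta, & \text{and} \\
\forall \varepsilon > 0 \ \exists \delta > 0 \ \forall x \in D :
  |x - a| < \delta \implies |f(x) - L| < \varepsilon.
\end{cases}
```

Note that the second condition uses |x - a| < delta rather than 0 < |x - a| < delta, so x = a itself is not excluded; this is what forces any limit at a point of the domain to equal f(a).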
It is this comment that encouraged me to initiate this discussion with you. You shocked me with the fact that functions are continuous at isolated points despite not having limits there. I am curious to see what definition of continuity you are using. The French definition is the one I was taught, and the thing that made me dislike it is that functions like x↦floor(x^2) no longer have limits at the points where the right limit equals the left limit but f(a) is different. By the French definition, if a limit exists then it must necessarily be f(a). I don't know what is superior about it. *Regarding the rest of the comment:* OK, I had to reread it to get the idea. The limit definition was changed this way to guarantee that a function that isn't defined around an isolated point is also continuous at it. Am I getting this right?? If so, then there is something unjustified: why should an isolated point be a continuity point? It's as if there is some other definition of continuity lurking around and you are appealing to it. By the way, I'll be replying to every comment of yours. I hope you don't mind 😊
I forgot to address the part where you mentioned continuity at a for a function defined over [a,b]. If I recall correctly, the English two-sided limit definition contains the proposition "for all x in D such that 0 < |x-a| < delta", which means that right-sided limits are equivalent to two-sided limits when D is [a,b], and the same goes for left-sided limits.
@@TH-cam_username_not_found Hello. In a typical analysis book, the definition is as you say. In introductory calculus books, it depends on the text. For example in Stewart, it assumes the function is defined on every point of an open interval of a except possibly at a itself before defining the two sided limit at a. I do think the definition from analysis textbooks simplifies things.
@@TH-cam_username_not_found You said "OK, I had to reread it to get the idea. The limit definition was changed this way to guarantee that a function that isn't defined around an isolated point is also continuous at it. Am I getting this right??" I suspect that is the case; it does have that advantage. I prefer the definition of the limit typically given in analysis texts, where "the limit of f(x) as x approaches a is L" means that a is a limit point of the domain of f and for every epsilon>0 there exists a delta>0 such that for every x in the domain of f satisfying 0<|x-a|<delta, we have |f(x)-L|<epsilon.
@@roderictaylor Thanks for replying!
>> *For example in Stewart, it assumes the function is defined on every point of an open interval of a except possibly at a itself before defining the two sided limit at a*
I see. Well, it is a little ad hoc if you ask me. The English definition, in the way I and analysis textbooks present it, is more sensible. After all, if there is no left direction toward a, then all directions = right direction; same thing when there is no right direction. Now what about my previous reply? You haven't answered my question about why an isolated point should be a continuity point.
Edit: I didn't notice that you had addressed my other reply later on; I was writing this one when you sent yours.
I don't know if there is a similar term in English, but here in Finland we say that f(x)=1 is increasing/decreasing, but not *truly* increasing/decreasing. It's sometimes very useful to consider constant functions monotone, so I think it's good to separate the two definitions.
We have the same thing in French, constant functions are both increasing and decreasing, but neither *strictly* increasing nor strictly decreasing, in the same way that 0 is both positive and negative but neither strictly positive nor strictly negative.
In the study of real analysis we'd refer to the natural numbers as {0,1,2,...}, so I guess that's not a problem. I've never used the term "whole numbers" since 6th grade.
Whole numbers always included -1, -2, ... when I was in school. Not an English speaking country though. If we talk about "natural" numbers, it's telling that it took at least 1000 years to invent 0. You can have 1 apple, 2 apples, ... but if you include 0, do you also have 0 oranges, 0 plums, 0 footballs? We used N_0 when we needed that in classes, and it wasn't often.
@@Milan_Openfeintthis is why I believe zero isn’t a natural number. Nobody starts counting at zero. We don’t (naturally) think about having zero of something
As a programmer, I've always considered the natural numbers to start at 1. Whole numbers include 0. Ints include negatives. Makes Peano's a bit more tedious to get started, but makes it way easier to define it all with lambda calculus. Then our axioms are beta-equivalence and eta-equivalence.
Here's another example, which I encountered when trying to solve a maths problem in a YouTube video but got the problem definition wrong: in number theory etc., "log" without a given base usually means the natural log, whereas in applied maths it usually means the log to base 10. IMHO, "ln" should be eliminated, "log" without a base should always mean the natural log, and if you absolutely need a log to this rather arbitrary base 10, just add the damned base to it. But unfortunately I'm not in power ;-)
In South Africa (Language = Afrikaans)[direct translation] we have Natural numbers(N) and Counting numbers (N₀). All Natural numbers (and zero) are Counting numbers.
Weird idea: instead of defining 0^0 as 1, let's define 1 as 0^0. Probably not feasible, as exp is defined by multiplication, which is defined by addition, for which we need 1, but I'm curious if anyone has considered this; maybe we can get "everything" from exp.
Zero != null. Zero is a property of an existing object, so it always physically represents something, or the lack thereof. It's not "null". So 0^0 represents an operation on existing objects that results in zero of that object's property. As zero is a property of the object, it's a viable result. You have to recognize the difference between an existing object and its properties. This detail is crucial for constructing real-world simulations.
The Axiom of Choice is an axiom you either use or don't use. There is no right answer; both ZF and ZFC are internally consistent. 0^0 should be agreed upon as not defined. If a mathematician looks at a limit instead, that is not the same as the expression itself. The PEMDAS problem does have a well-defined solution. Whether zero is considered part of the set of natural numbers depends on the problem one is tackling; it can be more or less elegant, depending, to include 0 or not. "Natural number" is a weird concept anyway, while "integer" is well defined, and so are "positive integer" and "non-negative integer".
Saying 0^0 is indeterminate because it can be multiple values is like saying when x^2 = 1, x is indeterminate because it can be multiple values. It's just completely short-sighted. 0^0 and all other indeterminate forms equal the entire domain, either the real numbers or the complex numbers. We just need to accept the obvious fact that these operations and functions that produce indeterminate forms are really just set-valued functions. And as for 1/x, it's not discontinuous because the function is well defined and continuous at 0. 1/0 = infinity, obviously. 1/x is a continuous transformation of the Riemann Sphere, a 180 degree rotation, so it obviously is a continuous function.
As a physicist rather than a mathematician, most of these seem to have pretty obvious "correct" solutions: - 0^0 in isolation is undefined since you don't know what the numbers represent. If you're in a context where you know you're dealing with a variable base and fixed exponents then it makes sense to set it equal to 1, but that seems more like a shorthand instead of a proper definition. - 1/x is not continuous unless you're doing projective geometry or something where your number line wraps around so positive and negative infinity are actually the same point. It seems obviously wrong to call a function continuous if it has a bunch of holes where it isn't even defined, there should just be a different term for that. - A constant function is not increasing, if you define it that way you either have an asymmetry between increasing and decreasing or you have to say it is simultaneously increasing and decreasing which just seems silly. You can just say non-decreasing if you want the looser definition. - A function for a line should be a linear function. Again I think we should just have a different name for linear combinations without any constant term, but I guess we're stuck with it. Finally, I would say that 6 / 2(1 + 2) = 1. This one is subtle, and is less about order of operations and more about the cooperative principle from linguistics. Writing the multiplication without an operator creates a strong visual connection and makes it pretty obvious that is meant to take precedence over the division, like how if I say "yellow bedroom" I am talking about a yellow room with a bed and not a room with a yellow bed. Inserting an explicit multiplication operator would instead give the multiplication and division equal footing, changing the answer to 6 / 2 * (1 + 2) = 9.
the last one is, at the end of the day, about conventions that were created to simplify writing and especially printing math expressions because inline forms are easier and cheaper to print, but can require a lot of brackets to accurately show how the elements of an expression relate to each other. since not all of these conventions have spread to the whole math world, expressions like this viral one don't have one universally accepted answer; it depends on what notation agreements you were taught or are using
@@vsm1456 That's fair, the last one is definitely an outlier since the notation is really completely arbitrary. It just seems to me that people who think the division should occur first would be unlikely to write the equation this way, but maybe I'm wrong about that.
Interesting how mathematicians come up with 9 as an answer. All really depends on whether you do multiplication or division first following the action in parentheses. If you divide first, then you have 3*3, which is 9, but if you multiply first, you get 6/6, which is 1.
I certainly want the function f(x)=1/x to be continuous. I find it annoying that Stewart, a textbook I otherwise like, does not make this convention. It's annoying to have to say things like: the sum of two functions that are continuous on their domains is a function that is continuous on its domain; the composition of two functions that are continuous on their domains is a function that is continuous on its domain; and so on. And the convention that makes f(x)=1/x continuous is consistent with the definition in more advanced texts, where a function is continuous if the inverse image of any open subset of the codomain is open with respect to the domain.
>> *I find it annoying that Stewart, a textbook I otherwise like, does not make this convention.*
It's not a matter of convention; it's a matter of satisfying a definition. If x↦1/x satisfies the definition of continuity, then it is continuous, and that's it. One doesn't get to choose the results that follow from a definition; we aren't allowed to make any conventions about them. This makes the textbook, strictly speaking, wrong on this point. Also, suppose temporarily that the points outside the domain of a function are discontinuity points. Then every function whose domain isn't all of R is discontinuous, even when the domain is an interval, which is crazy; it makes the property of continuity meaningless. This is an indicator that continuity shouldn't depend on points outside the domain.
My takes and reasoning:
*0^0 is indeterminate* and dependent on context. You can't just slap on a 1 and call it a day.
*Natural numbers start at 1*, because we have another set called the whole numbers that starts at 0.
*1/x is not continuous*, because the gap in the domain alone makes it not continuous.
*f(x) = 1 is increasing* because funny.
*3x+1 is linear*, because it takes the form of a linear equation, mx+b. I would call a linear equation through the point (0,0) proportional, and one that isn't, affine. I consider affine and proportional to both be types of linear equations, not things separate from linear equations.
I'm not getting the axiom of choice, so I'ma skip it.
9
It's my turn to share my take and reasoning:

There is an explanation for 0^0 = 1. Start with a×x^n where n is a natural number. This quantity is "a multiplied by x, n times". If n is 0, then this is "a multiplied by x, 0 times", which means "a" is not being multiplied by anything. Since a was not multiplied by anything, we have done nothing to it and it stays the same. Thus we get the equality a×x^0 = a. The only way for this to be possible is for x^0 to be 1. x was arbitrary, so it could be anything, even 0. Thus proved. There is no contradiction if we have 0^0 = 1; wherever it appears, it is 1.

Some countries call the integers starting from 0 the natural numbers and do not call the integers starting from 1 anything; after all, no one needs to call the integers starting from 2 or 5 or 1000 anything. In turn, they denote the natural numbers without 0 by N* or N_>0 (the >0 is supposed to be a subscript). There is nothing wrong with this.

The term "continuous" has a very precise definition, which is the following: for all a in the domain, lim(x→a) f(x) = f(a). 0 is not in the domain of x↦1/x, so it cannot be considered a point of discontinuity. Also, the fact that the graph is not connected doesn't mean that the function is discontinuous; the source of the disconnectedness is the domain. The function transforms the domain without introducing other tears in it.

The word "increasing" also has a definition, and from that definition one knows whether a constant function is increasing, though the word loses its everyday meaning in English.

Linear equations are called that because they can be written in the form f(x) = b where f is a linear function, and the definition of linear is the following: for all x, y in D and c in R, f(x+cy) = f(x) + cf(y). This definition implies that f(x) = mx.
It is true that labels are arbitrary and we could have used them to refer to whatever we want, but if we don't agree on what the terms mean, communication would be impossible. So we should agree on them, and you will find that the term "linear" is used in the way I used it.
@@sdspivey Most definitions of "Counting Number" *don't* include zero. It's a semantic distinction, but you haven't done any counting until you get to 1. Most people these days *do* include zero in the Natural Numbers ℕ, though. It just works better most of the time.
What do you mean "second case"? Are you talking about the binomial theorem? If you are, I want to point out that the binomial theorem doesn't just hold for the real/complex numbers - it holds in any commutative unital semiring, including in settings where limits don't even make sense. And in all such settings, we need 0^0 = 1 in order for the binomial theorem to be completely true. But 0^0 = 1 can be explained in these contexts using the empty product convention.
@MuffinsAPlenty Oh that's fair, I was talking of course about how in the real/complex case the choice for 0^0 is aligned with that limit. Generalizations where limits/convergence are not defined will follow the same rule. But I'd say that writing x^0 is a shortcut/abuse of notation: strictly speaking the notation x^0 is ambiguous and 1 should be written instead, but that would make for uglier expressions where an extra term (zeroth order monomial) has to be treated especially.
0 is not natural, but rather a whole (an extension of the naturals). Every natural number has a unique expression by the fundamental theorem of arithmetic of the form (p_1^a_1)(p_2^a_2)… where p_i is the ith prime. It can also be considered as an infinite list [a_1,a_2,a_3,…]. 1 has the list [0,0,0,…], whereas 0 doesn’t have any unique list. Whole numbers require an extension by including 0, while integers require that plus the inclusion of a special “prime” -1. Extensions of number systems are often huge and expand our way of thinking, which 0 definitely was.
@ set theory starts by defining the wholes, which it mistakenly calls the naturals. Arithmetic (a much older field of mathematics) begins by defining the naturals. 0 definitely is in a class of its own as far as properties go and shouldn’t be confused with being in the “smallest” of the naturals. Set theory just abused the notation and completely missed that 0 was never natural when they were defining its basic axioms. There is an important distinction to make, so just dismissing the arithmetic definition as “positive integers” is really downplaying what they actually are and how they’re defined in context of their set.
@@ClementinesmWTF Your opinion. Not necessarily everybody's. (As most of the issues that Trefor is pointing out, it's a matter of definitions - and not all of those are agreed upon universally) FWIW - Peano arithmetic also starts (axiomatically) from zero. I'm not aware of any other rigorous definitions of the naturals outside those.
@ ok then how do you distinguish the two sets, while also keeping them distinctive from the integers (ie no “non-negative” or “positive” integer)? It’s not an opinion, it’s just fact that they are different sets, with different usages and different histories and contexts. In this case, it is possible to have a wrong opinion, and anyone who includes 0 in the naturals is absolutely wrong.
0⁰ = 1 and 0! = 1, because n^m is the number of functions from a set with m elements to a set with n elements, because n! is the number of bijections from a set of n elements to itself, and because the empty relation from the empty set to itself is a function and a bijection.
Ambiguous questions -> ambiguous answers
0^0 is “undefined.” This means that not enough information is specified to answer the question
1/x is continuous for all x != 0. Continuity is a property associated with points of the domain of a function, not with the function in its entirety.
@@chrisglosser7318 What do you mean by "the function in its entirety"? The function does not exist outside its domain.
@@chrisglosser7318 0^0 is 1. The reason the limiting form 0^0 is undefined is that f(x, y) = x^y is discontinuous at (0, 0).
In my university we would call f(x)=1 non-decreasing
I definitely like that nomenclature because it avoids the ambiguity
@@DrTrefor i'd expect the definition of decreasing be just as ambiguous as the definition of increasing. actually checking Wolfram Mathworld, it does the same distinction between decreasing and strictly decreasing. So according to Wolfram f(x)=1 is both increasing and decreasing, and "non-decreasing" would be incorrect
Non-decreasing and increasing are both important and common enough to deserve different names, rather than calling both "increasing".
In my university as well
"Non-decreasing" and "strictly increasing" feel like the "increasing function" equivalent of using "non-negative integers" and "positive integers" instead of "natural numbers"
The number n^m counts the number of functions from an m-element set to an n-element set, so you could argue that 0^0 counts the number of functions from a 0-element set to a 0-element set (i.e. from the empty set to the empty set).
Any such function would have an empty domain (and range), so the empty set is the only candidate. It also fits all the criteria: it is a relation, since there is no element in it that isn't an ordered pair, and it is functional, since there are no two ordered pairs (x,y) and (x,z) with y =/= z as elements.
So there is exactly one such function, which gives another context where 0^0 = 1 is the sensible choice.
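The counting argument above can be checked mechanically. Here is a small Python sketch (the helper name `count_functions` is mine) that enumerates every function from an m-element set to an n-element set as an m-tuple of codomain values:

```python
from itertools import product

def count_functions(m, n):
    # A function from an m-element set to an n-element set is a choice of one
    # codomain element per domain element, i.e. an m-tuple over the codomain.
    return sum(1 for _ in product(range(n), repeat=m))

print(count_functions(3, 2))  # 2^3 = 8
print(count_functions(0, 0))  # 1: the empty function is the unique candidate
print(count_functions(2, 0))  # 0: no function maps a nonempty set into the empty set
```

`product(..., repeat=0)` yields exactly one empty tuple, matching the set-theoretic fact that the empty function is the one function from the empty set to itself.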
I think 0^0 is also 1. Another place where 0^0 must be 1 is with the growth formula, not A=Pe^rt, but the other one.
I firmly believe that we can rewrite a^b as a*a*a... (b times), assuming b is a non-negative integer. With that in mind, without anything else, sending 0 of those a's into the multiplication would leave us with an undefined/indeterminate value, but we can just multiply everything by 1:
1*a*a = 1*a^2 = a^2
1*a = 1*a^1 = a
1 = 1*a^0 = 1
I know it is a bit naïve, but hey, 1's are powerful in math
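That "multiply everything by 1" idea is exactly how one would code integer powers; a minimal Python sketch (assuming a non-negative integer exponent):

```python
def power(a, b):
    # a^b as "start from 1 and multiply in b copies of a"
    result = 1
    for _ in range(b):
        result *= a
    return result

print(power(2, 3))  # 8
print(power(0, 0))  # 1: no factors are multiplied in, so the initial 1 survives
```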
So you're saying, "If you squint hard enough, any featherless biped looks like a human" 🤔
Funny that **** you was an option 😂
I think it accurately captures the sentiment of the community on this problem:D
Yeah, it depends on your definition of algebraic priority (is distributivity prioritized over multiplication?) and how you interpret the awful ÷ sign. And please, use parentheses properly... or use fractions 😀. Overall it is a very badly written question.
Put me in that camp as well, it's the correct answer. I can't think of many greater losses for mathematics than order of operations becoming such a large part of the general public's concept of what math is. There's a reason no one serious uses the division sign, or even, in most cases, the forward slash. Hell, this is also the reason we killed the multiplication sign.
@@andreyhenriquethomas9554 Arguably that's not a question, because you would distribute (6/2), not 2.
@@Slackow A scientific calculator wouldn't; that's the point that causes disagreement (and calculators do not "settle" the issue).
8:00 The interesting part about this graph too is that in wheel theory, where 0 and absolute infinity are considered multiplicative inverses and 1/0 can be defined as infinity, and both positive and negative infinity join at the same point, this graph actually CAN be defined to be continuous even by including 0 in the domain. The simple fact is, what is and isn't allowed in math is defined by the rules you are working with, and it's entirely possible to make up NEW rules any time you wish and perform math with them.
Math simply doesn't care about what you do with it, it is merely systems of inputs and outputs defined by rules that you impose on them, like creating an algorithm and executing that algorithm. You want there to be a square root for negative 1? Define it. You want there to be zero divisors? Define them. Do you want there to be a new integer between 3 and 4? Go ahead, as long as you're being consistent with your logic, you can do math with it, your only responsibility is to make it clear what those rules ARE so others can reproduce your results.
well said!
Careful: wheel theory, if I recall correctly, defines division differently from _"a multiplied by b^−1"_, which means that 1/0 does not represent the multiplicative inverse of 0.
Also, I don't think that 1/0 is the same thing as the infinity in the projectively extended real line and positive and negative infinities in the affinely extended real line.
@ Inaccuracies of the specific example aside, that misses the point of the entire second paragraph, which is that math is a system of objects and rules. Define the objects you’re working with and the rules that apply to them, and anything you can do with those objects within those rules is possible. If I wished to, I could absolutely define absolute infinity to be the multiplicative inverse of 0 and ignore the concept of positive or negative infinity, and in doing so I could draw the exact same graph but instead of saying the point at 0 is undefined or indeterminate, I could just call it absolute infinity with the inclusion of my rule to define it.
What’s important is being able to replicate the steps, which means people only need to know how I did it. Sure, if you’re following someone else’s rules then you’re bound by them, but that doesn’t mean it’s the only set of rules.
@@GameJam230 Oh, I am sorry that I made you write all of this. I am actually fine with the 2nd paragraph; I have nothing against it. I just wanted to point out some minor details, that's all.
Nevertheless, I will read your response as I am sure it will be interesting.
*After the read*
OK, it is basically a reiteration of the main comment. I just have one disagreement with it.
_"The multiplicative inverse"_ is an agreed-upon term that has a very precise definition, which I am sure you know. You can't just arbitrarily assign the term to the new element 1/0 in wheel algebra, because that means you have basically disregarded the definition of the term.
Remember that definitions and rules also need to be consistent, as you said in your main comment, so they aren't completely arbitrary.
Some definitions and rules are incompatible with each other, for example the fact that you can't have a ring whose 0 element is multiplicatively invertible unless it is the trivial ring. This basically means that in a non-trivial ring we should accept that 0 can't have a multiplicative inverse.
@ I want to be clear that in my second comment I wasn’t implying 1/0 WAS the multiplicative inverse of infinity in any existing system, just that somebody COULD define it to be and then perform math with it.
If I were to leave all existing rules of math required to graph y = 1/X alone and ONLY added a single new rule that said 1/0 equaled “absolute infinity”, the graph would look exactly the same but I’d be saying there is a value it is equal to instead of saying the value cannot be determined.
I’m not saying any existing system DOES work that way anymore, just that if one created such a definition, doing math with it would be entirely possible, same as zero divisors and square roots of negatives.
5:13 thank you so much for saying this. I get so disheartened when a "mathematical" discussion is really just people arguing about some convention...
Hell yeah, affine vs linear mentioned
My favorite one lol
@DrTrefor comes up a lot in Electrical Engineering esp signals, where a linear system must preserve the frequencies present in the input signal, without adding any new ones.
If you had an input V(t)=sin(t), it has one frequency component; if your system has transfer function x + 1, the output is now sin(t)+1. The 1 has added a DC bias to the signal, shifting it up; this is an added frequency (0 Hz).
So the nuance between linear and affine is very important for my field :)
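The difference can even be spot-checked numerically. A quick Python sketch (the helper `is_linear` and its tolerance are my own choices; checking a few samples is evidence, not a proof):

```python
def is_linear(f, samples):
    # Spot-check additivity f(x+y) = f(x) + f(y) and homogeneity f(2x) = 2 f(x)
    # on a few sample pairs.
    return all(
        abs(f(x + y) - (f(x) + f(y))) < 1e-9 and abs(f(2 * x) - 2 * f(x)) < 1e-9
        for x, y in samples
    )

samples = [(1.0, 2.0), (-3.0, 0.5), (4.0, -1.5)]
print(is_linear(lambda x: 3 * x, samples))      # True: f(x) = 3x is linear
print(is_linear(lambda x: 3 * x + 1, samples))  # False: f(x) = 3x + 1 is affine
```

The constant term is what breaks additivity, which is the same reason it injects a 0 Hz component into a signal.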
I guess words mean different things in different contexts. A bounded linear functional need not be bounded (in the function sense, even though a functional is a function).
I only feel strongly about two of them: 0 is a natural number (it's the type of the answers to questions like "how many cows do you own?"); and 0^0 = 1 (the product of no things is 1, just like the sum of no things is 0). People who are thinking about limits should know that the function x^y is not continuous at x=0, y=0. There's nothing wrong with non-continuous functions existing.
I say 0 isn't a natural number also because of counting. N is the basis for countably infinite. If something is countably infinite, we can biject it with N. This equivalently means we can define a first element, bijected with 1, and a second element, bijected with 2, and so on.
And for x^y, any value could work just as well as 1 because none would be continuous.
I'm curious though. Would you also consider ∞⁰ to be 1? The extended real number, possibly the element of our wheel or Riemann sphere.
@@xinpingdonohoe3978 Computer programmers have discovered that we should think of "first" as mapping to the number 0, "second" as 1, etc. We do the same with age, which starts at 0 when you are born. This is a good thing.
The reasons for 0^0 being 1 have nothing to do with continuity. How do you multiply a collection of things? You start with a 1 and you multiply each of the things into it. If you are multiplying an empty collection of things, the answer is 1. This is the case even if the things you don't have any of were zeros.
@alonamaloh indeed. That's why we call them computer programmers, and not mathematicians. Also, not everywhere follows that convention with age.
The empty product doesn't really matter for the multiplicative absorber. For other stuff, it's necessary that a^0=1. It means we can use
a^0×a^b=a^(0+b)=a^b
and tells us that we should be expecting a^0 to be 1, as a^b is an arbitrary number.
But for 0?
0^0×0^b=0^(0+b)=0^b
If b>0, 0^b=0 so any value of 0^0 works.
If b=0 and we assume it's defined, then 0^0 could be either 0 or 1, as they both satisfy x=x², and this empty product argument can't find a valid candidate because both are perfectly valid.
For all other complex b, 0^b is unambiguously undefined, so why are we even trying it?
@@xinpingdonohoe3978 Computer programmers have found better conventions, because getting the conventions right matters more in programming than in math. We would do well to follow their wisdom.
@alonamaloh Not at all. Their field is a lot less rigorous; they accommodate mistakes all the time. We can contemplate taking them seriously when their field can tell us 0.1+0.2 correctly.
Consider a polynomial p(x) = aₙxⁿ + ... + a₁x¹ + a₀x⁰. We expect p(0)=a₀, which is equivalent to 0⁰=1.
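Python happens to agree with this convention: `0 ** 0` evaluates to 1 there, which is exactly what makes a naive polynomial evaluator return p(0) = a₀:

```python
def eval_poly(coeffs, x):
    # coeffs[i] is the coefficient of x^i; the constant term relies on x**0 == 1,
    # which holds in Python even for x == 0 (0**0 == 1 by convention).
    return sum(a * x**i for i, a in enumerate(coeffs))

p = [5, 0, 2]           # p(x) = 2x^2 + 5
print(eval_poly(p, 0))  # 5, which is only right because 0**0 evaluates to 1
```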
Good point, though I guess it's not really all-encompassing.
Consider the series
Σ[n=0,∞] n^(1/ln(n)) /n!
n→0 and 1/ln(n)→0, so the first term takes the form 0⁰/0!
And yet, the series would sum to e², because for each n we'd have
n^(1/ln(n)) = e^(ln(n)/ln(n)) = e
The limiting value is what gets used, so in this case we find 0⁰=e.
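Numerically this checks out: every term with n ≥ 2 equals e/n!, and if the limiting value e/n! is also used at n = 0 and n = 1 (as the comment does), the partial sums approach e². A quick Python check:

```python
import math

# Treat each term n^(1/ln n)/n! as its limiting value e/n!;
# the series then sums to e * sum(1/n!) = e * e = e^2.
total = sum(math.e / math.factorial(n) for n in range(50))
print(abs(total - math.e ** 2) < 1e-9)  # True
```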
@@xinpingdonohoe3978 Yeah, this would not be possible if 0⁰ were anything but 1. Since e⁰ is 1, 0⁰ = 1. If you were to say that this is false, you would have to make the sum start from 1, not 0.
@Hagurmert I don't know what you just said. How does that relate to my series?
@@xinpingdonohoe3978 maybe I confused this one with the series definition of e^x, if you plug in e⁰, the first term will be 0⁰/0!, and all other terms cancel out to 0, but e⁰ is 1 since x⁰ for all real numbers is 1 and this term 0⁰/0! must be 1 and so 0⁰ is 1. Otherwise this series would have to be re indexed so the sum starts from 1. I'm not sure how this is in your sum but we will run into 0⁰ power when we plug in 0's
@@Hagurmert precisely. In his series, 0⁰ appears and equals 1. In my series, 0⁰ appears and equals e.
I would be curious to what the background of the majority was here. A lot of these answers seem to be coming from a perspective of an applied mathematician instead of a pure mathematician. I might be wrong, but it makes me wonder. It would be interesting to see the breakdown of answers based on the specific field of study
I suspect the background of most respondents were at best recreational mathematicians, and probably a fair amount of students. I personally find it hard to believe a survey that had a sizable amount of full mathematicians would have "1/x is not continuous" as a majority. The linked survey in the description mentions it was sent to various places. Also it states "This project was supposed to be a joke, so I did not have high standards of scientific rigor in mind when conducting this survey."
the study says two versions were sent out mostly to discord groups, facebook groups, and reddit groups so mostly undergrads and enthusiasts
@@pixel-hy4jx I checked a bit, and it isn't quite. From what I understood of the methodology portion, the author checked that each participant was associated with a university. So it's not entirely random people. But indeed, it doesn't give much insight into what most of these participants specialise in
@@methatis3013 the author might’ve checked if the person gave an association with a university, but they say that the chart at the beginning only shows the breakdown of those who gave it. That association to a university also doesn’t mean “professor” or even “mathematician”, it just means they are currently at a university in some capacity (ie most likely a bunch of undergrads)
80% of the people said numbers are real so it's likely mostly mathematicians.
I doubt anyone applying math as an engineer or physicist or chemist or computer scientist would answer that numbers are real. Pure math is really just another way of saying "Idealism" math.
Enjoyed watching. Many of these I get asked often as a high school teacher, so it's nice to get a new perspective on some of them
I think the best explanation/argument for why 0^0 (in most cases, not all of course like we saw with limits) should be defined to be 1 comes from our understanding of the empty product.
For example, when we write something like 5 * 2, we can think of this as starting at 5 and then adding 5 again to get 10, but a more helpful way of thinking of it is starting at the additive identity, zero, and then adding 5 to zero two times. 5*2 is adding 5 to zero 2 times, 3*6 is adding 3 to zero 6 times, etc.
Understanding this explains why 0 * x = 0 for all x since we add x to zero 0 times so we stay at zero. The same process works for exponentiation.
People will explain that x^0=1 makes sense because 1=x/x=x^(1-1)=x^0 for nonzero x and they’ll also use this to justify why 0^0=1 cannot be true since you would be dividing by zero. First of all notice that this logic would imply that no powers of 0 can work since for example 0^2=0^(3-1)=0^3/0, but of course we know 0^2=0 so using this method doesn’t make sense for zero so it shouldn’t be used in the 0^0 case either.
Next, the empty product can be used to understand why x^0=1 so we don’t have to use the division method. Just like in the multiplication case, we start at the multiplicative identity when doing exponentiation, which is 1 in this case, so we interpret 5^3, for example, as multiplying 1 by 5 three times, or 1*5*5*5, so 5^0=1 since we multiply 1 by 5 zero times. This is no different for 0. 0^4 would be multiplying 1 by 0 four times, or 1*0*0*0*0, so 0^0=1 since we multiply 1 by zero 0 times.
The example you gave with the binomial theorem and Pascal’s triangle is good since it shows why this definition of 0^0=1 is useful as in others cases like Taylor series. But I think this explanation is the best because it explains more why logically it makes sense 0^0=1 fundamentally rather than why it useful for it to be this way in certain contexts.
For sure I love the empty product interpretation!
This is actually a very nice argument. I had never thought of it this way, and was uncomfortable with calling it 1.
you can even use 0^0=1 in calculus. But then keep in mind that (0,0) is a singularity of f(x,y)=x^y. It's not differentiable there and that's a shame for an analytic function 😂
@@kruksog It still goes back to 0^x vs. x^0 which was the problem from the start.
As a hobby I'm developing a theory that x^0 is 1+ an infinitesimal multiple of log(x). This infinitesimal term can be neglected anywhere except x=0. Needless to say it's not going very fast.
@@Milan_Openfeint You can define x^0 in an arbitrary monoid though-not so for 0^x. And the monoid definition leaves us no other option except 1. We _can_ try to make piecemeal definitions for idempotent x taken from a semigroup and end up with a collision, but I hope it’s pretty evident why it’s a bad way to generalize.
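The empty-product reading discussed above can be sketched in a few lines of Python; the `power` helper is hypothetical, using `reduce` with 1 as the initial (identity) value so the empty case falls out automatically:

```python
from functools import reduce
from operator import mul

# x**n as "start at the multiplicative identity 1, multiply by x n times".
def power(x, n):
    """x**n for non-negative integer n, via repeated multiplication from 1."""
    return reduce(mul, [x] * n, 1)

assert power(5, 3) == 125   # 1*5*5*5
assert power(5, 0) == 1     # multiply 1 by 5 zero times
assert power(0, 4) == 0     # 1*0*0*0*0
assert power(0, 0) == 1     # the empty product: never multiply, stay at 1
```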
In India, we were taught that a number system starting with 0 is a whole number system, whereas a system starting with 1 is a natural number system.
I think the distinction between whole number and natural number in that way makes a lot of sense, but unfortunately there isn't really a popularized “blackboard bold” symbol to denote the whole numbers like there is for the naturals
@@yfidalv Wouldn't you say that negative integers are whole numbers?
No, negative integers would not be whole numbers. Also sometimes people can use a blackboard W.
@@yfidalv there are enough symbols. They can frequently use N and Z+ for {1,2,...} and W or N_0 or Z+_0 for {0,1,...}.
Alternatively, we could just use {1,2,...} and {0,1,...}.
We could even use either Z∩[1,∞) or Z∩(0,∞), and then Z∩[0,∞).
@@IsaacDickinson-tf8sf The word "integer" literally means "whole number", so saying that half the integers are not whole numbers (handwaving what "half" means for an infinite set) makes a mockery of the etymology. Which admittedly isn't the most compelling argument, considering how many words are used to mean things other than what their derivation would suggest.
There are plenty of people who use "whole numbers" to mean "numbers without a fractional part".
And then there are "counting numbers" which often start with 1, but some people like to be able to count a null set too. Some people like to define counting numbers as excluding zero while natural numbers include it, others like to define the two as synonymous terms, and there are even some who include 0 as a counting, but not a natural number.
It should be noted that 0^0 = 1 is the only non-contradictory value for 0^0 that agrees with the usual rules of powers (a^(n+m) = a^n * a^m and the all so mighty a^0 = 1).
The only reason 0^0 is sometimes called undefined is that the function x^y is discontinuous at (0,0); however, many well-defined functions are discontinuous at some points :)
Also if we assume 0^0 = 1 the discrete fourier transform of a grid-aligned cosine will give exactly 0^x for a whole spectrum, which generally makes the whole DSP convention (sinc(0) = 1 and the rest of normalized trigonometry) consistent unlike other values.
So yeah, imho 0^0 = 1 is by far the most elegant convention in most use cases.
Sure but 0^0 = 0^(1-1) = 0^(1 + -1) = 0^1 * 0^(-1) = 0^1 / 0^1 = 0/0
So clearly some rule won't apply unless we also say 0/0 = 1
@@awesomedavid2012 your proof also shows that 0^1 should be left undefined:
0^1 = 0^(2-1) = 0^2/0^1 = 0/0.
So clearly the argument doesn't work.
@@awesomedavid2012 you're using the rule wrong; there's no problem with 0^0 itself.
@@awesomedavid2012 Do you have a problem with 0^2=0? Look: 0^2 = 0^(3-1) = 0^3 * 0^(-1) = 0^3 / 0^1 = 0/0. If you multiply and divide by zero, you'll get nonsense. But this doesn't mean there is a problem with the thing you started with.
I really like this video because it doesn't sensationalize about the topic - mathematicians aren't at odds about any of these, they're just trivial edge cases in which each discipline has chosen a convention that works for them. One of the great things about math is that it's both discovered and invented, and these examples illustrate that perfectly.
Fwiw, it's not always because of specific disciplines... some conventions were made for cultural or political reasons (e.g. New Math redefining whole numbers, and I think whether 0 is natural or not).
For the 6 / 2(1+2) thing, I’ve noticed I have a really weird way of ordering.
The order is “do whatever is written closer together first”. Which means, “6/2 (1+2)” and “6 / 2(1+2)” are interpreted differently for me.
I think it’s because when doing math work with pen and paper, often I just skip a bracket here or two if the process is extremely obvious. In these cases, the way how close I write everything together usually implies priority of things.
This is super evident with “6 / 2x”, where it’s super clear that it’s dividing by “2x”, because there is just never a reason to write down “6/2” times “x” as a step in a solution. So the bracket around “2x” often just becomes implied. This happens a lot when the paper doesn’t have enough space to write a horizontal division line. So it’s also important to draw the diagonal division line reeeeeally long to indicate we do it last.
Not sure if I get your point. This problem is all about conventions that were made to simplify writing, and especially printing, because it was so much easier and thus cheaper to print inline expressions than to deal with a horizontal line every time there's a division; leaving out some brackets made it shorter and less cluttered, at the cost of every reader now having to remember some additional notation rules.
But no matter how you explain each convention, the key thing for this viral problem is that these conventions didn't spread to the whole math world, so different people interpret this expression differently, depending on how they were taught in school.
@ Honestly I’m just talking about my personal styles. The way I write and interpret math stuff.
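One way to see the ambiguity concretely: Python has no implicit multiplication, so writing the viral expression out forces a choice between the two readings people argue about. A quick sketch:

```python
# The two readings of "6 / 2(1+2)", made explicit. Python evaluates / and *
# left to right with equal precedence, so the parenthesization decides everything.
assert 6 / 2 * (1 + 2) == 9.0    # "6/2, then times (1+2)" reading
assert 6 / (2 * (1 + 2)) == 1.0  # "(1+2) binds to the 2 first" reading
```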
🟠🟠 The calculation of 6 / 2(1+2) is logically done by computing 2(1+2) first.
2 is also a character (numbers are included in characters) and 'a' is a character.
Why do we calculate 6/2 when 'a' is replaced with 2, but calculate a(1+2) when 2 is replaced with 'a'?
This is illogical because it's performing different actions in identical situations.
We use 'a' instead of 2 either because we don't know the value of 'a' yet, or we do know it but use it for computational convenience.
Therefore, the same formula should be solved using the same method.
Both 'a' and 2 are characters. Numbers are included in the set of characters.
People are being taught that 2(x + y) means 2 * (x + y). It doesn't. It actually means (2 * x + 2 * y), or ((x + y) + (x + y))
So 2(1 + 2) means ( 2 * 1 + 2 * 2), which is 6
The equation is now 6/6, which is 1.
Here is a simple explanation of how people get it wrong:
If x = y, then: x/x = 1, x/y = 1, y/x=1 and y/y = 1
So, let's say that x = 2(5) and y=5(2)
Obviously x / x = 2(5) / 2(5) = 1, but people rewrite it as 2 * 5 / 2 * 5 and end up with 25
x/y is rewritten as 2 * 5 /5 * 2 and ends up with 4 :/
Likewise a/bc means a / (b * c) not a / b * c
Interestingly, Mathematica is often used by some people to "prove" that a/bc is a / b * c, but they neglect to read the documentation, which states that while a/bc does indeed mean a / (b * c), they _chose_ to ignore this and implement it as a / b * c.
The problem is that people believe that PEMDAS is "the order of operations" ... it's not. It is "the order of BASIC operations", i.e. + - / * ONLY.
2(1 + 2) is essentially a function, which is NOT covered by PEMDAS.
@@Kyrelel > Obviously x / x = 2(5) / 2(5) = 1 and then 2 * 5 / 2 * 5
I would instead write (2 * 5) / (2 * 5), since I am unpacking the "/".
Does this circumvent all of possible problems, or do you have another example?
Great video! One more that deserves to be here is what ab means if a and b are permutations: for analysts, it means do b then a (i.e. go from right to left); for algebraists it means do a then b (i.e. go from left to right).
it was definitely time for a video like this…thanks!
0:10 At the source, "mathematician" appears to be very loose: basically anyone with an interest in math entertainment (e.g. many responses were recruited from social media and math entertainers' fan groups), not simply professional mathematicians.
yeah the moment "mathematicians" said 1/x is discontinuous I knew this was sketchy
Apparently a bunch of people from reddit and math Discord servers were asked. No mathematician will say that 1/x is discontinuous.
They also said 0⁰ is indeterminate. It's an expression; it's either defined or undefined. What does indeterminate even mean? That word only applies to limits.
@@Noname-67 Yeah I caught that too
I'm one of the mathematicians who filled out the survey and said it was discontinuous. Granted I was not thinking precisely about the question, considering there are 100 questions to get through. But, I don't think you can expect people, whose main interest is algebra, combinatorics, logic, or literally any other subject besides theory-heavy analysis, to remember the exact definition of continuity. And besides, in some areas of study, people consider functions up to an equivalence class (agreeing almost everywhere, i.e. other than a set of measure zero), and in that case 1/x is not continuous. I probably would say it is continuous now, after remembering the topological definition involving preimage of an open set, but again, there are 100 questions, and I'm not treating it as an exam where each question is very serious.
Please do not discount the author's work or the survey. I know numerous people in my department, and advanced undergrads/grad students from other universities, who completed the survey. The math Discord servers that the surveys were shared in are pretty elite.
For the linearity property, I've seen some schools use the term proportional instead.
Generally speaking, saying y is proportional to x means that y = mx, where m is the constant of proportionality, which can be interpreted as the slope when graphing. Since this interpretation has no y-intercept, it fits the definition of linearity.
I am consistently part of that 1/3rd in the second half of the video and surprised that 2/3rds disagree. Especially baffling to me is that 2/3rds of people apparently think f(x) = 1/x is not continuous, based on a perceived discontinuity at a point at which it is not even defined! Part of the problem is that something is missing in the question statement; rather than just giving an expression like f(x) = 1/x, to define a function you should really also give the domain and codomain that it maps from and to. (This is part of the formal definition of a function in set theory.)
Presumably in this case f(x) = 1/x is meant to be a map from R - {0} to R, and that map is very clearly continuous. Additionally, even though it can't be continuously extended to a map from R to R, it can be extended continuously to a homeomorphism of the extended real line, or of the Riemann sphere. I think the only way you might think it's not continuous is if you take the "a continuous function is one you can draw without lifting your pen" intuition too literally. Given how broad "continuous" becomes in topology, that intuition isn't always too helpful anyway.
Maybe the axiom of choice is like Euclid's parallel postulate. It's very specific and contrived and leads to a mathematical object that is very well-behaved and familiar, but without it there can be other objects that are useful and less restricted. Although, I've heard people say that AoC leads to paradoxical conclusions, and maybe Euclid's 5th postulate doesn't really do that. They are both oddly specific axioms though.
One point when it comes to the question of a continuous function:
In real analysis we may often use a topological definition of a continuous function:
a function f is continuous if and only if the preimage (with respect to f) of every open set is an open set.
So, using this definition (which is standard in most real analysis books), if f(x) = 1/x, it is possible to show that a preimage of an open set will, again, be an open set (in the domain of f). So I don't think there is any discussion of whether 1/x is continuous. It simply is.
Ya absolutely from a topological perspective this isn’t really controversial. But it sure surprises calculus students that it would ever be continuous!
@@DrTreforyes just like me😢
@@DrTreforI agree it may be counterintuitive. But it is no longer a question of "definition". It is now a question of what is actually correct and incorrect. The topological definition of continuity is the strongest one we have, and I think most people dealing with real analysis would agree. I think this again relates to my previous comment that is curious about the breakdown of the background of people who were a part of this research. It seems to me like the ones who would say f is not a continuous function would usually be applied mathematicians, while those who would call it continuous are usually pure mathematicians. This is, of course, just my conjecture, but I would be curious to see if my guess is right
@@methatis3013 It is still about definitions. The topological notion of continuity being the "strongest" in of itself requires some agreed-upon idea about what "strongest" means. At the end of the day, _any choice of words or symbols is arbitrary_ and thus some form of "definition". There can be no notion of "correct" when it comes to _writing_ mathematics, there can only be convention.
@@methatis3013 I don't know how it is in other countries, but in Germany, all students who study anything with mathematics, be it electrical/structural engineering, even those studying at schools of applied sciences, which don't focus on theory that much, learn the formal definition of continuity, at least over the real numbers.
The way I visualize 0^0=1 comes from a conversation I had with a friend several years ago while wondering how negative exponents work. This only works for integer exponents, I think, but you can visualize n^x as a number line that starts at 1 for exponent 0; then you multiply the 1 by n, x times, for positive exponents, and divide it by n, |x| times, for negative exponents. So, for example, 2^3 is 1 * 2 * 2 * 2 = 8. 2^0 is just 1 with no step in any direction. 2^-2 is 1 / 2 / 2 = 0.25. As a result, 0^0 is also 1 with no step in any direction.
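A small Python sketch of this number-line picture (the `int_pow` helper is hypothetical, and for negative exponents it assumes n ≠ 0):

```python
# Start at 1; multiply by n for each positive step, divide by n for each
# negative step. Zero steps means we never leave 1, so 0**0 comes out as 1.
def int_pow(n, x):
    result = 1
    for _ in range(abs(x)):
        result = result * n if x > 0 else result / n
    return result

assert int_pow(2, 3) == 8      # 1*2*2*2
assert int_pow(2, 0) == 1      # no steps taken
assert int_pow(2, -2) == 0.25  # 1/2/2
assert int_pow(0, 0) == 1      # still no steps, so still 1
```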
asking if f(x) = 1 is increasing is quite literally like asking if 0 is positive
That is true. In France, they say that 0 is both positive and negative. If they want to say a number x is greater than 0, they say x is "strictly positive." Similarly, if we wish we can say f(x)=1 is an increasing function and that f(x)=e^x is strictly increasing.
I'm having a lot of trouble with the fact that f(x) = constant can be described as increasing. Surely the value of mathematics lies in its ability to describe and model the universe and our actions and experiences in the universe. I know a little bit about mathematics and a little bit about words. I thought I knew what the word "increasing" meant but now... I imagine someone having the following experiences:
"Boss, remember when we talked last month you said that I'd get a pay increase, but I just checked and it's exactly the same".
Boss: No. no. no. We totally increased it. Go out to dinner and celebrate.
At dinner: "I ordered my steak well done, but this is undercooked"
Server: "I could increase the cooking time" *picks up the plate and puts it back down on the table*
Diner: But it's the same!
Server: no. I increased the cooking time.
Arriving home from dinner to find the house on fire: "increase the water flowrate!!!"
Fireman: yep. Done.
Now, with no house to live in... *Boards plane to go live with relatives in another city*. Copilot to pilot: "We're about to fly into the side of that mountain. Better increase our altitude"
Pilot: done
Co-pilot: but ...
*Crashhhhh*
On mountain side surrounded by debris, paramedic 1 says: "increase the pressure on that wound or he'll bleed out".
Paramedic 2: ok. Done it. But, he's ....
🤔🤷🏻♂️
The trouble is that if you don't count f(x)=constant as increasing, you also don't count a staircase as increasing because it has horizontal bits in. The constant is the weird corner case of allowing functions to still count as increasing even if they have flat bits in.
@rmsgrey I'm still not getting it. 🤔. I've certainly heard of stepwise functions or staircase functions. Surely they don't increase over the interval where they have a constant value. If they step up to a higher value then they've increased. I was talking about a function with a constant value for its entire domain of inputs, Because I thought that's what the video was referring to. I'm not sure how a client would explain to a builder that they want a staircase just made out of flat bits, but no risers, but they want it to reach to the next floor up.
@@russelleverson9915 If you try coming up with a simple mathematical definition of an "increasing" function, the simple candidates are "a function f(x) such that whenever a < b, f(a) ≤ f(b)" and "a function f(x) such that whenever a < b, f(a) < f(b)". The first counts constant functions (and staircases) as increasing; the second rules out the staircase along with the constant.
relations are usually assumed to be strict by default, yet for some reason the inverse is true here.
consider a function the derivative of which is not constant ; essentially, a function with more variations than an affine one.
then, were the function to generally increase, but plateau on some interval, would you prevent yourself from referring to it as increasing just because it does not strictly do so along its whole domain ?
defining increasing as f(b) ≥ f(a) for all b ≥ a allows for such a notion to hold, by describing the general tendency.
this does have the side effect of considering a constant function to be increasing and decreasing, although not strictly so. 1 ≥ 1 ≤ 1, so what ? one ought to make use of the appropriate comparisons.
i for one hold the belief that a shift to symbolic notation in place of ambiguous natural language expressions would prevent such issues from arising.
I think I got it! A definition that solves all these issues. We consider staircase function as increasing because they increase somewhere even though they rest constant over some intervals.
We should call a function *increasing* if it increases somewhere. This will rule out constant functions. The problem is, some functions are increasing over some intervals but also decreasing over other intervals. Should we call them increasing or not? I will just call them increasing, and use other terms to describe functions that are increasing and nowhere decreasing
Next, call the functions such that whenever a < b we have f(a) < f(b) *strictly increasing*.
5:37 OMG. I was in a Comp Sci (Python) class and I was programming the Monty Hall game. In my program, I commented that "0 is not a natural number" because this helped my algorithm randomly select another curtain using mod. The professor took off 0.5 pts and sent me dissertations explaining why 0 is a natural number. I know my major was math, but in the back of my head I was like, "This is a doggone basic computer science class, not math!"
My background is in signal processing. I'm left confused by 1/x being called continuous. I was taught that a function f is continuous at a point a if lim_{x→a⁺} f(x) = lim_{x→a⁻} f(x); that definition does not require a to be in the domain of f. By this definition 1/x does not appear to be continuous at x=0.
Continuity also needs a third value, f(a): both one-sided limits must equal it. Otherwise a function with a removable discontinuity (like a single point missing) would be called continuous by your definition.
Continuous means continuous on its entire domain. The function 1/x has R\{0} as its domain, and it's continuous at those points. But it has no extension to R that keeps it continuous.
There is one more thing to add about 0^0. I agree that it can be used to represent an indeterminate form; however, the algebraic definition of x^n, where n is a non-negative integer, is x multiplied with itself n times. x^0 is therefore the empty product, which equals 1.
I find it annoying that people focus in on the continuity of x^y as if this is the sole arbiter of function definitions. If we have a sensible definition like the one for x^n and it has a natural expansion to 0^0, then that should be the definition, unless we find that something else works better, like 1 not being prime.
It is also funny that there are a lot of people insisting that it is undefined only to then use it as 1 in the formulas that you refer to. Taylor series is probably one of the best examples as x is clearly meant to be variable and 0 is usually in the domain of the base number. I doubt many of these people write T(x) = f(a) + \sum_{n=1}^\infty ...
That being said, I don't think this is an argument for 0^0 = 1 so much as a confirmation that the sensible definition is useful. If it had turned out that 0^0 = 0 worked best for most formulas, then that would be grounds for changing the definition.
Completely agree with you!
I want to point out something tangentially related to your post. You mention 1 not being prime. It's kind of funny, because if you use the abstract algebra definition of prime, and then expand it from binary products to arbitrary finite products, the empty product allows us to see that 1, and units more generally, aren't prime without explicitly excluding them.
Take this definition of prime element in a ring: Let p be a nonzero element of a ring R. Then p is prime if, whenever p divides a product of elements in R, then there exists a factor of that product which p divides.
All units (and only units) divide the empty product; however, the empty product has no factors. So units can't be prime if we take the above definition of prime.
It's astonishing to me how many of those situations where people give explanations like "Well, you would expect [insert math thing here] to be true, but _out of convenience,_ we say it isn't true in this case" can actually be explained by empty operations.
@@MuffinsAPlenty Nice! Yeah, most exceptions are a sign that a better definition is lurking out there.
Not sure that I have seen this one before. There is a more crude one if we stick to the natural numbers: A number is prime if it has exactly two divisors. Still a bit awkward.
I agree.
Also notice that every single time a math problem calls you to use 0^0, it works best when it's 1. You *only* run into issues when you are looking at x^y when x and y are close to 0 and assume it's close to 0^0.
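The "exactly two divisors" characterization mentioned a few comments up is easy to sketch in Python; `is_prime` here is a hypothetical, deliberately naive helper that excludes 1 without any special-casing:

```python
# A number is prime iff it has exactly two divisors. This automatically
# excludes 1 (one divisor) and 0, with no explicit exception needed.
def is_prime(n):
    return len([d for d in range(1, n + 1) if n % d == 0]) == 2

assert not is_prime(1)  # 1 has only one divisor, so it is not prime
assert [n for n in range(2, 20) if is_prime(n)] == [2, 3, 5, 7, 11, 13, 17, 19]
```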
I asked Wolfram alpha to calculate 0^0.
The result = { undefined }.
Therefore, not enough info, you have to define it. The Wolfram has spoken🙂
I hope you are joking.
@TH-cam_username_not_found no
@@JahacMilfova OK. Then I'll say one thing:
A calculator is programmed to give an answer but it doesn't tell you why it is true.
Would a calculator that says 1÷0 = 0 be correct?
@TH-cam_username_not_found Obey the king of "calculators": Wolfram Alpha/Mathematica. Its Majesty knows best!
@@JahacMilfova But it just outputs an answer without giving any explanation as to why it is correct.
And it is possible that it gives a wrong answer that can be proven wrong by simply going back to the definitions and analysing them.
we have a term that includes 0: whole numbers... I agree that "whole numbers" seems relatively confusing due to the concept of "whole"... whole numbers are nonnegative integers and natural numbers are positive integers (counting numbers)... this categorization works, imperfect with terminology as it is...
At 12:35, I hope I am understanding this correctly: essentially, assuming the axiom of choice means that a function with an infinite domain of objects, f(x) = y, has, for every input x, strictly one output y. Couldn't that be argued as an extension of transitivity? E.g. if the axiom of choice is not true, then f(x) = y, f(x) = z, and y ≠ z is a possible combination of equalities, since an input x does not necessarily need to output y; but this would contradict transitivity, since if f(x) = y and f(x) = z, then y = z
Not exactly. It's saying that the Cartesian product of an arbitrary collection of non-empty sets is a non-empty set. This sounds natural enough, until you start to see how it can be abused to prove very strange things. Perhaps the most shocking one is the Banach-Tarski paradox. A favorite of mine is that the function from real numbers to real numbers f(x)=x is a sum of two periodic functions.
In the case of 0^0, it is instructive to remember, in general, that x^0 = x^(1-1) = (x^1)(x^(-1)) = x/x = 1 (x ≠ 0).
0^0, then, can be expanded:
0^0 = 0^(1-1) = (0^1)(0^(-1)) = 0/0 = indeterminate.
This should hopefully be a sufficiently convincing argument as to what value 0^0 objectively takes.
But this uses the rule x^(n-1) = x^n/x which is not valid for x = 0. If you are supposing it is valid even for x = 0, we can use the same reasoning to prove that any power of 0 is undefined
0^(n-1) = 0^n/0 = 0/0
Another thing; indeterminate is just a term used to describe limit forms, we are dealing with an algebraic expression and not a limit. Either an expression is defined and have a value, or it isn't.
Interesting. In the school I attended, we rarely used the term "natural" numbers. Just whole numbers and counting numbers. And 0 is a whole number but not a counting number so that provides a difference.
Though I have seen books call counting numbers and natural numbers the same thing
Amazing man! I thought I knew them but now I know that I don’t know
Richard Borcherds said it best: what is 0^0? Well, it is either indeterminate, or it is whatever you want it to be, as long as you define what your symbols mean. He further mentions that this is not the right question to ask; he asks what the most useful definition of 0^0 is
Really nice video. Thanks for making this.
I think starting the natural numbers at 1 gives you the notational advantage of being able to just use N_0 when you want to refer to {0,1,2,...}. All the profs using N for N_0 I've seen always have to add some sort of ≠0 condition or exclude it some other way, which just makes some exercises/statements/proofs a bit bulkier.
But at the end of the day you just get used to the current context (even if slightly annoying at first). Notation will always be different in math. But the math behind it will stay the same :P
0^0 is obviously 1/e from the definition of exponentiation /j
0^0 = exp(0·ln(0)) = exp(0·(−∞)) = exp(−(0·∞)) = exp(−(0·1/0)) = exp(−1) = 1/e
0^0 = 10^(0·lg(0)) = ... = 10^(−1) = 1/10.
Wait, in a wheel algebra, what would this become? exp(-1)=1/e, or exp(⊥), which almost certainly just equals ⊥?
When learning pure mathematics, swapping axioms and definitions in and out comes pretty naturally.
Think of the parallel postulate in geometry. You can take it to be true and get Euclidian geometry, or leave it out and get other equally valid and interesting types of geometry.
The axiom of choice leads to the Banach-Tarski paradox. So if you especially want your higher-dimensional measure theory to be well-behaved, it might be best to leave it at home. On the other hand, it is super powerful and useful in topology and other fields, so bring it along for those days.
I'm pretty confused on the axiom of choice and marbles from a bag thing. Is there a finite set of marbles in each bag? What's the question here, eventually?
For this to be anything that causes any disagreement that makes sense, I have to guess it's a question along the lines of the probability of picking a duplicate marble in an ordered/ordinal set from an infinite cardinality of bags, or something like that. Am I on the right track here?
In the standard set theory ZF, when not adopting the axiom of choice, it's consistent for there to be a countable collection of two-element sets (think infinitely many pairs of socks), but no (choice) function which picks one element of each of them. Relatedly, ZF does not prove that a countable union of two-element sets is countable. Adopting the axiom of countable choice resolves this, but then there are still similar examples that are not resolved. The situation becomes a bit more intuitive in the following scenario: Take the set R of real numbers. Consider the set S := P(R)\{{}}, i.e. the set of all non-empty subsets of R. In particular, by definition, S is a set holding only sets that each contain some element. However, without adopting further exotic axioms, it's not possible to name a function f that takes a member U of S and maps it to some value f(U) contained in U. One way of looking at this is: being able to write down a \forall symbol and establish that each set U contains a member does not help much in providing a function (or functional description) that "uniformly" (i.e. in some prescribed way) constitutes such an element picking. Said by analogy: everybody in the classroom having a name is not the same as there being a list at the teacher's desk with everybody's name.
Let X be the set of all non-empty subsets of the real numbers, and R the set of real numbers. Is there a function from X to R, that takes every nonempty set of real numbers S to a real number that is an element of S? Since by definition every nonempty set of real numbers has a real number as an element, it seems like the answer to be yes. But if you try to define a specific rule for this function, you quickly find it's not so easy to find one. The axiom of choice allows us to assume there exists such a function, without having to give a rule for a specific function.
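To make the contrast concrete, here's a small Python sketch (the function name `choose_nat` is my own, purely illustrative): a uniform choice rule is easy when every set has a canonical element, as with nonempty sets of naturals, but the obvious candidate already fails for open intervals of reals.

```python
def choose_nat(s):
    """A uniform choice rule for nonempty sets of natural numbers:
    every such set has a least element, so min() always works."""
    return min(s)

# No analogous rule is available for arbitrary nonempty sets of reals.
# For instance, min() fails on the open interval (0, 1), which has no
# least element. The axiom of choice asserts that a choice function
# exists anyway, without providing any rule like the one above.
```

The well-ordering of the naturals is doing all the work here; the axiom of choice is exactly what lets you proceed when no such canonical ordering is at hand.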
We could say that 0^0=1 and also 0^0 is an indeterminate form. Saying 0^0 is an indeterminate form does not mean it is undefined, rather it is a shorthand way of saying the function x^y is not continuous at (0,0).
So you are saying that we should distinguish between the algebraic expression 0^0 and the indeterminate limit form 0^0? Well, that's also my stance.
What I don't understand is why are you using the same symbols to denote them? They are not conceptually the same after all.
@@TH-cam_username_not_found That happens sometimes. We can use (2,5) to designate a point in the plane, or it can designate the open interval from 2 to 5. We have to determine which it means from the context. Perhaps there's another notation we could use for the indeterminate form 0^0, indicating the 0 in the expression is some sort of limit that is approaching 0. I can think of some ideas (that I don't know how to type into a youtube comment), but I don't know of a standardized way to do that.
@@roderictaylor
What makes it worse is that this abuse of notation confuses calculus students and leads them to misconceptions like 1/0 = inf or 0^0 = undefined. It also obfuscates the fact that limits are conceptually different from function evaluations and that the two are unrelated to each other.
I have seen someone in a comment using the notation (→0)^(→0). It is sensible and intuitive.
By the way, there is the French notation ]0,1[ for open intervals. I think it is better.
@@TH-cam_username_not_found That's a good notation. One can write things like (→1)/(→0+) = infinity. One can think of →0 as a sequence converging to 0 but being non-zero, and justify limits this way.
In my real analysis class, every statement about continuity is "f is continuous at x or on a set S" or something like that. this avoids the ambiguity
Very interesting video. Thanks for sharing it. It has put me thinking if these ambiguities in mathematics reveal anything about its nature. Do these ambiguities point to gaps in formalism and to the idea that math is invented and not discovered?
In some sense I think the words form this video discovered and invented are the ones that aren’t particularly precise. A formal logical system is much more precise
I think for 0^0 the right answer would be "underdefined": You *can* make sense of the value but it requires extra information about the context.
If you take the function
(f[x]-f[0])^(g[x]-g[0]) and perturb it a little bit around 0, you get
1 + Log[x f'[0]] g'[0] x + O(x²)
So if x=0, *usually* this will just be 1, but if you manage to get two functions such that Log[x f'[0]] g'[0] blows up to counteract the x=0, you can get other answers.
There's no reason to invoke limits when defining 0^0. You can raise any element of a multiplicative group to a natural power, even if there is no notion of limit: Start with the multiplicative identity and multiply it by the element as many times as the exponent indicates; if the exponent is 0, multiply by the element zero times, which leaves you with the multiplicative identity.
x^y is not continuous at 0^0, so you have to be careful when computing limits.
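A minimal Python sketch of that definition (the name `nat_power` is mine): start from the multiplicative identity and multiply by the base as many times as the exponent says. With an exponent of 0 the loop body never runs, so the result is 1, even for a base of 0; Python's built-in `0**0` happens to agree.

```python
def nat_power(base, exp):
    """Raise base to a natural-number exponent by repeated multiplication."""
    result = 1              # start from the multiplicative identity
    for _ in range(exp):    # multiply by the base, exp times
        result *= base
    return result

# nat_power(0, 0) leaves the initial 1 untouched, so 0^0 = 1 here,
# while nat_power(0, n) for n >= 1 multiplies by 0 at least once.
```

Note that no limit is involved anywhere; this works in any monoid, exactly as the comment above describes.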
@DrTrefor Uhm, you just made a mistake regarding 6÷2(1+2). With the obelus, this is always 9. Writing 6/2(1+2) carries the implication of a fraction rather than a division, which would then be 1: the numerator would be 6 and the denominator 2(3) = 6, making the whole thing equal 1. Using an obelus means 6÷2(3) cannot be read as a fraction bar binding the implicit multiplication, and therefore it can never be 1. However, an obelus can also mean subtraction, which, using simple PEMDAS, comes out as 0, because 6−2(3) = 6−6 = 0.
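For what it's worth, programming languages sidestep the ambiguity by requiring an explicit `*` and evaluating `/` and `*` left to right, which gives 9. A quick Python check of both readings (the implicit multiplication written out by hand):

```python
# 6 ÷ 2(1+2), with the implicit multiplication made explicit:
result = 6 / 2 * (1 + 2)     # left to right: (6/2) * 3

# Reading 2(1+2) as a unit bound under a fraction bar instead:
alt = 6 / (2 * (1 + 2))      # 6 / 6
```

The whole viral argument is really about which of these two parenthesizations the original notation denotes.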
It would be interesting to see mathematicians thoughts on pi vs tau, base 10 vs base 12 (vs base 16), the sum of all natural numbers (ie divergent series or -1/12?), whether 0.999... is equal to 1 or infinitesimally away from 1, and whether mathematics was invented or discovered.
Couldn't you define increasing such that for any x you can choose both y > x such that f(y) > f(x) and z < x such that f(z) < f(x)? This would take care of the strictness issue but still define f(x) = 1 as not increasing.
0^0 just purely by itself with no context should be defined as 1. It seems satisfying
5:07 The argument for 0^0 being 0 is that, when a > 0, the exponential function a^x is defined to be continuous. 2^(pi) is defined to be the value that makes 2^x continuous at pi. This makes a^x continuous on its domain. If you believe 0^x should also be continuous on its entire domain, then 0^0 should be 0.
This is just an example of a limit, which is something you addressed. But the argument for this particular limit being the “correct” one to consider is that it is aligned with the definition of the exponential function for strictly positive numbers.
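The two competing limits at (0,0) are easy to see numerically. Along the diagonal path x^x the values creep toward 1, while along the axis 0^y they are identically 0 — a quick, illustrative Python check:

```python
# Approaching (0,0) along y = x: x^x -> 1
along_diagonal = [x**x for x in (0.1, 0.01, 0.001)]

# Approaching along the axis x = 0: 0^y = 0 for every y > 0
along_axis = [0.0**y for y in (0.1, 0.01, 0.001)]
```

Two paths, two different limits: that is exactly what "x^y is not continuous at (0,0)" means, and why the limit form 0^0 is called indeterminate regardless of what value the expression 0^0 is assigned.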
I wouldn't define a^x over the irrationals just because it extends a function continuously. I would denote the outputs of the continuous extension by something else, just as one denotes the output of the exponential function by exp(x), and I would keep the notation a^b for the usual values.
For example, the sinc function extends x↦sin(x)/x at 0 but no one is claiming that 0/0 = 1
Many funky math paradoxes and disagreements involve 0 and 1 for some reason
"Additive and multiplicative identity"
That sums up the reason for that
My grade school math textbooks made a big deal out of defining the 'natural' numbers and 'integers' and quizzing ad nauseam on which ones did and did not include 0. There was a term for positive integers and another for non-negative integers, one of which was 'natural'. I don't remember the other one, or which was which. It would be nice if grade school math textbooks had to get signoff from actual mathematicians.
The word I was taught in elementary school for non-negative integers was "whole numbers"
The positive integers were called "Counting Numbers" in my grade school.
@@Lily-Carruthers My kids are now learning the same thing. However, mathematicians don't use the name "whole numbers", and things get really dicey if you aren't working in English, since both "integer" and "whole" often translate to the same word ("integer" is the Latin word for "whole").
Nice I had a situation similar and helps to have this knowledge
The way of explanation is excellent...Hats off you ❤
Thanks a lot 😊
I'd like to see mathematicians get into a cage match to work out their differences! 🤣
What is a closed interval? Of course the interval [a,b] is closed. Is the interval [a,infinity) closed? I would prefer it if calculus texts used the term "compact interval" for [a,b], which is less ambiguous.
WRT 0^0=1 - there's another justification which I find much more compelling than "because it's convenient for these results". Powers are repeated multiplication. To multiply, you need something to start with that you'll multiply. For the number of multiplications to match the power, that's not the number you're raising - it's most logically the identity for multiplication, i.e. 1. So 0^x is zero only providing x is not zero - because you're starting with 1 and multiplying it by zero x times. But for 0^0, you start with 1 and multiply it by zero zero times - ie you don't do any multiplications so the original 1 remains unchanged.
This is a logical way to approach powers that is consistent with what we see in algebra, whereas if you don't have that initial 1 to start from, then multiplying together zero instances of any number is equally logically undefined - you don't have any instances of that number to multiply together or even a single instance as a starting point, the expression you're describing with the power notation is literally a blank space with no defined value. So reasoning from 0^x=0 for x non-zero to the case where x=0 is a fallacy - the repeated multiplication logic leaves that case undefined, and "here's a pattern we'd like to continue" isn't proof of anything. Redefining positive integer powers using the identity for multiplication as a starting point for the multiplications extends the defined cases to cover cases that are otherwise only given any value by a fallacy, and the logic of the extension isn't arbitrary - it makes sense and is at least as reasonable as defining powers as "multiplying this many instances of a number together".
As far as I can tell, 0^0 = 1 was the standard position in mathematics until limits and analysis were first developed, because before the idea of limits there just wasn't anything that contradicted 0^0 = 1. To me, that just means limits are not automatically the same thing as the value of the function you're taking a limit of. BTW, to get 0 as the limit of (ax)^(bx) as x approaches 0, with a and b constant, you have to approach from a very specific direction: a must be zero and b must be non-zero. Anything else and the limit is 1. You can also use a limit that approaches the point via curves; unless you approach along the straight line a = 0 for at least some finite distance, again the limit is 1. It's not just two directions with one yielding the limit of 0 and the other the limit of 1: you have infinitely many directions and approach curves yielding the limit of 1, because you have two dimensions' worth of directions and approach curves.
However... once you have non-negative integer powers, you can use power identities to define negative and rational powers, but how do you define irrational powers? AFAICT, the only reasonable way is using limits. And having irrational powers be undefined or indeterminate is quite a price to pay for saying "the limit isn't the value", especially if you want to extend from the reals to the complex numbers and write complex exponentials based on angles in radians. And while math may be under no obligation to make limits be the actual values, when a particular kind of math is incomplete there's a simple answer - add more axioms, as long as the results are self-consistent that's a valid new kind of math (axiomatic system), and as long as it matches what you observe in reality, it makes sense there too. "The limit is the value when the limit is defined and unambiguous" seems at least a reasonable axiom for the real (and complex extension of real) numbers.
I didn't know about the 1/x one - that's very interesting. I'd also say that one way you can define infinity is by analogy to projective geometry, effectively borrowing some projective geometry axioms. If you do that, you can define the value of 1/x where x=0 unambiguously because +infinity and -infinity are exactly the same thing - there's a whole horizon line of infinities, but infinity forwards and infinity backwards are identical. Whether that means the curve is continuous or not through that point could be a controversy in itself, but AFAICT there's no point where a co-ordinate system smoothly transitions from finite to infinite co-ordinates (no largest finite number) - that infinite point is arguably therefore a discontinuous point (disconnected from finite points) even though it's a well-defined point where the curve passes through infinity. On the other hand, infinity is also the limit as you approach that point from either direction - reaching the point you expect as the limit from either direction seems like arguably a kind of continuity.
I have few points to share:
1st, we can define x^0 without starting with 1 and starting from a blank space, look up the empty product.
2nd, it is true that limits are not equal to the value of the function, but it goes deeper than this. Conceptually, limits and function evaluations are not the same, if we analyse their definition, we find that limits tell the behaviour of the function near a point and not at the point like function evaluations. For this reason, I would rather leave irrational powers undefined.
3rd, even some rational powers are undefined, because sometimes there is no unique solution to x^n = a. Why should you prefer one solution over the others when you define x^(1/n)? So I wouldn't refer to the solutions of the equation by x^(1/n) and would instead use a different notation. This is probably one of the reasons why the radical symbol exists.
@@TH-cam_username_not_found WRT "look up the empty product" - it seems to say "it's one" anyway, but as a convention (i.e. an assumption or axiom) rather than a logical justification. I mean, I'm basically introducing one axiom rather than another anyway (what precisely the power notation means, at least for positive integer powers), but this doesn't really seem to disagree; it just takes the empty product as something to axiomatize directly rather than "my" slightly more indirect version. By the scare quotes on "my" - I hope it's obvious I'm not claiming it as my idea. AFAIR I was taught this justification in pre-O-level (Brit version of early high school) algebra 40 or more years ago (and it seemed ridiculously obvious even then, which is part of why I remember it), and TBH it seems odd that it's not often mentioned these days AFAICT, as if maybe it's even a point people deliberately avoid mentioning.
For 2 - that makes sense, but I'm not sure how irrational powers are normally defined except as a limit. There's infinite series, of course, but their values (when defined at all) are defined as a limit anyway. And of course the fact that commutativity seems to break down for some series - re-ordering the terms gives a different sum - also argues against the limit being the value, though you can equally argue that the proof of commutativity only applies to finite sums. You definitely need powers of irrational numbers - irrational complex powers of irrational numbers, even - for complex exponentials. Except maybe the "behaviour near the point" resolves that - when the behaviour is infinitesimally close to the point anyway, what difference does it make?
For 3 - I'd argue that there's a difference between being completely undefined and being multi-valued. Yes, of course rational powers often have multi-valued roots - that's why the concept of a principal root exists. You can still usefully state the set of possible values. And I'm aware that a set {0,1} of possible values seems valid in that context, but I'm fairly sure any logic leads to some dubiousness WRT 0^0. The real point is a wider understanding of the options and their implications, but I honestly don't remember if I was thinking that way, or even thought about multi-valued roots, when writing my previous comment.
@@stevehorne5536 I was a little imprecise with my wording. The reason why empty products is 1 is similar to the reasoning you shared above, although we treat x^0 as Prod(()). Here it is:
Prod(S)×Prod(T) = Prod(S↔T), where S and T represent tuples and ↔ represents the concatenation of tuples. Example: S = (a,b), T = (c,d,e), and S↔T = (a,b,c,d,e). Of course, we can't define the generalised product function unless the binary operation × is associative.
Now lets apply this for T = () (the empty tuple)
Prod(S)×Prod(()) = Prod(S↔()) but S↔() = S (the empty tuple is the neutral element of concatenation) so we get Prod(S)×Prod(()) = Prod(S). For this to be true for any tuple S we must have Prod(()) = 1, the neutral element of multiplication.
x^n is a product with n copies of x, and for n = 0 we get the empty product. Here, x^0 was never defined as 1 multiplied by x, 0 times.
However, to derive this result, we made a huge assumption, which is Prod(S)×Prod(T) = Prod(S↔T) being true for any tuple, even empty tuples, but how could we check that this is true without 1st defining Prod(())?
The answer is, we can't define Prod(()) recursively like Prod((c,d,e)). We have to start from somewhere else, and we started from declaring Prod(S)×Prod(T) = Prod(S↔T) as true even for (). We used the property as a definition.
I honestly wouldn't call exp(x) as e^x , The fact that the function matches e^n at integers doesn't mean that we should extend e^n to all real numbers.
>> *when the behaviour is infinitesimally close to the point anyway, what difference does it make?*
There is a difference, and I explained it to you: limits tell the behaviour of the function near a point, not at the point like function evaluations. The floor function is an example; its behaviour near the integers is different from its behaviour at the integers.
>> *There's infinite series, of course, but their values (when defined at all) are defined as a limit anyway*
If we want to be rigorous, an infinite series is not a sum of infinitely many terms but a limit of finite sums.
>> *You can still usefully state the set of possible values*
The problem with this is that we can't use the usual operations on a set of values to do algebra, so it's better to keep it undefined. Personally, I would rather use a different notation to refer to the set of values and keep algebraic notation for unique values.
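Incidentally, Python's standard library follows exactly the empty-product convention discussed above: `math.prod` of an empty tuple returns the multiplicative identity, just as the concatenation argument requires.

```python
import math

# Prod(()) must be the neutral element of multiplication:
empty = math.prod(())

# and Prod(S ++ T) == Prod(S) * Prod(T) still holds when T = ():
s = (2, 3, 5)
combined = math.prod(s + ())   # concatenating the empty tuple changes nothing
```

The same convention makes `sum(())` return 0, the neutral element of addition.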
TBH no one cares about these splits until they start deducting marks over it
Also, for the 0^0 case: since it's most likely to show up in multiplicative contexts, and the 0th term is likely to be a null case, it makes sense to declare the result to be the multiplicative identity instead of the multiplicative black hole.
For ℕ, I have seen text books counting them from 0 and other books counting them from 1. I have also seen one book defining the variant starting from 0 as ℕ₀ to differ from the normal ℕ starting from 1.
I also see Z^+ often for positive integers if I don’t want 0 included
That's obviously backwards. ;-)
The one *without* zero should have a special name: ℕ₁, ℤ⁺, or ℕ\0
@nbooth I have been doubting my comment lately, after reading yours. So much so that I had to check whether I simply remembered wrong.
Searching Google for ‘natural numbers N N0’, there is a section called ‘People also ask’. There is a question ‘What does N0 mean in math?’
Clicking on that and following the link to Wikipedia, reveals a disambiguation page that suggests the following:
‘ℕ₀, the natural numbers including zero’
So that is the second source where I have seen that, no matter how backwards it is.
Most books define their conventions early on anyway, so you would know how that particular book interprets these concepts.
I start the "Naturals" at 1, because the "Cardinals" should start at 0 in my opinion… but we always specify, just to be safe.
For the continuity of 1/x: saying it is continuous is odd, because continuity requires there to be no vertical asymptotes.
(There are more requirements, but that is the only one that is relevant in this context)
I've never heard that. Have you any sources for that?
Hmmm, I'm starting to doubt that survey. f(x)=1/x is continuous by definition, and it would be VERY weird to call f(x)=3x+1 linear (asymptotically linear, but not "linear").
Those seem like errors from math teachers not people with a degree in math.
Most math teachers have a degree in math, though.
A linear polynomial is anything of the form ax+b.
@@matthieubrilman9407 They don't. Most people with a degree in math don't end up teaching, as it's not lucrative enough or they simply don't want to teach. As such, people without degrees have to fill the spots.
Agreeing on the basics is the hard part. It's literally the foundation of everything else, almost per definition it requires the most meticulous (borderline pedantic) discussion
I am really surprised that so many mathematicians answer 1 for the viral problem. I would be curious to see results of the same study but where the subjects didn't learn PEMDAS but some other mnemonic device like "point before dash".
I think it's because we are so used to writing something like 1/2x on the blackboard to mean 1/(2x) where if we meant (1/2)x we would write x/2. That pattern of having implied brackets in handwritten mathematics is quite common, even if it isn't the type of thing that is great for a formal rule.
Basically, you do the same thing in math as in language: you don't go by the literal equation alone, but by what it applies to in the real world. That's why context always matters. I learned that in high school; I don't know how we forgot it. It's the same with interpretation: the exact same sentence can mean a thousand things. You don't take things in isolation, but read them through their contexts, through what something actually means. It's the same in math, I'd suppose: if you needed 0^0 to be indeterminate, 1, or undefined, that's what you'd do in that specific instance, based on the geometry of what you were using, as it relates to that specific shape.
0:49 I believe it’s undefined, because 0^0 means 0/0 and you can’t divide by 0
I thought the axiom of choice was controversial because of the idea that it may introduce a contradiction that is not present in ZF. Then I found out that ZFC is consistent if and only if ZF is consistent and I no longer see what the issue is.
The thing is, with exponents we can always assume that everything is multiplied by 1 to some power. Say 10^2 is 100; 100 × 1 × 1 is still 100. We also know that anything multiplied by zero is zero, so zero to any power is zero, because zero is multiplied by itself. So 0 to the power of anything is 0, and 0 multiplied by anything (including 1) is zero. So 0^0 is zero.
Using this logic, we get x^0 = 1 multiplied by x, 0 times which is exactly 1, even when x = 0. So 0^0 = 1 and not 0.
7:24 - There should be a word for a function where its domain is one big block.
0^n is a product with n copies of 0. 0^0 is the empty product, which is equal to the neutral multiplicative element, aka 1
Same goes for 0!, which is less controversially known to be 1
Natural numbers are used to count. Can there be 8 pawns on a chessboard? Yes. 1 pawn? Yes. 0 pawns? Also yes. So 0 is also a natural number
I remember my math teacher stating that parallel lines cross at infinity. I said parallel lines can never intersect, even at infinity. That's why they are parallel, which IMHO doesn't change at infinity.
Here's how I see it: when the power of a number goes up by one, you multiply by the base (e.g. 3^2 = 9, 3^3 = 27, and 3 × 9 = 27). Ergo, to reduce a number's power by one, you divide by the base (same example as before, but backwards). To go from 0^1 to 0^0 you divide by zero, ergo the answer is whatever you think 0/0 is, be that 1, 0, undefined, or infinity.
But by the same reasoning, 0^n would be 0^(n−1)/0 which is undefined since we can't divide by 0 so any power of 0 is undefined which is absurd.
The conclusion: you can't prove that something is undefined based on other undefined expressions.
if n/0 equals infinity, this expression is true because any 0^n can equal 0^ any other n
@@DotboT3812 I don't understand your argument. 1st, n/0 is not infinity. 2nd, how does this relate to 0^n = 0^m? 3rd, how does that address my reply?
One I would like to add to your list: Is √i one number (1+i)/√2 or two numbers ±(1+i)/√2
If it was both of them then we have (1+i)/√2 = √i = −(1+i)/√2 which by transitivity of equality implies that (1+i)/√2 = −(1+i)/√2 which is a contradiction.
The same can be said about any mathematical object. It can't equal to 2 non equal things.
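Python's `cmath` takes the single-valued view: `cmath.sqrt` returns the principal branch, which for i is (1+i)/√2 and nothing else.

```python
import cmath

# Principal square root of i, as cmath returns it:
r = cmath.sqrt(1j)

# Compare with (1+i)/sqrt(2), the principal value:
expected = (1 + 1j) / cmath.sqrt(2)

# The other candidate, -(1+i)/sqrt(2), also squares to i,
# but sqrt() never returns it: once the radical is written
# down, the choice of branch has been made.
```

This mirrors the argument above: the symbol denotes one number, even though two numbers square to i.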
@@TH-cam_username_not_found I'm right there with you. Totally! The number of people I have gotten into arguments over this...
@@DrR0BERT I am glad that you agree about this. I thought at 1st that you are unsure and looking for an answer.
Well, while it can't equal both of them, one can assign to it one of the values. The question now becomes which one should you choose.
We could make a choice by restricting the domain of x^2 to a "nice set", for example. Arcsine is the inverse of the restriction of sine to [-pi/2, pi/2]. This interval was chosen specifically because it is symmetric.
Maybe there are other ways or criteria to favour a value over the other.
@@TH-cam_username_not_found I have gotten into too many arguments over the fact that once you write down the √ symbol, the choice has been made. Still I get hammered with this. The argument that usually stops it is if √25=±5 then why does the quadratic formula need to include the ±√(b^2-4ac)? The ± would be unnecessary.
This is the whole theme of Dr. Bazett's video.
@@DrR0BERT That's not the point of my 2nd response?
Your argument is also cool, but I prefer mine as it is more general and works for any expression. However, that's not what my 2nd response is about. It is about how one makes the choice of _one unique_ value for, say, √25.
One could argue the following: why was √25 chosen to be 5 and not −5?
Of course, choosing positive values would make x↦√x a multiplicative function which is nice so this choice is better.
My personal preference is to keep the √ symbol just for positive real numbers so anything like √(1+i) is considered undefined, just use a different notation to denote it.
Yes, 0 ∈ ℕ. One convenient fact about natural numbers n is that the integral of ln^n x is elementary.
Ah, I wish they had asked about approaches that deny infinities (like real numbers) or the use of counterfactuals in proofs. That seems to be a fundamental divide that can potentially affect what we can prove, and it requires different formulations and proofs for a lot of theorems.
There's a question at the end that ultrafinitists would disagree with; there aren't many of those, though.
Thanks, yes, Question 99 speaks to that. 4.7% of mathematicians think e^ e^ e^ 79 does not exist.
Consider the function f(n)=n^2, where the domain of f is all integers. Is f continuous at 0? In calculus we do not mention this. If we study an advanced real analysis textbook, the answer is yes. A function is always continuous at an isolated point in its domain.
Ok, so what is the limit of f(n) as n->0? The answer is undefined. So the definition we give students, that f is continuous at a if the limit of f(x) as x->a is equal to f(a) is, in general, wrong.
And of course if the domain of f(x) is the closed interval [a,b], we have to modify the definition and say f is continuous at a if the limit of f(x) as x approaches a _from the right_ is f(a), which is highly ad hoc. I find these issues to be a headache when I'm teaching about continuity in calculus.
The French have a different definition of the limit that fixes this problem. They define the limit of f(x) as x approaches a to be L if and only if (1) for every delta>0 there exists at least one x in the domain of f satisfying |x-a|<delta, and (2) for every epsilon>0 there exists a delta>0 such that for every x in the domain of f satisfying |x-a|<delta, we have |f(x)-L|<epsilon.
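For reference, here is the usual analysis-textbook definition of continuity, the one under which isolated points are automatically continuity points:

```latex
f \text{ is continuous at } a \in D
\iff
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\;
\forall x \in D : \; |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon
```

If a is an isolated point of D, choose δ small enough that x = a is the only point of D with |x − a| < δ; then the conclusion reads |f(a) − f(a)| = 0 < ε and holds vacuously, so no limit is needed at all.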
It is this comment that encouraged me to initiate this discussion with you. You shocked me with the fact that functions are continuous at isolated points despite not having limits there. I am curious to see what definition of continuity you are using.
The French definition is the one I have been taught and the thing that made me not like it is that functions like x↦floor(x^2) no longer have limits at the points where the right limit = left limit but f(a) is different. By the French definition If a limit exists then it must necessarily be f(a). I don't know what is superior about it.
*Regarding the rest of the comment*
OK, I had to reread it to get the idea. The limit definition was changed this way to guarantee that a function that isn't defined around an isolated point is also continuous at it. Am I getting this right??
If so, then there is something unjustified. Why should an isolated point be a continuity point? It's as if there is some other definition of continuous lurking around and you are appealing to it.
By the way, I'll be replying to every comment of yours. I hope you don't mind 😊
I forgot to address the part where you mentioned continuity at a for a function defined over [a,b].
If I recall correctly, the English 2-sided limit definition contains the proposition "for all x in D such that 0 < |x-a| < delta" , which means that right-sided limits are equivalent to 2-sided limits when D is [a,b] , same thing for left-sided limits.
@@TH-cam_username_not_found Hello. In a typical analysis book, the definition is as you say. In introductory calculus books, it depends on the text. For example, Stewart assumes the function is defined at every point of an open interval around a, except possibly at a itself, before defining the two-sided limit at a. I do think the definition from analysis textbooks simplifies things.
@@TH-cam_username_not_found
You said "OK, I had to reread it to get the idea. The limit definition was changed this way to guarantee that a function that isn't defined around an isolated point is also continuous at it. Am I getting this right??"
I suspect that is the case; it does have that advantage. I prefer the definition of the limit typically given in analysis texts, where "the limit of f(x) as x approaches a is L" means that a is a limit point of the domain of f, and for every epsilon>0 there exists a delta>0 such that for every x in the domain of f satisfying 0<|x-a|<delta, we have |f(x)-L|<epsilon.
@@roderictaylor Thanks for replying!
>> *For example in Stewart, it assumes the function is defined on every point of an open interval of a except possibly at a itself before defining the two sided limit at a*
I see, well, it is a little ad hoc if you asked me. The English definition in the way I and analysis textbooks present it is more sensible. After all, if there is no left direction toward a, then all directions = right direction. Same thing when there is no right direction.
Now what about my previous reply? You haven't answered my question about why an isolated point should be a continuity point.
Edit: I didn't notice that you had addressed my other reply later on; I was writing this one when you sent yours.
I think affine maps should be called linear and "linear" maps should just be called vectorspace homomorphisms
Fun fact: A standard definition for a positive integer in the JSON schema standard includes 0.
I don't know if there is a similar term in English, but here in Finland we say that f(x)=1 is increasing/decreasing, but not *truly* increasing/decreasing. It's sometimes very useful to consider constant functions as monotone, so I think it's good to separate the two definitions.
I think it's quite funny that if you count it as increasing, it has to be decreasing as well by the same logic
oh interesting. I guess the closest we have in english would be "non-decreasing"?
We have the same thing in French, constant functions are both increasing and decreasing, but neither *strictly* increasing nor strictly decreasing, in the same way that 0 is both positive and negative but neither strictly positive nor strictly negative.
@@AbelShields Yea, they're technically not mutually exclusive in the case of constant functions
In the study of real analysis we'd refer to the natural numbers as {0, 1, 2, ...}, so I guess that's not a problem.
I've never used the term "whole numbers" since 6th grade
ya whole numbers seems to largely drop out of usage at higher levels
Whole numbers always included -1, -2, ... when I was in school. Not an English speaking country though.
If we talk about "natural" numbers, it's telling that it took at least 1000 years to invent 0. You can have 1 apple, 2 apples, ... but if you include 0, do you also have 0 oranges, 0 plums, 0 footballs? We used N_0 when we needed that in classes, and it wasn't often.
@@Milan_Openfeintthis is why I believe zero isn’t a natural number. Nobody starts counting at zero. We don’t (naturally) think about having zero of something
@@kristopherwilson506 Programmers start counting at 0.
As a programmer, I've always considered the natural numbers to start at 1. Whole numbers include 0. Ints include negatives. Makes Peano's a bit more tedious to get started, but makes it way easier to define it all with lambda calculus. Then our axioms are beta-equivalence and eta-equivalence.
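The lambda-calculus angle above can be made concrete with Church numerals, where a number *is* the act of applying a function that many times. A minimal sketch in Python (purely illustrative; `to_int` is a hypothetical decoder name, not anything from the comment):

```python
# Church numerals: the numeral n is the function that applies f exactly n times.
zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f

def to_int(n):
    """Decode a Church numeral by counting how many times it applies f."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(zero), to_int(one), to_int(two))  # 0 1 2
```

Note that under this encoding "zero" is the identity-on-x function, which is why starting the naturals at 0 is the path of least resistance in lambda calculus.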
Here's another example, which I encountered when trying to solve a maths problem in a YouTube video but got the problem definition wrong: in number theory etc., "log" without a given base usually means the natural log, whereas in applied maths it usually means the log to base 10. IMHO, "ln" should be eliminated, "log" without a base should always mean the natural log, and if you absolutely need a log to this rather arbitrary base 10, just add the damned base to it. But unfortunately I'm not in power ;-)
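For what it's worth, most programming languages already follow the "bare log means natural log" convention; a quick check in Python's standard library (just an illustration of the naming, nothing more):

```python
import math

# math.log with one argument is the natural logarithm:
print(math.log(math.e))   # 1.0
# base 10 and base 2 get their own explicitly named functions:
print(math.log10(1000))   # 3.0
print(math.log2(8))       # 3.0
# an arbitrary base can also be passed explicitly:
print(math.log(27, 3))
```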
12:38, slight mispronunciation… *Fraenkel. Hope it helps. Good video!
oops - I knew that one too!
In South Africa (Language = Afrikaans)[direct translation] we have Natural numbers(N) and Counting numbers (N₀). All Natural numbers (and zero) are Counting numbers.
Weird idea: instead of defining 0^0 as 1, let's define 1 as 0^0. Probably not feasible, as exp is defined by multiplication, which is defined by addition, for which we need 1, but I'm curious if anyone has considered this; maybe we can get "everything" from exp.
Zero != null. Zero is a property of an existing object, so it always physically represents something, or the lack thereof. It's not "null". So, 0^0 represents an operation on existing objects that results in zero of that object's property. As zero is a property of the object, it's a viable result. You have to realize the difference between an existing object and its properties. This detail is crucial for constructing real-world simulations.
The Axiom of Choice is an axiom you use or don't use. There is no right answer; both ZF and ZFC are each consistent in themselves. 0^0 should be agreed upon as undefined. If a mathematician would rather look at a limit, that is not the same as the expression itself. The PEMDAS expression does have a well-defined solution. Whether zero is considered part of the set of natural numbers depends on the problem one is tackling. It can be more or less elegant, depending, to include 0 or not. "Natural number" is a weird concept anyway, while "integer" is well defined, and "positive integer" and "non-negative integer" are well defined too.
Saying 0^0 is indeterminate because it can be multiple values is like saying when x^2 = 1, x is indeterminate because it can be multiple values. It's just completely short-sighted. 0^0 and all other indeterminate forms equal the entire domain, either the real numbers or the complex numbers. We just need to accept the obvious fact that these operations and functions that produce indeterminate forms are really just set-valued functions. And as for 1/x, it's not discontinuous because the function is well defined and continuous at 0. 1/0 = infinity, obviously. 1/x is a continuous transformation of the Riemann Sphere, a 180 degree rotation, so it obviously is a continuous function.
As a physicist rather than a mathematician, most of these seem to have pretty obvious "correct" solutions:
- 0^0 in isolation is undefined since you don't know what the numbers represent. If you're in a context where you know you're dealing with a variable base and fixed exponents then it makes sense to set it equal to 1, but that seems more like a shorthand instead of a proper definition.
- 1/x is not continuous unless you're doing projective geometry or something where your number line wraps around so positive and negative infinity are actually the same point. It seems obviously wrong to call a function continuous if it has a bunch of holes where it isn't even defined, there should just be a different term for that.
- A constant function is not increasing, if you define it that way you either have an asymmetry between increasing and decreasing or you have to say it is simultaneously increasing and decreasing which just seems silly. You can just say non-decreasing if you want the looser definition.
- A function for a line should be a linear function. Again I think we should just have a different name for linear combinations without any constant term, but I guess we're stuck with it.
Finally, I would say that 6 / 2(1 + 2) = 1. This one is subtle, and is less about order of operations and more about the cooperative principle from linguistics. Writing the multiplication without an operator creates a strong visual connection and makes it pretty obvious that is meant to take precedence over the division, like how if I say "yellow bedroom" I am talking about a yellow room with a bed and not a room with a yellow bed. Inserting an explicit multiplication operator would instead give the multiplication and division equal footing, changing the answer to 6 / 2 * (1 + 2) = 9.
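Once the operator is written explicitly, programming languages side with the "equal footing" reading described above: `/` and `*` share precedence and associate left to right. A quick Python illustration:

```python
# With an explicit operator, / and * have equal precedence and associate
# left to right, so this parses as (6 / 2) * (1 + 2):
print(6 / 2 * (1 + 2))      # 9.0

# Getting the "implied multiplication binds tighter" reading requires
# parenthesizing the product explicitly:
print(6 / (2 * (1 + 2)))    # 1.0
```

This is consistent with the comment's point: the ambiguity lives in the implied-multiplication notation, which code simply refuses to allow.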
the last one is, at the end of the day, about conventions that were created to simplify writing and especially printing math expressions because inline forms are easier and cheaper to print, but can require a lot of brackets to accurately show how the elements of an expression relate to each other. since not all of these conventions have spread to the whole math world, expressions like this viral one don't have one universally accepted answer; it depends on what notation agreements you were taught or are using
@@vsm1456 That's fair, the last one is definitely an outlier since the notation is really completely arbitrary. It just seems to me that people who think the division should occur first would be unlikely to write the equation this way, but maybe I'm wrong about that.
Mathematicians arguing whether "0∈ℕ" or not: 🤬😭
Also mathematicians agreeing that "1+2+3+... = -1/12": 🗿🧐
Interesting how mathematicians come up with 9 as an answer. All really depends on whether you do multiplication or division first following the action in parentheses. If you divide first, then you have 3*3, which is 9, but if you multiply first, you get 6/6, which is 1.
I prefer increasing over non-decreasing
I agree. And then if we want to describe a function where x
Wholes start at 0; naturals are a subset of wholes starting at 1.
It would have been interesting if you had covered the YouTube conversation as to whether "0.99..." is 1 or not.
I certainly want the function f(x)=1/x to be continuous. I find it annoying that Stewart, a textbook I otherwise like, does not make this convention. It's annoying to have to say things like: the sum of two functions that are continuous on their domains is a function that is continuous on its domain; the composition of two functions that are continuous on their domains is a function that is continuous on its domain; and so on.
And of course the convention that makes f(x)=1/x continuous is consistent with the definition in more advanced texts, where a function is continuous if the inverse image of any open subset of the codomain is open with respect to the domain.
>> *I find it annoying that Stewart, a textbook I otherwise like, does not make this convention.*
It's not a matter of convention, it's a matter of satisfying a definition. If x↦1/x satisfies the definition of continuity, then it is continuous. And that's it. One doesn't get to choose the results coming from a definition. We aren't allowed to make any conventions about them. This makes the textbook, strictly speaking, wrong about this point.
Also, let's say temporarily that the points outside the domain of a function are discontinuity points. Then every function whose domain isn't all of R is discontinuous, even when the domain is an interval, which is crazy; it makes the property of continuity meaningless. This is an indicator that continuity shouldn't depend on points outside the domain.
My takes and reasoning
*0^0 is indeterminate* and dependent on context. You can’t just slap on a 1 and call it a day
*Natural Numbers start at 1* because we have another set called the Whole numbers that start at 0
*1/x is not continuous* because the gap in its domain alone makes it not continuous
f(x) = 1 is increasing because funny
*3x+1 is linear* because it takes the form of a linear equation, mx+b. I would consider the linear equation that has the point (0,0) to be proportional, and one that doesn’t to be affine. I would consider affine and proportional to both be a type of linear equation, not affine or proportional being separate from a linear equation.
I’m not getting the axiom of choice so imma skip it
9
It's my turn to share my take and reasoning:
There is an explanation for 0^0 = 1. Start with a×x^n, where n is a natural number. This quantity is "a multiplied by x, n times". If n is 0, then this is "a multiplied by x, 0 times", which means a is not being multiplied by anything. Since a was not multiplied by anything, we have done nothing to it; it stays the same. Thus we get the equality a×x^0 = a. The only way for this to be possible is for x^0 to be 1. x was arbitrary, so it could be anything, even 0. Thus proved.
There is no contradiction if we have 0^0 = 1 and wherever it appears, it is 1.
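The "multiplied by nothing" step above is the empty-product convention, and it can be checked directly in Python (which, incidentally, also picks `0 ** 0 == 1` for its built-in power):

```python
import math

# An empty product is 1: multiplying by no factors leaves a number unchanged.
print(math.prod([]))        # 1

# "a multiplied by x, zero times" is just a:
a = 7
print(a * math.prod([]))    # 7

# Python's integer power agrees with the 0^0 = 1 convention:
print(0 ** 0)               # 1
```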
Some countries call the integers starting from 0 the natural numbers and do not call the integers starting from 1 anything; after all, no one needs to call the integers starting from 2 or 5 or 1000 anything. And of course, in turn, they denote the natural numbers without 0 by N* or N_>0 (the >0 is supposed to be a subscript). There is nothing wrong with this.
the term "Continuous" has a very precise definition which is the following:
For all a in the domain, lim(x→a) f(x) = f(a). 0 is not in the domain of x↦1/x, so it cannot be considered a point of discontinuity. Also, the fact that the graph is not connected doesn't mean that the function is discontinuous; the source of the disconnectedness is the domain. The function transforms the domain without introducing any new tears in it.
The word "increasing" also has a definition and from that definition one knows whether a constant function is increasing. Though the world loses its actual meaning in English.
Linear equations are called that because they can be written in the form f(x) = b, where f is a linear function, and the definition of linear is the following:
for all x,y in D and c in R, f(x+cy) = f(x) + cf(y)
The above definition implies that f(x) = mx.
It is true that labels are arbitrary and we could have used them to refer to whatever we want, but if we don't agree on what the terms mean, communication would be impossible, so we should agree on them, and you will find that the term linear is used in the way I used it.
Counting numbers start at 0. Before you can put up one finger, you must have no fingers up.
You said it yourself: *before* you put one finger up and start counting, you have zero fingers up. Counting begins when the first finger goes up.
@@nbooth No, counting starts at 0.
@@sdspivey Most definitions of "Counting Number" *don't* include zero.
It's a semantic distinction, but you haven't done any counting until you get to 1.
Most people these days *do* include zero in the Natural Numbers ℕ, though. It just works better most of the time.
@@nbooth Maybe we should call the set without 0 the strictly natural numbers. :-)
Second case is just lim{x->0}(x^0)=1. So again it's just another choice of f(x) and g(x).
What do you mean "second case"? Are you talking about the binomial theorem?
If you are, I want to point out that the binomial theorem doesn't just hold for the real/complex numbers - it holds in any commutative unital semiring, including in settings where limits don't even make sense. And in all such settings, we need 0^0 = 1 in order for the binomial theorem to be completely true.
But 0^0 = 1 can be explained in these contexts using the empty product convention.
@MuffinsAPlenty Oh that's fair, I was talking of course about how in the real/complex case the choice for 0^0 is aligned with that limit. Generalizations where limits/convergence are not defined will follow the same rule.
But I'd say that writing x^0 is a shortcut/abuse of notation: strictly speaking the notation x^0 is ambiguous and 1 should be written instead, but that would make for uglier expressions where an extra term (the zeroth-order monomial) has to be treated specially.
0 is not natural, but rather a whole (an extension of the naturals). Every natural number has a unique expression by the fundamental theorem of arithmetic of the form (p_1^a_1)(p_2^a_2)… where p_i is the ith prime. It can also be considered as an infinite list [a_1,a_2,a_3,…]. 1 has the list [0,0,0,…], whereas 0 doesn’t have any unique list. Whole numbers require an extension by including 0, while integers require that plus the inclusion of a special “prime” -1. Extensions of number systems are often huge and expand our way of thinking, which 0 definitely was.
Slight problem - how do you define the naturals? Set theory starts from zero.
@ set theory starts by defining the wholes, which it mistakenly calls the naturals. Arithmetic (a much older field of mathematics) begins by defining the naturals. 0 definitely is in a class of its own as far as properties go and shouldn’t be confused with being in the “smallest” of the naturals. Set theory just abused the notation and completely missed that 0 was never natural when they were defining its basic axioms. There is an important distinction to make, so just dismissing the arithmetic definition as “positive integers” is really downplaying what they actually are and how they’re defined in context of their set.
@@ClementinesmWTF Your opinion. Not necessarily everybody's. (As most of the issues that Trefor is pointing out, it's a matter of definitions - and not all of those are agreed upon universally)
FWIW - Peano arithmetic also starts (axiomatically) from zero. I'm not aware of any other rigorous definitions of the naturals outside those.
@ ok then how do you distinguish the two sets, while also keeping them distinctive from the integers (ie no “non-negative” or “positive” integer)?
It’s not an opinion, it’s just fact that they are different sets, with different usages and different histories and contexts. In this case, it is possible to have a wrong opinion, and anyone who includes 0 in the naturals is absolutely wrong.
@@dlevi67 FWIW, Peano is also from the 19th century and also mislabels natural vs. whole. But I guess have fun sticking to being wrong.
That's a cool shirt. Where did you get it?
0⁰ = 1 and 0!
Because n^m is the number of functions from a set with m elements to a set with n elements
And
Because n! is the number of bijections from a set of n elements to itself
And the empty relation from the empty set to itself is a function and a bijection
And 0! = 1
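Both counts in the comment above can be verified mechanically by brute-force enumeration; this sketch (function names `num_functions`/`num_bijections` are my own) represents a function as a tuple of outputs, one per input:

```python
from itertools import permutations, product

def num_functions(m, n):
    """Count functions from an m-element set to an n-element set:
    each function is a tuple choosing one of n outputs for each of m inputs."""
    return sum(1 for _ in product(range(n), repeat=m))

def num_bijections(n):
    """Count bijections from an n-element set to itself: the permutations."""
    return sum(1 for _ in permutations(range(n)))

print(num_functions(0, 0))  # 1 -> matches 0^0 = 1 (just the empty function)
print(num_bijections(0))    # 1 -> matches 0! = 1
print(num_functions(3, 2))  # 8 -> matches 2^3
```

Note that `product(..., repeat=0)` yields exactly one element, the empty tuple, which is precisely the empty function described in the comment.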