@@ffggddss Actually, it's the increasing distance between the mic and the sound source, and the increasing angle between the mic and the source, that result in decreasing volume.
@@RanEncounter Yeah, it looks pretty clear at that point, 4 min, that that mic is directional, at least to some degree (pun unintended!). So the angle (between mic-source line and mic's primary direction) is important. Fred
This is so great! By reverting to the definition of a property, and by boring - but SO useful - algebra, the proposition just unfolds! No fancy stuff (“0/0”), no limits, no discussion about convergence... This really nails it! 😎
Well the "first derivative’s sign" and "function monotony" are indeed related but not equivalent. Be careful with the term "equivalence relation" though, that is something entirely different, and probably not what you meant ;-)
Stalker Technix - Would you mind elaborating a bit then? Because I did reread your comment quite a few times and still don’t see what you could call an (or rather *not* being an) equivalence relation.
It might be counterintuitive at first, since the derivative is 0. However, consider two different points on the curve, one at x=0 and the other extremely close to it. If it's on the right it will be ever so slightly bigger; if it's on the left, ever so slightly smaller. Therefore there's always an increase if you 'read' it from left to right.
mrBorkD this reminds me of the paradoxical nature of "instant rate of change". In the specific instant, we'd predict it's doing nothing, but it's increasing. Maybe this is why science can never be certain
Daniel McFadden Nope, that's got nothing to do with science being uncertain. Uncertainty in science primarily comes from our inability to measure accurately enough -- either because there are too many variables involved (think: social sciences, weather patterns, etc.), or because the scale is beyond our abilities (think astrophysics), or because of fundamental limitations of the universe (quantum mechanics). We do pretty well coming up with theories given our measurement limitations (and if we're willing to accept statistical accuracy rather than full predictive accuracy, we do remarkably well in most areas of science). But there will always be things that are beyond our ability to measure with complete certainty, and therefore there will always be some small deviation between theory and experiment. But hey, that also means there will always be some new frontier for science to poke at in our eternal quest to understand our universe (and ourselves!) better.
The definition of a derivative is as follows: it’s the limit, as d tends to 0, of the expression (F(x + d) - F(x))/d. And a limit in and of itself is rather counterintuitive; I would recommend 3blue1brown’s essence of calculus series (another really great TH-cam channel about mathematics much like BPRP).
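That limit definition can be sketched numerically (the helper name is mine, just for illustration): at x = 0 the difference quotient for F(x) = x^3 equals d^2, which shrinks to 0 as d does, even though the function is strictly increasing through that point.

```python
def diff_quotient(F, x, d):
    """The expression inside the limit: (F(x + d) - F(x)) / d."""
    return (F(x + d) - F(x)) / d

F = lambda x: x ** 3
# At x = 0 the quotient equals d**2, so it shrinks toward 0 as d -> 0,
# which is why the derivative there is exactly 0.
for d in (0.1, 0.01, 0.001):
    print(d, diff_quotient(F, 0.0, d))
```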
The question in the beginning is also a common fallacy in logic. From "if A then B" you cannot conclude "we observe B, therefore A". For that to hold, you would need to show that A is the only way to come up with B. Example: "If it rains, the streets are wet." Nice. But the other way round, "The streets are wet, therefore it rains," does not hold: there are other reasons the street may be wet, e.g. the street-cleaning car has passed, or somebody dropped a container of water. So unless you can show that the only way the streets ever get wet is by raining, you cannot turn the argument around. PS: this is also the fallacy Sir Arthur Conan Doyle fell into when he let his Sherlock Holmes say "How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?" The problem, of course, is: how does one know that he eliminated ALL the impossibles? Eliminating all the impossibles you can think of is not enough.
Claim: if a > b, then (a + b)^2 + a^2 + b^2 > 0. If either a or b is 0, then (since a > b) either 0 > b or a > 0, so b^2 > 0 or a^2 > 0. Let x = a + b; then x^2 >= 0, so if a^2 > 0 we get x^2 + a^2 + b^2 >= 0 + a^2 + b^2 = a^2 + b^2 > 0 + b^2 = b^2 >= 0, and similarly when b^2 > 0. If neither a nor b is 0, then a^2 > 0 and b^2 > 0, in which case the sum of the squares is again positive.
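A brute-force sanity check of that claim (a hypothetical script, not part of the proof): for many random pairs a ≠ b, the expression (a+b)² + a² + b² stays strictly positive, and it matches the video's factorization a³ − b³ = (a − b) · ((a+b)² + a² + b²)/2.

```python
import random

def s(a, b):
    """The sum of squares from the claim: (a+b)^2 + a^2 + b^2."""
    return (a + b) ** 2 + a ** 2 + b ** 2

random.seed(0)
for _ in range(10_000):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    if a == b:
        continue
    assert s(a, b) > 0  # strictly positive whenever a != b
    # the factorization used in the video: a^3 - b^3 = (a - b) * s(a, b) / 2
    assert abs((a - b) * s(a, b) / 2 - (a ** 3 - b ** 3)) < 1e-6
```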
The point is that F(x) is strictly increasing on (a,b) when F'(x) ≥ 0 everywhere and F'(x) = 0 only on a set of points with empty interior (this includes isolated points). For F(x) = x^3 that point is x = 0.
The Minkowski question mark function takes this a step further: it is a continuous strictly increasing function whose derivative is zero at all rational numbers. However, the question mark function does not have a finite derivative everywhere like x^3 does.
@@hyperpsych6483 What you said doesn't make sense. Using the second statement, the derivative of y = 0 is 0 for all x. 0 is not > 0; therefore y = 0 is not increasing. As it's not increasing by the second statement, it is invalid to use y =0 as a function for verifying the modified first statement.
@@twelfthdoc haha three whole years man, I was saying that for f(x) = 0, f'(x) = 0, and 0 >= 0 is true (notice the greater than OR equal sign) , but f(x) is not increasing anywhere. It is not necessarily possible to tell if a function is increasing or not at a point if the derivative of that point is 0.
That was a really nice video :). What I was taught is that it does not matter if an increasing function has points where f'(x)=0, so long as those points do NOT form an interval. That being said, I prefer watching proofs like these rather than accepting things and moving on.
+Kirby The points with f'(x) = 0 can form a set - in fact, almost by definition of "set", they do: the set of the points where f'(x) = 0. What they cannot do is form an interval (i.e. satisfy f'(x) = 0 on a whole stretch of "consecutive" points - if you prefer, you can read about intervals more formally here: en.wikipedia.org/wiki/Interval_(mathematics))
Here I think the answer depends on how you define "increasing." With the definition I had in my head, I think x^3 is neither increasing nor decreasing at x=0.
It depends on how you understand "increasing". Given that in calculus we are assuming the existence of instantaneous rates of change the argument that over an interval no matter how small there is an increase is unconvincing. The function can still be instantaneously constant at some point.
Another way to prove this "increasing property" of f(x) = x^3 is to realize that its derivative, 3x^2, is positive at every point except x = 0. But "a point is not the same as an interval", so that single point doesn't matter: you need at least two different points, i.e. two different values of x, to get an interval, and you need an interval between two different points to define a direction (increasing, decreasing or constant). Since x^2 is positive for all non-zero x, 3x^2 is also positive there, and therefore the curve is strictly increasing on every interval, i.e. everywhere.
The radius of curvature of x^3 is infinite at 0 (the curve is flat there). What I found strange is that the radius of curvature of x^2 at 0 is 1/2, yet x^r with r > 2 (for example x^2.0001) has an infinite radius of curvature at 0. What a brutal transition...
@@IronLotus15 Common misunderstanding about this concept is that people think it is an oscillating circle. It is really called an osculating circle, that defines the radius of curvature, and likewise its reciprocal called curvature.
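To make the "brutal transition" concrete, here is a small numeric sketch (helper name and sample point are mine) using the plane-curve curvature formula κ = |y''| / (1 + y'²)^(3/2) for y = x^r near x = 0. Note that for r just above 2 the convergence toward κ = 0 is glacially slow, which is exactly why the transition feels brutal.

```python
def curvature_power(r, x):
    """Curvature of y = x**r at x > 0, from kappa = |y''| / (1 + y'**2)**1.5."""
    y1 = r * x ** (r - 1)            # first derivative
    y2 = r * (r - 1) * x ** (r - 2)  # second derivative
    return abs(y2) / (1 + y1 ** 2) ** 1.5

x = 1e-6
print(curvature_power(2, x))       # close to 2  -> radius of curvature ~ 1/2
print(curvature_power(3, x))       # close to 0  -> radius of curvature -> infinity
print(curvature_power(2.0001, x))  # still near 2 here; x**0.0001 -> 0 only for absurdly tiny x
```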
Because of the convenience of using f'(x) > 0 for checking where a function is increasing, it makes sense to define a notion for what it means for a function to be "increasing at a point." To make it consistent with what it means for a function to be increasing, we would define it as follows: A function f(x) is increasing at x = c if there is some interval (a,b) containing c on which f is increasing. This is similar to how the derivative represents the "instantaneous rate of change" of a function. It doesn't really make sense to have a rate of change which is instantaneous for the same reason that it doesn't make sense to talk about a function increasing at a point. Nevertheless, it is useful for us, so we do it anyway.
You have to clarify what is meant by "increasing." f(x) = x³ is increasing at x=0 in the sense that for any x₂ > x₁ , it is true that f(x₂) > f(x₁) But in an "instantaneous" sense, f is not increasing at x=0. I think it would, however, be hard to support the instantaneous sense of "increasing" as a viable definition. So that, yes, f *is* increasing everywhere. Fred
The deeper theoretical reasons for what is going on here is that there are two basic types of structure that the real number line (and by suitable treatment, its self-product, the plane) possesses: 1. The first type of structure is that as an _ordered field_ . This means the real line both supports all the arithmetic we all know and love as best as possible (that's the "field" bit), _and_ it also has the property that you can compare its members as greater or less (though I prefer as "preceding" vs. "succeeding") (that's the "ordered" bit). This is why we are able to call it a "real number _line_ " and not just a real number field, or a real number set. This is the kind of structure that the notion of an increasing or decreasing, strictly or not, function belongs to, since those definitions use and only use, the order. 2. The second type of structure is that as a _valued field_ . This means that the real line has a notion of the "size" of its elements, called a norm or valuation, assigning its members sizes from some suitable size set. Here the size set is itself, and the norming function is absolute value. This gives it a _metrical_ or _distance_ structure that allows us to talk of proximity and separation of points and also, moreover, lets us consider its self-product - the plane - as a faithful representation of Euclidean geometry. It is to this structure that the notion of differentiability belongs. This, then, is the resolution of the seeming paradox: The notion of strict increase belongs to the _ordered field_ aspect of the real number line's structure, and thus it is oblivious to the fact the graph is "instantaneously" stationary at x = 0, something that only arises from the _differentiability_ structure that is related to the _valued field_ aspect. 
In particular, the order structure is unchanged by pulling or stretching the real line elastically in various ways, and you can imagine stretching the plane in the y-direction so as to smooth that little crook in the graph out, but without distorting the x-axis or any line parallel to it. Such a stretch doesn't change order, but does change metrical structure, thus it shows to which type this property really belongs.
x^3 is increasing on (-infinity, 0) union (0, infinity). To claim a function is increasing at a single point is dubious: the definition of increasing depends on relating a function to a non-singleton interval. Moreover, x^3 has a horizontal tangent line at x=0. x^3 is increasing on, say, [0,2]. Asserting x^3 is "increasing at 0" is tantamount to claiming 0^3 - 0^3 > 0 on the interval [0,0]. I guess if you define increasing as f(b) >= f(a) when a < b...
Interesting, since you've also proven that cubing preserves order over the entire real line; that is, a < b implies a^3 < b^3 for all a and b in R. If you think about the possibilities "a less than b" can take on the real line, the result will seem trivial.
You're noticing an important fact. If f(x) is an _increasing function,_ then whenever a < b, it must be the case that f(a) < f(b). This is the definition of a (strictly) increasing function. But the implication is that applying an increasing function to both sides of an inequality actually preserves the inequality. Similarly, you can show that applying a _decreasing_ function to an inequality _reverses_ the inequality, just based on the definition of a decreasing function. You're used to adding any number to an inequality preserving the inequality, and that fits in this scheme because adding the number c is the same thing as applying the function f(x) = x+c, which is increasing. Similarly, multiplying by a positive number c, the function f(x) = cx is an increasing function. But multiplying by a negative number c, the function f(x) = cx is a decreasing function.
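A tiny illustration of that scheme (the function names are mine): applying an increasing function to both sides of an inequality preserves it, while a decreasing function reverses it.

```python
inc = lambda x: x + 5        # increasing: preserves inequalities (adding a constant)
dec = lambda x: -3 * x       # decreasing: reverses inequalities (multiplying by a negative)
cube = lambda x: x ** 3      # increasing on all of R, per the video's proof

a, b = -2.0, 1.5             # a < b
assert inc(a) < inc(b)       # inequality preserved
assert dec(a) > dec(b)       # inequality reversed
assert cube(a) < cube(b)     # preserved: cubing both sides is safe
```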
@MuffinsAPlenty I never thought about that - that multiplying by a negative flips the status. Another thing to note is composition: composing two increasing functions gives an increasing function, and composing two decreasing functions also gives an increasing function (the reversals cancel).
Wow, this is fascinating. I always felt this function was increasing at 0, since it "looked like" it was increasing, and my intuition is that a continuous function with domain of all real numbers that always increases will have a well-defined inverse. However, I could not figure out any way to prove it. By simply assuming a > b you proved that f(a) > f(b) when you broke it down. This was cool.
It is weird: the curve is parallel with the x-axis at 0, but it is increasing because any value to the left is smaller and any value to the right is bigger. It probably makes sense, but it is almost as weird as trying to visualize a black hole.
Many people doubt whether it is strictly increasing or not. Let me be clear: it is strictly increasing. If f'(x) > 0 [with f'(x) = 0 only at discrete points, not on an interval], then the function is strictly increasing.
I disagree with your definition of strictly increasing. What about the real-valued function f(x) = x^(1/3), where we take the real-valued cube root for each input x. In my estimation, this function should be considered strictly increasing on the entire real line; however, it does not satisfy your definition of strictly increasing.
The condition mentioned in the brackets only needs to be checked where f'(x) = 0. If f'(x) = 0 only at discrete points, then the function is strictly increasing; if f'(x) = 0 on an interval, then the function is increasing (but not strictly).
This does not address my concern. The function I mentioned has no instances in which f'(x) = 0. f'(x) > 0 for all x≠0, but f'(0) does not exist. Hence, by your definition, f(x) = x^(1/3) is not a strictly increasing function. However, if we go by the definition that for all a < b, f(a) < f(b), then f(x) = x^(1/3) is a strictly increasing function. You either have to adjust your definition to account for situations where the derivative does not exist, or you must conclude that this function is not strictly increasing, despite satisfying the property that a < b implies f(a) < f(b).
Unfortunately it is not clear your method extends in an easy way to show any odd positive power of x is a strictly increasing function. This can be achieved by a different method: to prove a^(2n+1) > b^(2n+1) if a > b, consider three cases: 1. a > b >= 0; 2. a > 0 > b; 3. 0 >= a > b. Case 1 can be proved for all positive n by factorising a^n - b^n as you did, or by induction: a^(n+1) = a·a^n > b·a^n >= b·b^n = b^(n+1). For case 2: a^(2n+1) > 0 > b^(2n+1). Case 3 follows from case 1 by considering -b > -a >= 0. (Of course this is just a sketch of the proof.)
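A quick randomized spot-check of that statement (a sketch with illustrative ranges, not a proof): a > b implies a^(2n+1) > b^(2n+1), across all three sign cases.

```python
import random

random.seed(1)
for _ in range(5_000):
    a = random.uniform(-5, 5)
    b = a - random.uniform(1e-6, 5)   # guarantees a > b
    n = random.randrange(0, 5)        # odd exponent 2n + 1 in {1, 3, 5, 7, 9}
    # covers a > b >= 0, a > 0 > b, and 0 >= a > b as the draws vary
    assert a ** (2 * n + 1) > b ** (2 * n + 1)
```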
Considering point 2, it is not strictly always true. Consider the function tan(x), whose derivative sec^2(x) is always either positive (nonzero) or undefined. Where it is undefined, the derivative tends to positive infinity from both sides, so in that sense it is still greater than 0. Yet tan(x) drops in value across each asymptote. So technically, a function whose derivative is always greater than zero (wherever it exists) can still decrease in value :)
That's more or less the idea, but a rigorous definition would be: let f be a function and x0 an interior point of its domain; then f is increasing at x0 iff there exists a certain epsilon (eps) such that f(x0) >= f(x) if x belongs to (x0 - eps, x0), and f(x) >= f(x0) if x belongs to (x0, x0 + eps). With this definition you can prove that the second statement in the video is true, and that the first one is also true, but only if you state f'(x) >= 0 at every point. Now, the definition for a function to be increasing on a certain interval I is that for any y > x in I, f(y) >= f(x). And you can also prove, using this definition, that both statements in the video hold if you change f'(x) > 0 to f'(x) >= 0 in the first one.
Ivan Lusenko A function is said to be increasing on I if for all (x, y) in I, x > y => f(x) >= f(y). Therefore, if you want to be rigorous, you should say that it doesn't mean anything for a function to be increasing on {x_0}. You could say that f is increasing on {x_0} iff f'(x_0) >= 0, but that is not a very good definition because it doesn't include functions that aren't differentiable, whereas the first definition works for every function. Saying that f is increasing at x_0 iff f is increasing on an interval that contains x_0 is better, but just saying that f is increasing on that interval gives more information (so it is not really a useful definition...)
I think it's because there is no well-defined way to test for it with a single point: you need an interval, however small. As long as the function is increasing on every such interval around the point, the condition is satisfied; a single point, like x = 0 here, is infinitely small on its own, so you can't tell from it alone whether the function is increasing.
There is a critical point, but it is a red herring to finding the extreme points, which is usually what we are interested in finding. It ends up being inconclusive because of the second derivative test equaling zero. Further information from looking at the graph tells us that it is not an extreme point.
You missed a step. For arbitrary a, a^2 >= 0. But it's only 0 for a = 0, so if a > b, only one of a^2 and b^2 can be 0, so the sum is strictly positive.
Being negative/positive is different from increasing/decreasing. Take f(x) = x^3 - 10. f is negative to the left, and to the right of 0, but the function is still increasing.
It actually makes sense, because the derivative comes from a limit calculation; therefore the derivative of x^3 at x=0 is approaching zero and not equal to zero. Edit: When I wrote this I was in high school and didn't know much about limits; now I'm doing a bachelor's degree in Mathematics. Here is an accurate explanation: by definition, a function (of one variable) is increasing at a point x if there is a neighborhood around that point where every point x1 to the right of x in the neighborhood satisfies f(x1) > f(x), and likewise (with the inequality reversed) for points to the left of x in the neighborhood. If a function is increasing at a point and differentiable there, then the derivative is >= 0 at that point. And if the derivative at a point is positive, then the function is increasing there. Therefore, if the derivative is zero at a point x, we can't conclude from that alone whether f is increasing at that point.
The SECANT is approaching zero, the tangent IS zero. The derivative is the tangent, not the secant. When Eddie Woo teaches how to take derivatives from first principles, he says "because I want the tangent, not the secant" every time he writes the limit notation, to remind you of why you need it.
I'd argue that f(x) is NOT increasing at x = 0. It seems "increasing" has to be carefully defined. There's increasing over an interval, which assumes two distinct points and doesn't involve any calculus, and then there's "increasing" at a point, which it seems should be associated with the derivative at that point - which suggests the function is not increasing at x = 0.
If you are defining it that way, and that is the actual definition, then I agree. But it contains the idea of two points. I was thinking it could be defined as something similar to "instantaneous slope" at a single point. Using a definition for positive, negative, or zero slope involving two points and no limits would always result in a positive slope for this function; I thought a function increasing, decreasing, or unchanging at a single point might be defined similarly to the calculus definition of instantaneous slope. Perhaps there could be a distinction between "(interval) increasing" and "instantaneously increasing" at a point.
+teavea10 You can't have "increasing" anything without considering the behaviour of the thing over an interval, however small. And however small you choose the interval around 0, x^3 is increasing in that interval. AT 0 (as in "in that point") the function is not increasing, but then neither it is increasing AT 1, -2, sqrt(pi) or 1 gazillion; the concept of increasing has to involve comparison across two different values of the independent variable.
I think, taken in a vacuum, questions like this are a little hard to grasp due to the nature of derivatives. It's much more useful to use a real-world example. Say a baseball is shot up into the sky and then falls back down under the influence of gravity. Is its position decreasing at the apex of its flight? Eventually yes, since it falls back down, yet the velocity (the time derivative of position) at the apex is zero. So why does it fall? Because there is still acceleration due to gravity at the apex despite the velocity being 0. If gravity were removed at the apex, the ball would stay in place, because the velocity, acceleration, and every nth time derivative of position would all equal zero at that point. So the answer is: if some nth derivative at a point is nonzero, then the function can be increasing/decreasing at that point.
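The baseball picture can be sketched numerically (the constants and function names are illustrative, not from the video): at the apex the velocity h'(t) is zero, but the acceleration h''(t) = -g is not, and the height really does turn around there.

```python
g, v0 = 9.8, 20.0                  # illustrative values: gravity, launch speed

def h(t):   return v0 * t - 0.5 * g * t ** 2   # height
def v(t):   return v0 - g * t                   # velocity, h'(t)
def acc(t): return -g                           # acceleration, h''(t)

t_apex = v0 / g                    # where the velocity crosses zero
assert abs(v(t_apex)) < 1e-12      # velocity is (numerically) 0 at the apex...
assert acc(t_apex) != 0            # ...but the second derivative is not
assert h(t_apex - 0.1) < h(t_apex) # the apex is a genuine maximum:
assert h(t_apex + 0.1) < h(t_apex) # height is lower on both sides
```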
This is a good video, showing clearly why the first statement on the board is false. But the video's title was not so clear: what do you mean by a function increasing at a point? You never defined this (and indeed you didn't need to for the proof), but for clarity as to the meaning of the title I think you should. Perhaps you mean (as some comments have suggested) increasing in some neighbourhood of the point.
@@dlevi67 The curvature (? I'm Spanish) changes when f''(x) changes sign; it doesn't depend that much on the value of the first derivative. That's why f'(x) being 0 at a certain x has almost nothing to do with f''(x) at that x.
+Jorge C. M. it's "concavity" if you are talking of f''(x). Curvature has a specific meaning when you talk of three-dimensional surfaces. As to the rest of your point, I don't understand how it follows from the conversation above it, but we agree.
@@dlevi67 For low slopes, curvature is roughly equal to the second derivative. That's why in Euler Beam Theory for deflection of beams, we equate our expression for curvature of a beam to the second derivative of the elastic curve.
Couldn't you show that since the derivative of x^3, namely 3x^2, is always greater than or equal to zero, there is no point where the graph decreases? You would then have to show that it doesn't have multiple zeros right next to each other, which would mean it is not strictly increasing. To do that: 3x^2 = 0 gives x = 0 as the only such spot, therefore the function is always increasing. I don't know if this works in general, so please point out any mistakes.
In his proof he shows that, over an interval with two distinct end points (sorry, idk the proper terminology), x^3 is always increasing. But the original question was whether or not the function is increasing at one single point, not over an interval (or am I wrong about that?), so doesn't his work solve a different question than whether or not x^3 is increasing at the point (0,0)?
But if you have a < b, f(a) = f(b) and f'(a) = f'(b) = 0, can you then say that f(x) isn't increasing? So basically saying that instead of having a single point that has f'(x) = 0, like what x^3 has, it has 2 (or more) points where f'(x) = 0.
As always, I enjoy and learn from your videos, especially the clever algebra. My students wish to know what "increasing (or not increasing) at x = 0" means when the terms "increasing" and "decreasing" are defined over intervals. Is there a definition or theorem (better yet) which states if a function is increasing on an interval, then the function is increasing at each point in said interval?
At 1:50, wouldn't it be (+1, +∞)? Because your statement starts from any derivative bigger than zero, like 0.5 or 0.8 or 1 and so on... Or could a function on an interval like (-∞, 0) also have a positive derivative?
I'm not sure what exactly you mean, but I'll try. A function can have positive derivatives at negative values. He's just specifying an interval for this given function where, in this case, it's true that it is always increasing. The derivative is almost always greater than 0, the only exception being f'(0).
By the definition of derivative at a point, if f'(b)=0, then f is not increasing nor decreasing at b. And if the definition of increasing function is used (as in video), that states the monotonicity of an interval not of a single point. So it can't be applied. f(x)=x^3 is (strictly) increasing in (-∞, ∞), but f(0) can't be considered as "increasing" since its "rate of change" (aka derivative) is 0.
How is the first statement not correct? the derivative is the limit of the rate between the difference of the images of two points and the difference of those points as the points are infinitely close to each other. f'(b) = lim (f(x)-f(b))/(x-b) as x->b. If x-b > 0 and f(x)-f(b) > 0 then f'(b) > 0, because +/+ = +. But we have that f'(0)=0, so it doesn't meet the condition to be increasing. Re: "A function is increasing at a point if there exists some open interval containing that point on which the function is increasing." This is not a convention. This is yours. Again: Checking if a function increases on some interval needs to compare pairs of points. The closest approach in applying that concept to a single point is the derivative. So, either you stand by what the derivative says (in this case, that f is not increasing at 0) or acknowledge that monotonicity of a single point is an absurd characteristic not fit to be checked the same way monotonicity of an interval is.
@chengyuryc46: Monotonicity ("increasingness" and "decreasingness") isn't a property of points. It's a property of functions. A function being increasing doesn't mean that all its points are increasing, the same way a matrix having rank=3 doesn't mean every coefficient of that matrix has rank=3.
+Eric Ester the statement "A function being increasing doesn't mean that all its points are increasing" is either a tautology (any function is not increasing at any point - it either has a single value there or it's not defined), or it doesn't make any sense.
Exactly. Functions at a point have a value. A value that doesn't change. So that point neither increases nor decreases. And that's why I'm saying that monotonicity of single points is an absurd concept.
Awesome video, congrats. This is the exact moment when I beg you to do a video about the differentiability of x^(1/3) at x=0. What does it mean that a function is differentiable? That it has a tangent line? Or that you can calculate the tangent line? Is this function differentiable at x=0? In fact the line x=0 "is" the tangent line there, but is the function differentiable?
I don't buy it. If it is increasing, then the angle theta that the tangent at this point makes with the horizontal axis has to be positive; in other words, tan^-1(dy/dx) has to be positive. But tan^-1(3x^2) is clearly zero when we plug in zero.
It's just a guess: if the derivative f' > 0 over an interval I, then the function f is increasing. Maybe this means that if the second derivative f'' is increasing, its function f is also increasing - but I have no proof. For example f(x) = x³ => f''(x) = 6x, and g(x) = 6x is always increasing because g'(x) = 6 > 0 over the interval I.
Do you need to show that ((a+b)² + a² + b²)/2 ≠ 0? It looks like you can only conclude that it's ≥ 0, since it's a sum of squares. I know that it can't be 0 since a ≠ b, but won't you need to show that somehow?
If you wanted to spell out every detail, yes you would need to show that. But as you mention, since a≠b, it's impossible for that expression to equal 0. So it's not a big deal to skip that.
@9:13 you seem to say that a sum of real squares is positive; this is only true if at least one of the reals being squared isn't 0. Since a isn't equal to b, at least one of a or b isn't 0.
I don't know if this is correct, but the derivative of x^3 is 3x^2, which is always non-negative (it only equals 0 when x = 0). A function has a local minimum when its derivative equals 0 at some value x1, is negative at values slightly smaller than x1, and positive at values slightly bigger than x1 (sorry, I don't know how to say that better in English :D). A function has a local maximum when its derivative equals 0 at x1, is positive for slightly smaller x, and negative for slightly bigger x. 3x^2 equals 0 when x = 0, its values for x smaller than 0 are positive, and for x bigger than 0 they are also positive, so it shouldn't have a local maximum or a local minimum there, right? Correct me if I'm wrong, and sorry for my English (hope you've managed to read this :D)
The point at x=0 on the equation y=x^3 is called a stationary inflection point. The term "stationary" in this context means that the derivative is zero at this point, as if the curve described position as a function of time. This would be like a "stop sign", where you stop momentarily and then resume driving forward. The term "inflection point" means that the second derivative is zero as it switches from negative to positive (or vice versa). Other versions of the cubic equation with a positive x^3 term, that have other x-terms, will have falling inflection points. Make the x^3 term negative, with other x-terms, and it will have rising inflection points. It is neither a local minimum, nor a local maximum, though it may seem to be one, when your method for finding the extreme points is to look for locations where the derivative is zero. Indeed, the derivative is zero, but it isn't an extreme point, because the second derivative is also zero, and it is inconclusive as an extreme point. It turns out that it is not an extreme point, because the function is increasing on both sides of it.
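That classification can be sketched with a simple derivative-sign check (the helper names are mine): f'(x) = 3x² is positive on both sides of 0, so by the first-derivative test x = 0 is neither a maximum nor a minimum, and the second-derivative test is indeed inconclusive there.

```python
def fprime(x):  return 3 * x ** 2   # derivative of x**3
def fsecond(x): return 6 * x        # second derivative of x**3

eps = 1e-3
assert fprime(0.0) == 0.0           # x = 0 is a critical point
assert fsecond(0.0) == 0.0          # second-derivative test: inconclusive
# first-derivative test: the same (positive) sign on both sides -> no extremum,
# a stationary inflection point instead
assert fprime(-eps) > 0 and fprime(eps) > 0
```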
English isn't his first language, and he probably isn't accustomed to calling it "x cubed". Saying "x to the third power" is probably the first way that comes to mind for him. It means the same thing, and he likely would understand "x cubed", it just isn't the first thing that comes to his mind. Another similar example I notice for native Chinese speakers when speaking English, is defaulting to tell time by reciting the numbers exactly as they would appear on the clock, rather than using colloquial expressions. Such as saying "twelve O'clock" instead of "noon" or "midnight", and "three forty five" instead of "quarter 'till four". It is understandable why one would tell time this way, because it requires a new mental exercise to produce the words for "quarter 'till four" from "3:45".
I don't know the right English name for it (it's an inflection point), but we call it a "bowing point": for x < 0 the curve shows its hollow side downward (decreasing slope, i.e. decreasing f'(x), as x increases), and for x > 0 the curve shows its hollow side upward (increasing slope, i.e. increasing df/dx, as x increases). The point of it is also that df/dx *doesn't change sign* at df/dx = 0; instead, df/dx itself attains a minimum there and 'changes direction', from decreasing to increasing as x increases.
No, it is not an extreme point (as in, not a local maximum or local minimum). Points like this are a red-herring when you intend to find extreme points, because the derivative of the function is indeed equal to zero at the point x=0, which would lead you to believe that it is an extreme point. However, because the second derivative also equals zero, you cannot conclude it to either be a minimum or a maximum. You take the first derivative to find the critical points where it equals zero, and you use the second derivative to identify them as local maxima (negative curvature at a critical point) or local minima (positive curvature at a critical point). This particular point is inconclusive from the second derivative test alone.
+HelloItsMe First let us get a definition clear: what does "f is increasing on I" mean? Even writers of university books don't universally agree on it, but the most common definition is the following (I being an interval of real values): A function f:I→R is called increasing (on I) if the following holds: for every x,y ϵ I, if x < y then f(x) ≤ f(y). Note the "≤" sign here, instead of the "<" sign; with "<" the function is called *strictly* increasing.
The only thing I will say is that a function can only increase or decrease on an interval. I don't think it's technically correct to say a function is increasing at x = a.
That's true for increasing functions but not for strictly increasing functions. For example f(x) = 0. g(x) = x³, however, is a strictly increasing function even though g'(x) = 0 has a solution.
@@SciDiFuoco13 What is the difference between strictly increasing and increasing? I'm curious about this because my textbook doesn't have a clear definition.
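For what it's worth, the usual textbook definitions can be sketched and checked numerically in a few lines of Python (my own illustration; the helper name `is_increasing` is invented): non-strict "increasing" allows flat stretches, strict does not.

```python
# Sketch (my own, with an invented helper): the common textbook definitions,
# checked pairwise on a sorted grid of sample points. Comparing only adjacent
# points suffices because monotonicity is transitive.
def is_increasing(f, xs, strict=False):
    xs = sorted(xs)
    for a, b in zip(xs, xs[1:]):
        ok = f(a) < f(b) if strict else f(a) <= f(b)
        if not ok:
            return False
    return True

xs = [i / 10 for i in range(-30, 31)]  # grid from -3 to 3, including 0

print(is_increasing(lambda x: 0, xs))                  # constant: increasing (non-strict)
print(is_increasing(lambda x: 0, xs, strict=True))     # ...but not strictly
print(is_increasing(lambda x: x**3, xs, strict=True))  # x^3: strictly increasing
```

A constant function passes the non-strict test but fails the strict one, while x^3 passes both, which is exactly the distinction being asked about.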
The way I would have done it is: given a > b,
a^3 > a^2·b (multiply both sides by a^2),
a^2·b > a·b^2 (multiply both sides by ab),
a·b^2 > b^3 (multiply both sides by b^2),
so a^3 > b^3. (This chain is only valid when a and b are nonzero and have the same sign, since multiplying by ab flips the inequality when ab < 0; in the remaining cases a^3 > b^3 follows directly from the signs.)
binnacle true-north Exactly. If f'(x) > 0 (and f'(x) = 0 only at discrete points, not on an interval), then the function is strictly increasing; if f'(x) = 0 on an interval, then the function is increasing (but not strictly).
@binnacle true-north Weird that you see the adverb "strictly" as likely to cause confusion, because there are different conventions for "increasing", but not for "strictly increasing".
What should we call a function if it is increasing everywhere but its slope is zero at some discrete points? Strictly increasing, or just increasing?
According to that logic, f(x) = -x^2 would be increasing at x=0, because your test passes on the whole interval (-inf, 0]. On the other hand, the opposite test would tell you it's decreasing at x=0 if you consider the interval [0, inf). And if you look at both together, neither test passes at x=0. For the concept of "increasing/decreasing/neither" at one single point of a function to even be fathomable, one necessarily needs to use limits, i.e. derivatives, which means x^3 does not increase at x=0.
Don't you need an interval for the word "increasing" to be meaningful? e.g. the 2 statements at the top say "(...) f is increasing on I (...)", which makes sense. However, then we're talking about a function increasing at a single point.
Is it even defined to say a function is increasing over a single value? shouldn't we define that? It is, however, well defined to say a function is increasing over an interval
There are two "is" in the title; the second one shouldn't be there, I guess. And since you don't distinguish between "increasing" and "strictly increasing", I found the video kind of confusing. All the answers here depend on the definitions of the concepts, but we don't know which definition you are using. :( Still a like, though.
The flaw in the argument is pretty obvious: x = 0 is not an interval, so your argument collapses. The bounds A and B in your "proof" are actually the same number, i.e. 0. A perfect square isn't always greater than zero; a perfect square is greater than OR EQUAL to zero (in this case, it's zero).
Oh, calculus and seeming paradoxes... What does "increasing at a point" really mean? How do you even say "increasing" if you are not comparing two different numbers? And yet, that is exactly what the derivative does. Limits are weird.

If you graph the derivative, you can see that 3x^2 has a minimum at x=0, where its value is also 0. The derivative equals 0 only at this one point; every other value of x yields a positive value. So what does this mean? The original function is clearly never decreasing, and the derivative is 0 at only one point. Does that mean the function can still be considered increasing? We have to check whether any two inputs give the same function value; in this case, 0. With the proof given in this video, if you set a = 0 and b = x and take the one-sided limit as x → 0+, then f(b) - f(a) → 0; similarly, setting a = x and b = 0 and taking the limit as x → 0-, f(b) - f(a) → 0. The important thing to note is that although these limits approach 0, the only point where the value truly is 0 is x = 0 itself. Even the tiniest nudge away from 0 causes f(x) to differ from 0. There is no run of consecutive points on the graph with the same value, so the function is always increasing.
But isn't the definition of the derivative the subtraction of one value from a nearby value (over a positive value), in the limit as that difference goes to zero?
No, the question asked was "is the function increasing AT x=0?", NOT "is the function increasing over a given interval of x?". You changed the question, which forced a change in the answer. The function definitely is NOT increasing AT x=0. It is increasing on the two intervals (-infinity, 0) and (0, +infinity), but not AT x=0.
I think this definition would imply that on no point can a function be said to be increasing. After all, for a function to increase there would have to be more than one point. But that's clearly not what's meant when we normally talk about increases.
If ALL you knew about a function f(x) was that f'(0) = 0, would you say the function is increasing? NO! Would you say the function is decreasing? NO! The absolute best you could say is "I DON'T KNOW". Better yet: if f'(x1) = 0, you can absolutely say that at the point x1 the function is unchanging. All the math tricks known to mankind don't change that.
WisdomVendor1 right, but we know more about f(x)=x³ than its derivative at 0. In maths, increasing at a point just means increasing in an infinitesimal interval around that point. Think of it as a shorthand for those in the field.
At 4:00 you also prove that increasing microphone height results in decreasing volume.
Actually it's increasing mic distance from sound source that results in decreasing volume. ;-)
Fred
@@ffggddss Actually it's increasing mic distance from sound source and the increasing angle between the mic and the sound source that results in decreasing volume.
@@RanEncounter Yeah, it looks pretty clear at that point, 4 min, that that mic is directional, at least to some degree (pun unintended!). So the angle (between mic-source line and mic's primary direction) is important.
Fred
This is so great! By reverting to the definition of a property, and by boring - but SO useful - algebra, the proposition just unfolds! No fancy stuff (“0/0”), no limits, no discussion about convergence... This really nails it! 😎
Very nice to point out that first derivative’s sign and function monotony is not an equivalence relation! Congrats!
Well the "first derivative’s sign" and "function monotony" are indeed related but not equivalent. Be careful with the term "equivalence relation" though, that is something entirely different, and probably not what you meant ;-)
Frank Steffahn Obviously you did not understand what I meant. Reread as many times as required.
densch123 That’s true, but irrelevant to the meaning of my comment.
Stalker Technix - Would you mind elaborating a bit then? Because I did reread your comment quite a few times and still don’t see what you could call an (or rather *not* being an) equivalence relation.
Although it might be counterintuitive at first, since the derivative is 0.
However, consider two different points on the curve, one at x=0, the other extremely close to it. If it's on the right it will be ever so slightly bigger, if it's on the left it will be ever so slightly smaller.
Therefore there's always an increase, if you 'read' it from left to right.
mrBorkD this reminds me of the paradoxical nature of "instant rate of change". In the specific instant, we'd predict it's doing nothing, but it's increasing. Maybe this is why science can never be certain
Daniel McFadden Nope that's got nothing to do with science being uncertain. Uncertainty in science (primarily) comes from our inability to measure accurately enough -- either because there's too many variables involved (think: social sciences, weather patterns, etc) or because the scale is beyond our abilities (think astrophysics) or because of fundamental limitations of the universe (quantum mechanics.)
We do pretty well coming up with theories given our measurement limitations (and if we're willing to accept statistical accuracy rather than full predictive accuracy, we do remarkably well in most areas of science.) But there will always be things that are beyond our ability to measure with complete certainly, and therefore there will always be some small deviation between theory and the experiment. But hey, that also means there will always be some new frontier for science to poke at in our eternal quest to understand our universe (and ourselves!) better.
altrag and Aapeli Syren, thanks for the responses. I completely understand.
The definition of a derivative is as follows: it’s the limit, as d tends to 0, of the expression (F(x + d) - F(x))/d. And a limit in and of itself is rather counterintuitive; I would recommend 3blue1brown’s essence of calculus series (another really great TH-cam channel about mathematics much like BPRP).
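To make the limit less abstract, here is a small sketch (mine, not from the comment; the function names are invented) tabulating the difference quotient (f(x + d) - f(x))/d for f(x) = x^3 at x = 0. Algebraically the quotient is ((0 + d)^3 - 0^3)/d = d^2, so it shrinks to 0 even though the function is strictly increasing.

```python
# Sketch: the difference quotient from the limit definition of the derivative,
# evaluated for f(x) = x^3 at x = 0 with shrinking step d.
def difference_quotient(f, x, d):
    return (f(x + d) - f(x)) / d

f = lambda t: t**3
for d in (0.1, 0.01, 0.001, 0.0001):
    # the second column is (up to float rounding) d**2, heading to 0
    print(d, difference_quotient(f, 0, d))
```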
also, the limit for f'(x) at x=0 is positive on both sides
Masterfully done! This is one of those questions that begs to be dealt with in first-year calculus courses, but often gets omitted.
The question in the beginning also illustrates a common fallacy in logic.
From: "If A then B"
you cannot conclude: "we observe B, therefore A".
For that to hold, you would have needed to show that A is the only way to arrive at B.
Example: "If it rains, the streets are wet"
Nice. But the other way round, "The streets are wet, therefore it rained", does not hold. There are other reasons the street may be wet, e.g. the street-cleaning car has passed, or somebody dropped a container of water. So unless you can show that the only way the streets ever get wet is by rain, you cannot turn the argument around.
PS: this is also the fallacy Sir Arthur Conan Doyle fell into when he let his Sherlock Holmes say "How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?" The problem, of course, is: how does one know that he has eliminated ALL the impossibilities? Eliminating all the impossibilities you can think of is not enough.
How can you now prove that in general, if a > b, then a^n > b^n when n is odd positive integer.
Because of the Chuck Norris theorem, x^n is increasing on (-∞;+∞)
@@alexandreman8601, we would need a proof of it before being able to believe that both he and you are right.
what we mean, just a name isn't enough.
Claim: if a > b, then (a + b)^2 + a^2 + b^2 > 0.
If either a OR b is 0, then either 0 > b or a > 0, in which case b^2 > 0 or a^2 > 0.
Let x = a + b; then x^2 >= 0, so that, if a^2 > 0:
x^2 + a^2 + b^2 >= 0 + a^2 + b^2 = a^2 + b^2 > 0 + b^2 = b^2 >= 0,
and since the middle step is strict, the whole sum is > 0.
If neither a NOR b is 0, then a^2 > 0 AND b^2 > 0, in which case the sum of the squares is also positive.
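A quick numeric sanity check of the key step (my own sketch, not part of the proof above; the helper name is invented): the factor (a+b)^2 + a^2 + b^2 is positive whenever a > b, which is what drives a^3 > b^3 via the factorization a^3 - b^3 = (a - b)·((a+b)^2 + a^2 + b^2)/2.

```python
import random

# (a+b)^2 + a^2 + b^2 equals 2(a^2 + ab + b^2), the second factor of a^3 - b^3
# up to the constant 2, so the whole argument rests on it being positive.
def key_term(a, b):
    return (a + b)**2 + a**2 + b**2

random.seed(0)
for _ in range(10_000):
    a, b = (random.uniform(-100, 100) for _ in range(2))
    if a == b:
        continue
    a, b = max(a, b), min(a, b)  # now a > b
    assert key_term(a, b) > 0
    assert a**3 > b**3
print("key inequality holds on all sampled pairs")
```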
Nice example of proof techniques without introducing anything a high-school student couldn't come up with on their own.
The point is that F(x) is strictly increasing on (a,b) if and only if F’(x) > 0 almost everywhere, with the only exception being a set of points of measure 0 (this includes isolated points) where F’(x) = 0. For F(x) = x^3 that point is x = 0.
You are correct !
The Minkowski question mark function takes this a step further: it is a continuous strictly increasing function whose derivative is zero at all rational numbers. However, the question mark function does not have a finite derivative everywhere like x^3 does.
"I know my picture is not the best but I tried" God dammit, it gets me every time
: )
Could we make the first statement true by saying "If f(x) is increasing on I, f'(x) >= 0 for all x on I" ?
Yes exactly
No; for example, y = 0 is increasing nowhere, but its derivative is certainly >= 0.
@@hyperpsych6483 What you said doesn't make sense. Using the second statement, the derivative of y = 0 is 0 for all x. 0 is not > 0; therefore y = 0 is not increasing. As it's not increasing by the second statement, it is invalid to use y =0 as a function for verifying the modified first statement.
@@twelfthdoc haha three whole years, man. I was saying that for f(x) = 0 we have f'(x) = 0, and 0 >= 0 is true (notice the greater-than OR equal sign), but f(x) is not increasing anywhere. It is not necessarily possible to tell whether a function is increasing at a point when the derivative at that point is 0.
No, but f(x) is strictly increasing if f'(x) > 0, or if f'(x) = 0 only at finitely many points.
That was a really nice video :). What I was taught is that it does not matter if an increasing function has points where f'(x) = 0, so long as those points do NOT form an interval.
That being said, I prefer watching proofs like these rather than just accepting things and moving on.
Just curious, what do you mean by domain?
will newman Oh sorry if that is not the name, english is not my mother language. I think set is the correct word.
+Kirby The points with f'(x) = 0 can form a set; in fact, almost by definition of "set", they do: the set of the points where f'(x) = 0. What they cannot do is form an interval, i.e. the condition f'(x) = 0 cannot hold at two or more "consecutive" points (if you prefer, you can read this more formally here: en.wikipedia.org/wiki/Interval_(mathematics)).
dlevi67 THANK YOU man, that was the word i was looking for, not set. I really need to study the english names.
You are most welcome! And your English is pretty good - you'll get the technical vocabulary as you go on.
Here I think the answer depends on how you define "increasing." With the definition I had in my head, I think x^3 is neither increasing or decreasing at x=0.
It depends on how you understand "increasing". Given that in calculus we are assuming the existence of instantaneous rates of change the argument that over an interval no matter how small there is an increase is unconvincing. The function can still be instantaneously constant at some point.
Another way of proving this increasing property of f(x) = x^3 is to notice that its derivative, 3x^2, is positive everywhere except at the single point x = 0, and "a point is not the same as an interval", so that point doesn't matter. You need two different values of x to get an interval, and you need an interval between two different points to define a direction (increasing, decreasing or constant). Since 3x^2 = 0 only at x = 0, the derivative is positive on every non-degenerate interval, and therefore the curve is strictly increasing everywhere.
My favorite counterexample is f(x) = 0 :D AWESOME VIDEO
f(x)=0 isn't increasing though. a>b does not imply f(a)>f(b), in fact, f(a) = f(b) = 0
@densch123 yeah, I know
Losan33 Oops. You must've replied just as I was deleting my comment after I realised my mistake!
The bending radius of x^3 is infinite at 0 (the curve is flat there). What I found strange is that the bending radius of x^2 at 0 is 1/2, but x^r with r > 2 (for example x^2.0001) has an infinite bending radius at 0. What a brutal transition...
What's the bending radius?
Sorry. Radius of curvature: r(x) = (1 + y'^2)^(3/2) / y''. See en.wikipedia.org/wiki/Radius_of_curvature. Sorry, I'm French.
It's okay. Didn't know about this concept anyway. :p Thanks for the link
Welcome
@@IronLotus15 A common misunderstanding about this concept is that people think it is an "oscillating" circle. It is really called an osculating circle; it defines the radius of curvature and, likewise, its reciprocal, called the curvature.
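The radius-of-curvature claim in this thread can be spot-checked with plain arithmetic. This is a minimal sketch of the quoted formula with hand-computed derivatives plugged in (the function name `radius` is my own, not anyone's library code):

```python
# Radius of curvature R = (1 + y'^2)^(3/2) / |y''|, as quoted above.
def radius(yp, ypp):
    return (1 + yp**2)**1.5 / abs(ypp)

# y = x^2: y' = 2x, y'' = 2, so at x = 0 the radius is exactly 1/2.
print(radius(2 * 0.0, 2.0))  # 0.5

# y = x^3: y' = 3x^2, y'' = 6x; as x -> 0 the radius grows without bound,
# i.e. the curve is "flat" at the origin.
for t in (0.1, 0.01, 0.001):
    print(t, radius(3 * t**2, 6 * t))
```

At x = 0 the parabola's osculating circle has radius exactly 1/2, while for x^3 the radius blows up as x approaches 0, matching the "flat at the origin" observation.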
"Increasing" should really only be applied to intervals, not points.
Because of the convenience of using f'(x) > 0 for checking where a function is increasing, it makes sense to define a notion for what it means for a function to be "increasing at a point." To make it consistent with what it means for a function to be increasing, we would define it as follows:
A function f(x) is increasing at x = c if there is some interval (a,b) containing c on which f is increasing.
This is similar to how the derivative represents the "instantaneous rate of change" of a function. It doesn't really make sense to have a rate of change which is instantaneous for the same reason that it doesn't make sense to talk about a function increasing at a point. Nevertheless, it is useful for us, so we do it anyway.
The slope at a point of a circle is the tangent line!
Can you say a function is increasing on a POINT instead of increasing on an INTERVAL?
That slope at a point is the tangent line! It's zero! I like your comment!
Two small questions:
1. Isn't "increasing" defined as greater OR equal?
2. Why not just show f(x+1) >= f(x)?
Your first point is right.
Re your second point, f(x) = sin(2pi x) satisfies f(x+1) >= f(x) for all x but is not increasing for all x.
@@martinepstein9826 Yeah, but the question specified x³, which is always increasing, and it's only in that case that the argument works this way.
@@DemonFranki If the argument doesn't work for all functions then how do you know it works for f(x) = x^3?
@@martinepstein9826 the Definition tells you
@@DemonFranki *Which* definitions tell you if (x+1)^3 ≥ x^3 for all x then x^3 is increasing?
You have to clarify what is meant by "increasing."
f(x) = x³ is increasing at x=0 in the sense that for any x₂ > x₁ , it is true that f(x₂) > f(x₁)
But in an "instantaneous" sense, f is not increasing at x=0.
I think it would, however, be hard to support the instantaneous sense of "increasing" as a viable definition.
So that, yes, f *is* increasing everywhere.
Fred
The deeper theoretical reasons for what is going on here is that there are two basic types of structure that the real number line (and by suitable treatment, its self-product, the plane) possesses:
1. The first type of structure is that as an _ordered field_ . This means the real line both supports all the arithmetic we all know and love as best as possible (that's the "field" bit), _and_ it also has the property that you can compare its members as greater or less (though I prefer as "preceding" vs. "succeeding") (that's the "ordered" bit). This is why we are able to call it a "real number _line_ " and not just a real number field, or a real number set. This is the kind of structure that the notion of an increasing or decreasing, strictly or not, function belongs to, since those definitions use and only use, the order.
2. The second type of structure is that as a _valued field_ . This means that the real line has a notion of the "size" of its elements, called a norm or valuation, assigning its members sizes from some suitable size set. Here the size set is itself, and the norming function is absolute value. This gives it a _metrical_ or _distance_ structure that allows us to talk of proximity and separation of points and also, moreover, lets us consider its self-product - the plane - as a faithful representation of Euclidean geometry. It is to this structure that the notion of differentiability belongs.
This, then, is the resolution of the seeming paradox: The notion of strict increase belongs to the _ordered field_ aspect of the real number line's structure, and thus it is oblivious to the fact the graph is "instantaneously" stationary at x = 0, something that only arises from the _differentiability_ structure that is related to the _valued field_ aspect. In particular, the order structure is unchanged by pulling or stretching the real line elastically in various ways, and you can imagine stretching the plane in the y-direction so as to smooth that little crook in the graph out, but without distorting the x-axis or any line parallel to it. Such a stretch doesn't change order, but does change metrical structure, thus it shows to which type this property really belongs.
"How to prove a function is increasing?" the SEO is strong.
X^3 is increasing on (-infinity,0) Union (0, infinity). To claim a function is increasing at a single point is dubious.
The definition of increasing depends on relating a function to a non-singleton interval.
Moreover x^3 has horizontal tangent line at x=0. x^3 is increasing on, say, [0,2].
Asserting x^3 is “increasing at 0” is tantamount to claiming 0^3 - 0^3 > 0 on interval [0,0].
I guess if you define increasing as f(b) >= f(a) when a < b...
Interesting, since you've also proven that cubing preserves order over the entire real line; that is, a < b implies a^3 < b^3 for all a and b in R. If you think about the possibilities "a less than b" can take on the real line, the result will seem trivial.
You're noticing an important fact.
If f(x) is an _increasing function,_ then whenever a < b, it must be the case that f(a) < f(b). This is the definition of a (strictly) increasing function. But the implication is that applying an increasing function to both sides of an inequality actually preserves the inequality.
Similarly, you can show that applying a _decreasing_ function to an inequality _reverses_ the inequality, just based on the definition of a decreasing function.
You're used to adding any number to an inequality preserving the inequality, and that fits in this scheme because adding the number c is the same thing as applying the function f(x) = x+c, which is increasing.
Similarly, multiplying by a positive number c, the function f(x) = cx is an increasing function. But multiplying by a negative number c, the function f(x) = cx is a decreasing function.
@MuffinsAPlenty I never though about that, that multiplying by a negative flips the status. another thing to note is the composition of increasing functions, and that of decreasing functions (the result is of the same status).
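The preserve/reverse behaviour described above is easy to check on random samples. A small sketch (my own, with invented names):

```python
import random

# Applying an increasing function to both sides of an inequality preserves it;
# applying a decreasing function reverses it.
inc = lambda x: x + 5      # adding a constant: increasing
cube = lambda x: x**3      # cubing: increasing on all of R
dec = lambda x: -2 * x     # multiplying by a negative: decreasing

random.seed(1)
for _ in range(1000):
    a, b = sorted([random.uniform(-50, 50) for _ in range(2)])
    if a == b:
        continue
    # now a < b
    assert inc(a) < inc(b)    # preserved
    assert cube(a) < cube(b)  # preserved
    assert dec(a) > dec(b)    # reversed
print("preserved by increasing functions, reversed by decreasing ones")
```

Composition works the same way: increasing after increasing stays increasing, and decreasing after decreasing is increasing again, just like multiplying two negative numbers.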
Wow, this is fascinating. I always felt this function was increasing at 0, since it "looked like" it was increasing, and my intuition is that a continuous function with domain of all real numbers that always increases will have a well-defined inverse. However, I could not figure out any way to prove it. By simply assuming a > b, you proved that f(a) > f(b) when you broke it down. This was cool.
It is weird: the graph is parallel to the x-axis at 0, but the function is increasing because any value to the left is smaller and any value to the right is higher. It probably makes sense, but it is almost as weird as trying to visualize a black hole.
Many people have doubts about whether it is strictly increasing or not. Let me be clear: it is strictly increasing. If f'(x) > 0 (with f'(x) = 0 only at discrete points, not on an interval), then the function is strictly increasing.
I disagree with your definition of strictly increasing.
What about the real-valued function f(x) = x^(1/3), where we take the real-valued cube root for each input x. In my estimation, this function should be considered strictly increasing on the entire real line; however, it does not satisfy your definition of strictly increasing.
MuffinsAPlenty f(x) = x^(1/3) is strictly increasing. We can see that f'(x) = 1/(3x^(2/3)) > 0 for all real x, so the statement holds true.
The condition mentioned in the brackets should only be checked if f'(x) = 0: if f'(x) = 0 only at discrete points, then the function is strictly increasing; if f'(x) = 0 on an interval, then the function is increasing (but not strictly).
This does not address my concern. The function I mentioned has no instances in which f'(x) = 0. f'(x) > 0 for all x≠0, but f'(0) does not exist. Hence, by your definition, f(x) = x^(1/3) is not a strictly increasing function.
However, if we go by the definition that for all a < b, f(a) < f(b), then f(x) = x^(1/3) is a strictly increasing function.
You either have to adjust your definition to account for situations where the derivative does not exist, or you must conclude that this function is not strictly increasing, despite satisfying the property that a < b implies f(a) < f(b).
MuffinsAPlenty I understood your concern and rewrote the condition.
You used "is" twice in the title, incorrectly. Probably left one in after an edit? I've done that one. A lot!
Unfortunately it is not clear that your method extends in an easy way to show that any odd positive power of x is a strictly increasing function. This can be achieved with a different method: to prove a^(2n+1) > b^(2n+1) when a > b, consider three cases: 1. a > b >= 0, 2. a > 0 > b, 3. 0 >= a > b.
Case 1 can be proved for all positive n by factorising a^n - b^n as you did, or by induction: a^(n+1) = a·a^n > b·a^n >= b·b^n = b^(n+1). For case 2: a^(2n+1) > 0 > b^(2n+1). Case 3 follows from case 1 by considering -b > -a >= 0. (Of course this is just a sketch of the proof.)
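A numeric spot-check of the three-case claim, assuming nothing beyond the statement itself (a sketch, not a proof; the helper name is mine):

```python
import random

# Check that a > b implies a^(2n+1) > b^(2n+1), across the three sign cases:
# 1. a > b >= 0,  2. a > 0 > b,  3. 0 >= a > b.
def odd_power_preserves_order(a, b, n):
    return a**(2 * n + 1) > b**(2 * n + 1)

for a, b in [(3.0, 1.5), (2.0, -4.0), (-1.0, -2.5)]:  # one pair per case
    for n in range(5):
        assert odd_power_preserves_order(a, b, n)

random.seed(2)
for _ in range(1000):
    a, b = sorted([random.uniform(-10, 10) for _ in range(2)], reverse=True)
    if a > b:
        for n in range(4):
            assert odd_power_preserves_order(a, b, n)
print("order preserved by every sampled odd power")
```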
Considering point 2, it is not strictly always true. Consider tan(x), whose derivative sec^2(x) is always either positive (nonzero) or undefined. Where it is undefined, the limit of the derivative approaches positive infinity from both sides, and thus it is still greater than 0 even where undefined. So technically, a function whose derivative is always greater than zero can still decrease in value :) (the catch being that the domain of tan(x) is not a single interval).
How can a function increase at a single point?
That's more or less the idea, but a rigorous definition would be:
Let f be a function and x0 an interior point of its domain; then f is increasing at x0 iff there exists some epsilon (eps) > 0 such that f(x0) >= f(x) if x belongs to (x0 - eps, x0), and f(x) >= f(x0) if x belongs to (x0, x0 + eps).
With this definition you can prove that the second statement of the video is true, and that the first one is also true, but only if you state it as f'(x) >= 0 at every point.
Now, the definition for a function to be increasing on a certain interval I is that for any y > x in I, f(y) >= f(x). And you can also prove, using this definition, that both statements in the video hold if you change f'(x) > 0 to f'(x) >= 0 in the first one.
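That epsilon-based definition can be tested numerically on a grid of sample points. A rough sketch (the fixed eps, the sample count, and the function name are all my own choices, and a finite sample can only falsify, not prove):

```python
# Check f(x) <= f(x0) just left of x0, and f(x0) <= f(x) just right of it,
# on sample points inside (x0 - eps, x0 + eps).
def increasing_at(f, x0, eps=1e-3, samples=100):
    for k in range(1, samples + 1):
        h = eps * k / (samples + 1)   # h ranges over (0, eps)
        if not (f(x0 - h) <= f(x0) <= f(x0 + h)):
            return False
    return True

print(increasing_at(lambda x: x**3, 0.0))    # x^3 is increasing at 0
print(increasing_at(lambda x: -x**2, 0.0))   # -x^2 is not (it has a maximum there)
print(increasing_at(lambda x: -x**2, -1.0))  # but -x^2 is increasing at -1
```

x^3 passes at 0, -x^2 fails at 0, and -x^2 passes at -1, which matches the intent of the definition.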
Ivan Lusenko A function is said to be increasing on I if for all (x, y) in I, x > y => f(x) >= f(y). Therefore, if you want to be rigorous, you should say that it doesn't mean anything for a function to be increasing on {x_0}.
You could say that f is increasing on {x_0} iff f'(x_0) >= 0, but that is not a very good definition because it doesn't include functions that aren't differentiable, whereas the first definition works for every function.
Also, saying that f is increasing at x_0 iff f is increasing on an interval that contains x_0 is better, but just saying that f is increasing on that interval gives more information (so it is not really a useful definition...).
Petta Oni
+Esteban Thanks!
I think it's because there is no defined way to test for it with a single point: you need an interval, even an infinitesimally small one, and as long as the function increases on it, the condition is satisfied. A single point like x = 0 is infinitely small, so from it alone you can't tell whether the function is increasing.
So it's not "increases at a point", it's "increasing at a point".
Is there a critical point at f(x) = x^3 when x = 0?
There is a critical point, but it is a red herring to finding the extreme points, which is usually what we are interested in finding. It ends up being inconclusive because of the second derivative test equaling zero. Further information from looking at the graph tells us that it is not an extreme point.
You missed a step. For arbitrary a, a^2 >= 0. But it's only 0 for a = 0, so if a > b, only one of a^2 and b^2 can be 0, so the sum is strictly positive.
Why don't you just say x^3 is negative if x is negative, 0 if x is 0, and positive if x is positive? Therefore the function is increasing at zero.
Being negative/positive is different from increasing/decreasing. Take f(x) = x^3 - 10. f is negative to the left, and to the right of 0, but the function is still increasing.
It actually makes sense, because the derivative comes from a limit calculation; therefore the derivative of x^3 at x = 0 is approaching zero, not equal to zero.
Edit: When I wrote this I was in high school and didn't know much about limits; now I'm in a bachelor's degree in Mathematics. Here is an accurate explanation:
By definition, a function (of one variable) is increasing at a point x if there is a neighborhood around that point in which every point x1 to the right of x satisfies f(x1) > f(x), and similarly (with the inequality reversed) for points to the left of x in the neighborhood.
If a function is increasing in a point and differentiable there, then the derivative >= 0 at that point.
And if the derivative at a point is positive, then the function is increasing there.
Therefore, if the derivative is zero at a point x, we can't conclude from that alone that f is increasing at that point.
No, the derivative is exactly 0. The derivative is defined as a limit. The limit is the thing being approached, not the thing doing the approaching.
The SECANT is approaching zero, the tangent IS zero. The derivative is the tangent, not the secant. When Eddie Woo teaches how to take derivatives from first principles, he says "because I want the tangent, not the secant" every time he writes the limit notation, to remind you of why you need it.
@@carultch Thanks. I Edited the comment now
I'd argue that f(x) is NOT increasing at x = 0. It seems "increasing" has to be carefully defined: there's increasing over an interval, which assumes two distinct points and doesn't involve any calculus, and then there's "increasing" at a point, which it seems should be associated with the derivative at the point, suggesting the function is not increasing at x = 0.
If you are defining it that way and that is the actual definition, then I agree. But it contains the idea of two points. I was thinking it could be defined as something similar to "instantaneous slope" at a single point. But using a definition for positive slope, negative slope, or 0 slope involving two points, and no limits, would always result in a positive slope for this function. I thought a function increasing or decreasing or unchanging at a single point might be similarly defined as the calculus definition of instantaneous slope. Perhaps there could be a distinction between "(interval) increasing" and "instantaneously increasing" at a point.
Perhaps I should have said instantaneous rate of change which might be the more common phrasing in calculus discussions.
+teavea10 You can't have "increasing" anything without considering the behaviour of the thing over an interval, however small. And however small you choose the interval around 0, x^3 is increasing in that interval.
AT 0 (as in "at that point") the function is not increasing, but then neither is it increasing AT 1, -2, sqrt(pi) or 1 gazillion; the concept of increasing has to involve a comparison across two different values of the independent variable.
I think, taken in a vacuum, questions like this are a little hard to grasp due to the nature of derivatives. It's much more useful to use a real-world example. Say a baseball is shot up in the sky and begins to fall back down under gravity. Is its height about to decrease at the apex of its flight? Obviously yes, since it falls back down. But the velocity (the time derivative of position) at the apex is zero. So why does it fall? Because there is still acceleration due to gravity at the apex, despite the velocity being 0. If gravity were removed at the apex, the ball would stay in place, because the velocity, acceleration, and every higher time derivative of position would all be zero there. So the answer comes down to this: look at the first derivative that is nonzero at the point; its order and sign tell you whether the function is increasing or decreasing there.
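The closing "first nonzero derivative" idea can be made concrete with sympy (my formulation of the comment's claim, assuming sympy is available): find the lowest-order derivative of x^3 that does not vanish at 0.

```python
import sympy as sp

# For x^3 at the critical point x = 0, walk up the derivative orders until one
# is nonzero at the point.
x = sp.symbols('x')
f = x**3

n = 1
while sp.diff(f, x, n).subs(x, 0) == 0:
    n += 1
value = sp.diff(f, x, n).subs(x, 0)
print(n, value)  # 3 6
```

The first nonzero derivative is the 3rd, equal to 6 > 0: an odd-order first nonzero derivative with a positive value corresponds to the function passing monotonically upward through the point (an even order would indicate a local extremum instead).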
This is good video, showing clearly why the first statement on the board is false. But the video's title was not so clear. What do you mean by a function increasing at a point? You never defined this (and indeed you didn't need to for the proof), but for clarity as to the meaning of the title I think you should. Perhaps you mean (as some comments have suggested) increasing in some neighbourhood of the point.
Isn't x=0 a point of inflection of the y=x^3 graph?
Yes. But concavity doesn't have to be tied to increasing/decreasing.
So much so that you can have changes in concavity where the first derivative is not zero! First example that comes to mind is sin(x).
@@dlevi67 The curvature (sorry, I'm Spanish; maybe that's not the right word?) changes when f''(x) changes its sign; it doesn't depend that much on the value of the first derivative. That's why f'(x) being 0 at a certain x has almost nothing to do with f''(x) at that x.
+Jorge C. M. it's "concavity" if you are talking of f''(x). Curvature has a specific meaning when you talk of three-dimensional surfaces. As to the rest of your point, I don't understand how it follows from the conversation above it, but we agree.
@@dlevi67 For low slopes, curvature is roughly equal to the second derivative. That's why in Euler Beam Theory for deflection of beams, we equate our expression for curvature of a beam to the second derivative of the elastic curve.
And where is the pretty lady that used to intro your videos?
I'm curious, is the teacher Taiwanese?
Yep.
blackpenredpen It feels familiar somehow. I really like your explanation style and presence; I hope that one day my explanations in English can be this clear (any suggestions?). Keep up the good work!
Couldn't you show that, since the derivative of x^3, namely 3x^2, is always greater than or equal to zero, there is no point where the graph decreases? You would then have to show that it doesn't have multiple zeros right next to each other (which would mean it is not strictly increasing). To do that, solve 3x^2 = 0: x = 0 is the only spot where this is true, therefore the function is always increasing. I don't know if this would work for everything, so please point out any mistakes.
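The argument above can at least be sanity-checked numerically. A minimal Python sketch (my own, not from the video) that samples the derivative 3x² and the function values:

```python
# Sanity check (not a proof): sample f(x) = x^3 on [-3, 3].
# The derivative 3x^2 is nonnegative everywhere, vanishes only at x = 0,
# and the sampled function values are still strictly increasing.
def f(x):
    return x ** 3

def fprime(x):
    return 3 * x ** 2

xs = [x / 10 for x in range(-30, 31)]              # -3.0, -2.9, ..., 3.0
assert all(fprime(x) >= 0 for x in xs)             # derivative never negative
assert [x for x in xs if fprime(x) == 0] == [0.0]  # zero only at x = 0
values = [f(x) for x in xs]
assert all(a < b for a, b in zip(values, values[1:]))  # strictly increasing
print("sanity check passed")
```

Of course a finite sample proves nothing by itself; the algebra in the video is what actually closes the gap.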
Awesome video
+blackpenredpen You actually need a > b to prove that (a+b)^2 + a^2 + b^2 > 0 because you need to exclude the case a = b = 0.
Oh yeah... but I thought that was too obvious, since we already had a > b, which implies they aren't equal.
Is the interval open or closed?
In his proof he shows that, over an interval with two distinct end points (sorry, idk the proper terminology), x^3 is always increasing. But the original question was whether or not the function is increasing at one single point, not over an interval (or am I wrong about that?), so doesn't his work solve a different question than whether or not x^3 is increasing at the point (0,0)?
Maybe he could have been clearer. "Increasing at 0" just means increasing on an open interval containing 0.
Another example is ?(x), which is monotonically increasing, but has derivative 0 on a dense set. It also has derivative nonexistent on a dense set.
Pierre Abbat what's (x)?
It's not (x), it's ?(x). en.wikipedia.org/wiki/Minkowski%27s_question-mark_function
Oh I see!! I actually only know about devil's staircase but have not seen this before. Thanks for sharing!
+Pierre Abbat A little bit advanced, don't you think? };-)
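For anyone curious what ?(x) looks like computationally, here is a rough sketch of my own (based on the continued-fraction series on that Wikipedia page, so treat the details as an assumption): ?([0; a1, a2, ...]) = 2(2^(-a1) - 2^(-(a1+a2)) + 2^(-(a1+a2+a3)) - ...).

```python
from fractions import Fraction

def question_mark(x, terms=30):
    """Approximate Minkowski's ?(x) for x in [0, 1] from the
    continued-fraction coefficients of x (exact for rationals)."""
    result = Fraction(0)
    sign = 1
    exponent = 0
    frac = Fraction(x)
    for _ in range(terms):
        if frac == 0:
            break                      # terminating continued fraction
        frac = 1 / frac
        a = int(frac)                  # next coefficient a_k
        frac -= a
        exponent += a
        result += sign * Fraction(2) ** (1 - exponent)
        sign = -sign
    return result
```

For instance, question_mark(Fraction(1, 3)) gives 1/4, matching the known value ?(1/3) = 1/4.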
But if you have a < b, f(a) = f(b) and f'(a) = f'(b) = 0, can you then say that f(x) isn't increasing? So basically saying that instead of having a single point that has f'(x) = 0, like what x^3 has, it has 2 (or more) points where f'(x) = 0.
As always, I enjoy and learn from your videos, especially the clever algebra. My students wish to know what "increasing (or not increasing) at x = 0" means when the terms "increasing" and "decreasing" are defined over intervals. Is there a definition or theorem (better yet) which states if a function is increasing on an interval, then the function is increasing at each point in said interval?
Is the following valid:
f is increasing on I ⟺ f'(x) ≥ 0 for all x on I?
Not strictly increasing
Why not a*b > b*b, so ab > b², and then ab is positive because it's bigger than a square?
because a*b can easily be negative, so
a*b is not necessarily > 0.
while b² ≥ 0 always, if b is a real number.
Man i don't know but i love your Channel!
Thank you!
At 1:50, wouldn't it be (+1, +∞)? Because your statement holds for any derivative bigger than zero, like 0.5 or 0.8 or 1 and so on... Or could any function on an interval like (-∞, 0) have a positive derivative????
I'm not sure what exactly you mean but I'll try.
A function can have positive derivatives at negative values.
He's just specifying an interval for this given function where, in this case, it's true that it is always increasing. The derivative is almost always greater than 0, the only exception being f'(0).
Sir please make a video on point of inflection
It means a point where the concavity switches, i.e. where the second derivative changes sign (and, if it's continuous, equals zero there).
#YAY
Your videos are very informative.
By the definition of derivative at a point, if f'(b)=0, then f is not increasing nor decreasing at b.
And the definition of an increasing function (as used in the video) states the monotonicity over an interval, not at a single point, so it can't be applied here.
f(x)=x^3 is (strictly) increasing in (-∞, ∞), but f(0) can't be considered as "increasing" since its "rate of change" (aka derivative) is 0.
But 0 is in that domain though? So how can the point 0 be in the domain and yet not have the property that things inside that domain have?
How is the first statement not correct?
The derivative is the limit of the ratio between the difference of the images of two points and the difference of those points, as the points get infinitely close to each other.
f'(b) = lim (f(x)-f(b))/(x-b) as x->b. If x-b > 0 and f(x)-f(b) > 0 then f'(b) > 0, because +/+ = +.
But we have that f'(0)=0, so it doesn't meet the condition to be increasing.
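That limit is easy to see numerically; a small sketch of my own showing the difference quotients for f(x) = x³ at b = 0:

```python
# The difference quotient (f(x) - f(0)) / (x - 0) simplifies to x^2:
# each quotient is strictly positive, yet they shrink to 0, so f'(0) = 0.
def quotient(x):
    return (x ** 3 - 0 ** 3) / (x - 0)

for x in [0.1, 0.01, 0.001]:
    print(x, quotient(x))
```

So every secant slope through (0, 0) is positive, but the limiting slope is still 0.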
Re: "A function is increasing at a point if there exists some open interval containing that point on which the function is increasing." This is not a convention. This is yours.
Again: Checking if a function increases on some interval needs to compare pairs of points. The closest approach in applying that concept to a single point is the derivative.
So, either you stand by what the derivative says (in this case, that f is not increasing at 0) or acknowledge that monotonicity of a single point is an absurd characteristic not fit to be checked the same way monotonicity of an interval is.
@chengyuryc46: Monotonicity ("increasingness" and "decreasingness") isn't a property of points. It's a property of functions.
A function being increasing doesn't mean that all its points are increasing, the same way a matrix having rank 3 doesn't mean every coefficient of that matrix has rank 3.
+Eric Ester the statement "A function being increasing doesn't mean that all its points are increasing" is either a tautology (any function is not increasing at any point - it either has a single value there or it's not defined), or it doesn't make any sense.
Exactly. Functions at a point have a value. A value that doesn't change. So that point neither increases nor decreases.
And that's why I'm saying that monotonicity of single points is an absurd concept.
At the check step, could you plug in a=b+h where h is positive and then cube the left and subtract after?
What if you say the first statement but it’s >= 0? Does that make it true?
Awesome video, congrats. This is the exact moment when I beg you to do a video about the differentiability of x^(1/3) at x=0. What does it mean for a function to be differentiable? That it has a tangent line? Or that you can calculate the tangent line? Is this function differentiable at x=0? In fact the line x=0 "is" the tangent line there, but is the function differentiable?
The definition is whether the limit exists at that point (where by "the limit" I mean the limit used to define the derivative)
I don't buy it. If it is increasing, then the angle that this point makes with the horizontal axis has to be positive; in other words, tan^-1(dy/dx) has to be positive, but tan^-1(3x^2) is clearly zero when we plug in zero.
It's just a guess:
If the derivative of a function satisfies f' > 0 over an interval I, then the function f is increasing.
Maybe this means that if the second derivative (f'') is increasing, the function f is also increasing, but I have no proof.
For example, f(x) = x³ ⇒ f''(x) = 6x, and g(x) = 6x is always increasing because g'(x) = 6 > 0 over the interval I.
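That guess doesn't hold in general, though. A quick counterexample sketch (mine, not from the thread): f(x) = x³ - x has f''(x) = 6x, which is increasing, yet f itself decreases near 0.

```python
# Counterexample to "f'' increasing implies f increasing":
# f(x) = x^3 - x has f''(x) = 6x (increasing), but f'(x) = 3x^2 - 1 < 0
# near x = 0, so f decreases there.
def f(x):
    return x ** 3 - x

assert f(-0.5) > f(0.5)        # f(-0.5) = 0.375 > f(0.5) = -0.375
print("f is not increasing on (-0.5, 0.5)")
```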
Do you need to show that ((a+b)²+a²+b²)/2 ≠ 0? It looks like you can conclude that it's ≥ 0, since it's dealing with squares. I know that it can't be 0 since a≠b, but won't you need to show that somehow?
If you wanted to spell out every detail, yes you would need to show that. But as you mention, since a≠b, it's impossible for that expression to equal 0. So it's not a big deal to skip that.
Yeah. Since we already had a > b, that expression couldn't be 0.
@@blackpenredpen awesome thanks for explaining :)
Thank you👍
@9:13 you seem to say that a sum of real squares is positive; this is only true if at least one of the reals being squared isn't 0. Since a isn't equal to b, at least one of a or b isn't 0.
Hey bprp great video as always, I subscribed to brilliant a few days ago through you and it's really great thank you :)
DEUS VULT glad to hear that! Thank you!!!
what if you replaced the > with >= in the first statement?
Νικολαος then that's fine. : )
The main point to discuss in this video is "is x^3 increasing at 0 or not"
I don't know if this is correct, but the derivative of x^3 is 3x^2, which is always nonnegative (it equals 0 only when x=0). A function has a local minimum when the derivative equals 0 at some value x1, is negative at values a little smaller than x1 (sorry, I don't know how to say that in English :D), and is positive at values a little bigger than x1. A function has a local maximum when its derivative equals 0 at x1, is positive for x smaller than x1, and negative for x bigger than x1. 3x^2 equals 0 when x=0, its values for x smaller than 0 are positive, and for x bigger than 0 they are also positive, so it shouldn't have a local maximum or a local minimum, right? Correct me if I'm wrong, and sorry for my English (hope you've managed to read this :D)
The point at x=0 on the equation y=x^3 is called a stationary inflection point. The term "stationary" in this context means that the derivative is zero at this point, as if the curve described position as a function of time. This would be like a "stop sign", where you stop momentarily and then resume driving forward. The term "inflection point" means that the second derivative is zero as it switches from negative to positive (or vice versa). Other versions of the cubic equation with a positive x^3 term, that have other x-terms, will have falling inflection points. Make the x^3 term negative, with other x-terms, and it will have rising inflection points.
It is neither a local minimum, nor a local maximum, though it may seem to be one, when your method for finding the extreme points is to look for locations where the derivative is zero. Indeed, the derivative is zero, but it isn't an extreme point, because the second derivative is also zero, and it is inconclusive as an extreme point. It turns out that it is not an extreme point, because the function is increasing on both sides of it.
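The first-derivative sign test just described can be sketched in a couple of lines (my own illustration, assuming the standard test):

```python
# f'(x) = 3x^2 is positive on both sides of x = 0 and zero at 0 itself,
# so there is no sign change: x = 0 is neither a local max nor a local min.
def fprime(x):
    return 3 * x ** 2

assert fprime(0) == 0
assert fprime(-0.01) > 0 and fprime(0.01) > 0   # same sign on both sides
print("no sign change at x = 0: not an extremum")
```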
Where are you from dude.
Btw nice channel.
The most visual example is the Cantor function, which is monotonically increasing but has derivative zero almost everywhere on (0,1).
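Here's a rough numerical sketch of the Cantor function (my own, using the usual ternary-digit reading, so treat the details as an assumption):

```python
def cantor(x, depth=40):
    """Approximate the Cantor ('devil's staircase') function on [0, 1]:
    read ternary digits of x, emit binary digits, stop at the first 1."""
    if x >= 1:
        return 1.0
    result, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:
            return result + scale       # inside a removed middle third: flat
        result += scale * (digit // 2)  # ternary digit 0/2 -> binary digit 0/1
        scale /= 2
    return result
```

It's constant on every removed middle-third interval (derivative 0 there), yet it climbs from 0 to 1; for instance cantor(0.5) is 0.5.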
What does your chinese logo mean?
Why say "x to the third power", and not "x cubed"?
English isn't his first language, and he probably isn't accustomed to calling it "x cubed". Saying "x to the third power" is probably the first way that comes to mind for him. It means the same thing, and he likely would understand "x cubed", it just isn't the first thing that comes to his mind.
Another similar example I notice for native Chinese speakers when speaking English, is defaulting to tell time by reciting the numbers exactly as they would appear on the clock, rather than using colloquial expressions. Such as saying "twelve O'clock" instead of "noon" or "midnight", and "three forty five" instead of "quarter 'till four". It is understandable why one would tell time this way, because it requires a new mental exercise to produce the words for "quarter 'till four" from "3:45".
if f(x)=x^3 , when x=0 is it an extreme point?
I don't know the right English name for it, but we call it a bowing point. That is:
with x < 0 the curve shows its hollow side downward (decreasing slope with increasing x, so decreasing f'(x)), and
with x > 0 the curve shows its hollow side upward (increasing slope with increasing x, so increasing df/dx).
The point of it is also that df/dx *doesn't change sign* at df/dx = 0; yet df/dx reaches a minimum value there, and the bending 'changes direction', from downward to upward with increasing x.
@@keescanalfp5143 It's called an inflection point in English.
No, it is not an extreme point (as in, not a local maximum or local minimum).
Points like this are a red herring when you intend to find extreme points, because the derivative of the function is indeed equal to zero at x=0, which would lead you to believe that it is an extreme point. However, because the second derivative also equals zero, you cannot conclude it to be either a minimum or a maximum. You take the first derivative to find the critical points where it equals zero, and you use the second derivative to identify them as local maxima (negative curvature at a critical point) or local minima (positive curvature at a critical point). This particular point is inconclusive from the second derivative test alone.
What about this?
If f is increasing on I, then f'(x) is greater than or equal to 0
+HelloItsMe
First let us get some definition clear: what does "f is increasing on I" mean?
Even writers of university books don't universally agree on it, but the most common definition is the following (I being an interval of real values):
A function f:I→R is called increasing (on I) if the following holds:
for every x,y ϵ I, if x < y then f(x) ≤ f(y)
Note the "≤" sign here, instead of the "<".
The only thing I will say is that a function can only increase or decrease on an interval. I don't think it's technically correct to say a function is increasing at x = a.
"Increasing at 0" means increasing on some open interval containing 0.
I guess we just need to put "f'(x) ⩾ 0" to make the first property true for any function
That's true for increasing functions but not for strictly increasing functions; for example, f(x) = 0. g(x) = x³, however, is a strictly increasing function even though g'(x) = 0 has a solution.
@@SciDiFuoco13 What is the difference between strictly increasing and increasing? I'm curious about this because my textbook doesn't have a clear definition.
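The usual convention: "increasing" (or non-decreasing) means x < y implies f(x) ≤ f(y); "strictly increasing" means x < y implies f(x) < f(y). A small sketch of my own illustrating both on sample points:

```python
# f(x) = 0 is increasing (non-decreasing) but not strictly increasing;
# g(x) = x^3 is strictly increasing. Checked on a few sample points.
xs = [-2, -1, 0, 1, 2]
const = [0 for _ in xs]        # f(x) = 0
cubes = [x ** 3 for x in xs]   # g(x) = x^3

assert all(a <= b for a, b in zip(const, const[1:]))      # non-decreasing
assert not all(a < b for a, b in zip(const, const[1:]))   # not strict
assert all(a < b for a, b in zip(cubes, cubes[1:]))       # strict
print("definitions illustrated")
```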
The way I would have done it is
a>b. So,
a^3>a^2b (multiply both sides by a^2)
a^2b>ab^2 (multiply both sides by ab)
ab^2>b^3 (multiply both sides by b^2)
So a^3>b^3
➕➖✖➗🆒
Only true if both a, b > 0 or < 0
+ dlevi67: Right. I.e., the second derived inequality, a²b > ab², follows only if ab > 0.
Fred
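A concrete instance of that failure (a quick check of my own): with a = 1, b = -2, the middle step a²b > ab² flips, even though the conclusion a³ > b³ still happens to hold here.

```python
# The step "multiply both sides by ab" is invalid when ab < 0:
a, b = 1, -2
assert a > b
assert not (a ** 2 * b > a * b ** 2)   # a^2*b = -2 is NOT greater than a*b^2 = 4
assert a ** 3 > b ** 3                 # 1 > -8: conclusion true, proof broken
print("step fails when ab < 0")
```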
If f' > 0 except for a finite number of values f'=0, then f is strictly increasing
densch123 They are intervals of the form [a;a]... yes, not really a classic interval.
Proving that big numbers cubed are bigger than small numbers cubed.
It's increasing but not strictly increasing :)
Nope, y = x^3 is strictly increasing as well.
Yeah. Maybe the derivative has to be zero on an entire interval for the function to be only monotonic (and not strictly monotonic).
He proved in the video that it *is* strictly increasing though lol
binnacle true-north Exactly..
If f'(x)>0 [and f'(x)=0 only at discrete points , not in an interval] , then function is strictly increasing
If f'(x)=0 on an interval, then the function is increasing (but not strictly).
@binnacle true-north Weird that you see the adverb 'strictly' as likely to cause confusion, because there are different conventions for 'increasing', but not for 'strictly increasing'.
Am I a horrible person for being too lazy to try it first?!
What should we call a function if it is increasing everywhere but at some discrete points slope of that function is zero..
Strictly increasing or just increasing..
you forgot the question marks
Jorge C. M.
Thank u mate
Thanks, this helped me a lot; I wasn't sure about that!
9:10 it could be 0 for a = 0 and b = 0
According to that logic, f(x) = -x² would be increasing at x=0, because your test passes on the whole interval (-inf, 0]. On the other hand, the opposite test would tell you it's decreasing at x=0 if you consider the interval [0, inf). And if you look at both, neither test passes at x=0.
For the concept of "increasing/decreasing/neither increasing nor decreasing" at one single point of a function to even be fathomable, one necessarily needs to use limits, i.e. derivatives, which means x^3 does not increase at x=0.
No, his argument is fine. You just need to make sure you apply it to an open interval.
Don't you need an interval for the word "increasing" to be meaningful? e.g. the 2 statements at the top say "(...) f is increasing on I (...)", which makes sense. However, then we're talking about a function increasing at a single point.
"Increasing at 0" means increasing on some open interval containing 0.
Is it even defined to say a function is increasing over a single value? shouldn't we define that?
It is, however, well defined to say a function is increasing over an interval
"Is it even defined to say a function is increasing over a single value?"
Yes. That means increasing on some open interval containing the value.
Ooooh... hmmm?
So the slope is 0 for just a single instant,
which means the graph isn't sloping upward over the entire interval, but the function is increasing over the entire interval (what am I even saying).
There are two "is" in the title; the second one shouldn't be there, I guess. And since you don't distinguish between "increasing" and "strictly increasing", I found the video kind of confusing. All the answers here depend on the definitions of the concepts, but we don't know which definition you are using. :( Still a like though
The flaw in the argument is pretty obvious.
x=0 is not an interval, so your argument collapses.
The bounds, a and b, in your "proof" are actually the same number, i.e. 0.
A perfect square isn't always greater than zero; a perfect square is greater than OR EQUAL to zero (in this case, it's zero).
Sweet proof
Oh, calculus and seeming paradoxes...
What does increasing at a point really mean? How do you even say increasing if you are not talking about comparing two different numbers? And yet, that is exactly what the derivative does. Limits are weird.
If you graph the derivative function, you can see that 3x^2 has a minimum at x=0, where the value of the derivative is also 0. The derivative equals 0 only at this one point; all other values of x yield a positive value. So what does this mean? The original function is clearly never decreasing, and the derivative is only 0 at one point. Does that mean the function can still be considered increasing? We have to check whether any two inputs give the same function value, in this case 0. With the proof given in this video, if you set a = 0 and b = x and take the one-sided limit as x -> 0+, then f(b) - f(a) tends to 0 from above; similarly, setting a = x and b = 0 and taking the one-sided limit as x -> 0-, f(b) - f(a) tends to 0 from below. The important thing to note is that although these limits approach 0, the only point where the value truly is 0 is at x = 0. Even the tiniest nudge away from 0 causes f(x) to be nonzero. There is no run of consecutive points on the graph with the same value, so the function is always increasing.
Woah. Thanks for your comment!
So just one line will make the "rule" 1 true:
Instead of ... f'(x) > 0 write f'(x) ≥ 0
Nope. Consider f(x) = -x^3. Then f'(0) = 0 but f is decreasing at 0.
@@martinepstein9826 I didn't change the second rule, I changed the first rule
@@plislegalineu3005 My bad. I was thinking of points, not intervals. If you use ≥ then both rules are correct.
But isn't the definition of the derivative the subtraction of a smaller value from a bigger value (divided by a positive value), in the limit as that goes to zero?
f'(0) = 0+ :) it's increasing!!
No, the question asked was: is the function increasing AT x=0?, NOT: is the function increasing over a given interval of x?
You changed the question, which forced a change in the answer. The function definitely is NOT increasing AT x=0. It is increasing on the two intervals from -infinity to 0- and from 0+ to +infinity, but not AT x=0.
I think this definition would imply that on no point can a function be said to be increasing. After all, for a function to increase there would have to be more than one point. But that's clearly not what's meant when we normally talk about increases.
If ALL you knew about a function f(x) was that f'(0) = 0, would you say the function is increasing? NO!
Would you say the function is decreasing? NO! The absolute best you could say is "I DON'T KNOW"... better yet, if f'(x1) = 0, you can absolutely say that at the point x1 the function is unchanging. ALL the math tricks known to mankind don't change that.
WisdomVendor1 right, but we know more about f(x)=x³ than its derivative at 0. In maths, increasing at a point just means increasing in an infinitesimal interval around that point. Think of it as a shorthand for those in the field.
Yet the first derivative cannot be less than 0 while the function is increasing 😉.
First time I've heard you say x^3 as "x cubed".