The theorem at 8:35 is indeed wrong. You are assuming f is continuous at the end points a and b. If f is not continuous at those points, then f need not be increasing all throughout [a, b].
At 2:29 and onward the video says that between every 2 rationals there is 1 irrational and vice versa. Doesn't that imply that a 1-to-1 mapping from rationals to irrationals is possible? Simply pick a random number, map it to the next one, and repeat. That would imply the size of the rationals is the same as the size of the irrationals, but that can't be true. So somewhere on the number line there have to be at least 2 irrationals next to each other.
Nope! This is because between any pair of distinct rationals or irrationals there are in fact infinitely many rationals and irrationals. Indeed the idea of a "next" number doesn't even make sense because any "next" number you pick, you've skipped over infinitely many numbers.
This is one of those intuitions that ain't true - I can understand the proofs that it is false but, to me, it still feels intuitively that it ought to be true.
The first example seems to negate the continuity hypothesis and the fact that the real numbers are not countable. How can R be uncountable if each element is surrounded by elements of Q, which is countable?
For example 1: given the fact that there's always a rational/irrational between 2 numbers, but 0.9 repeating and 1 are identical so there's no number in between, is there something that relates to this?
9:00 Isn't -x^3 also a counterexample? The derivative at x = 0 is non-negative, but it's not increasing on any neighborhood of 0. Edit: I can now see that you meant that the derivative is positive, not just non-negative.
I know I need to take a more careful look at analyzing infinities, but I still can't quite follow how the rationals and the irrationals can exist as different cardinalities while maintaining the property that for any arbitrary pair of irrationals there are rationals in between them and vice versa. Generally, it feels like many of these counterintuitive examples actually hinge on that same issue; how do you interpret uncountably infinite sets and limits. I've heard that this sort of thing falls under a subject called real analysis. Would you ever be interested in putting out content related to that?
I agree it is very counterintuitive. However, if you try to get from "between two rationals there is an irrational and vice versa" to "there is a bijection between them" it isn't actually possible to do this. But yes, I do want to do more analysis type of stuff
Small clarification! At 8:10 the theorem either requires saying f(x) is continuous or that f'(x) is positive at the endpoints as well. As stated it allows for a discontinuity right at the end points. :)
The first interval should be [a,b] instead of (a,b), or the second interval should be the other way around.
Or both [a,b).
On the second point: Although derivatives don't need to be continuous, they have the intermediate value property we know from continuous functions: the image of an intervall is an intervall. I recently learned about this and I find it quite interesting, it's called Darboux' Theorem
Oh that's a good point, I kinda wish I said that. It's not as nice as continuous, but it still has some nice properties.
Imagine an exam with 20 multiple-choice questions like these.
Haha, don't give me ideas for my poor calc students :D
That is just a normal math exam. Except that they are not multiple choice.
That's exactly what my graduate course in analysis looks like 😢
@DrTrefor Thanks for sharing all your videos, Dr Trefor. I really hope you can respond to my other questions on other videos whenever you can. Thanks very much.
@@broccoloodle How do you deal with that course, if I may ask?
An interesting corollary of the Dirichlet function is that it integrates to zero over any interval. While it is true you can find an irrational between any two rationals and vice versa, the cardinalities of the sets are not equal. If you were to plot all rationals with yellow and all irrationals as cyan (as in the sample plot) for all possible real values, the plot would be all cyan.
Good remark! Sadly, you need to know "Lebesgue-Integration" to find that result. Good old "Riemann-Integration" is not enough...
@@carstenmeyer7786 Riemann integration is enough after correcting Riemann's mistake of using a uniform limit instead of a pointwise limit in his definition.
@@friedrichhayek4862 What uniform limit are you referring to? The definition I'm familiar with is that, if we consider the infimum of the integrals of all step functions larger than the given function and the supremum of the integrals of all step functions smaller than the given function, if these two values coincide, that is the Riemann integral of the function.
@@smiley_1000 The "common" definition of the Riemann integral is a uniform limit, as shown on page 2 (and the following paragraph) of "Return to the Riemann Integral" by Robert G. Bartle, an article that shows how fixing that error makes it the superior integral.
@@friedrichhayek4862 The definitions given in the article don't really involve uniform or pointwise limits of functions as usually understood. But I certainly find the more general definition that they give interesting.
Great counterexamples! Another interesting modification of the fifth example is the "Takagi Curve" -- a function that is continuous everywhere, but nowhere differentiable.
The idea is that the Takagi curve has (increasingly small) spikes everywhere -- they change the slope of the curve by integer values, leading to non-convergent difference quotients everywhere.
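For anyone who wants to play with it, here is a minimal numerical sketch of the construction described above: a partial sum of the series T(x) = sum over n of s(2^n x)/2^n, where s(y) is the distance from y to the nearest integer. The helper name, truncation depth, and sample points are just illustrative choices.

```python
import numpy as np

def takagi(x, terms=30):
    """Partial sum of the Takagi (blancmange) series: sum_n s(2^n x) / 2^n,
    where s(y) = distance from y to the nearest integer."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for n in range(terms):
        y = 2.0**n * x
        total += np.abs(y - np.round(y)) / 2.0**n
    return total

xs = np.linspace(0.0, 1.0, 1001)
print(takagi(xs).max())   # plotting xs vs takagi(xs) shows the spiky, nowhere-smooth shape
```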
This is great stuff. Counterexamples like these are an essential part of understanding mathematics. Whenever I learn a new concept, one of the first steps is to try to identify edge cases, which either barely meet the definition or barely miss it.
Another neat modification of the Dirichlet function:
f(x) is 0 for irrationals and f(x) = 1/q if x is rational, where x = p/q is the simplified fraction (and let's say q>0).
It's discontinuous on the rationals, but continuous on the irrationals.
The latter claim seems weird, but it's pretty easy to prove. For any irrational x and any epsilon>0, we know that:
1. There are only finitely many rational numbers p/q for which f(p/q) = 1/q > epsilon.
2. The irrational number x is some distance away from all those finite rational numbers.
From those two facts it follows that there must be some positive number delta, such that if |x-y| < delta, then f(y) <= epsilon, which is exactly what continuity at x requires.
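Here is a rough numerical stand-in for that function. Floating point can't genuinely distinguish rationals from irrationals, so the rational test below (via Fraction.limit_denominator with an arbitrary denominator cap and tolerance) is only an illustration of the definition, not a faithful implementation.

```python
import math
from fractions import Fraction

def thomae(x, max_q=1000, tol=1e-12):
    """1/q if x is (numerically) a rational p/q in lowest terms with q <= max_q, else 0."""
    r = Fraction(x).limit_denominator(max_q)
    return 1.0 / r.denominator if abs(float(r) - x) < tol else 0.0

print(thomae(0.5), thomae(2 / 3), thomae(math.sqrt(2)))   # 0.5  0.333...  0.0
```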
love this
This is known as Thomae's function and by many other names.
Nice, must remember this one - very sneaky.
What’s wilder to me than the fact that a function can be discontinuous everywhere is that it can be continuous at only a single point, something that really shows the disconnect between the intuitive meaning of continuity and the rigorous mathematical definition.
Cool topic! My favorite math professor from a past life (Dmitry Fuchs) was a master of counterexamples . . . real analysis for a full year with that guy, and counterexamples were strong with that one! I think I still emphasize counterexamples more than most while I teach physics because of that experience. I'm taking notes here, because I still teach first year calculus as well . . . .
Another thing that might seem intuitive is the following claim: if f' = 0 then f is a constant function. This is wrong though, as you might've guessed, and here is the counterexample: let I_1 and I_2 be two disjoint open intervals in R, and let f be a function defined on the union of I_1 and I_2 with f(x) = 1 for x in I_1 and f(x) = -1 for x in I_2.
I learned basic calc over a decade ago, including basic real analysis, but I still didn't know the example with positive derivative without an increasing neighbourhood. Thanks a lot, I can add it to my lecture notes that I can publish one day.
But does continuity of the nonzero derivative guarantee (imply) a neighbourhood with monotonicity?
Same @@michatarnowski580
I asked other people and they convinced me that yes, this implies a neighbourhood with monotonicity. If a derivative is positive, then its continuity implies that it's also positive in some neighbourhood, and this implies that the original function is increasing there.
@@michatarnowski580 Only if strictly increasing. If the derivative is equal to zero in, e.g., x^3 at 0, then the function may not be non-decreasing.
I don't feel I understand it; do you mean that it may be increasing?
I'll add another: If f is differentiable on a closed and bounded interval, then f' is integrable on that interval. False, see Volterra's function. The construction uses x²sin(1/x) and a variant of the Cantor set. Despite having a bounded derivative, the derivative is not integrable. The book *Counterexamples in Analysis* contains many strange functions but most of the counter examples require some background in advanced calculus to mistakenly conjecture.
Edit: corrected a misstatement of the false conjecture. First I said 'f differentiable ⇒ f is integrable' when it should've been what it is now.
That's a really cool one, thanks for sharing.
@@DrTrefor That is false, you only had to use a pointwise limit instead of a uniform one when taking the Riemann Integral
When you say that, you are working with the Riemann integral, and that's completely true. However, if you work with the generalized Riemann integral, also known as the Kurzweil-Henstock integral, every f' is integrable.
@DrTrefor Hey Dr. Trefor. I really hope you can respond to my other questions when you can. Thanks very much.
Wow, that last one was simpler than what popped into my head for it - the finite Fourier series approximations of the square wave.
For the claim as stated at 9:13, there's a much simpler counterexample: since you allow f'(x)=0, the function -x^3 is one counterexample
Or just any constant function, you know?
@@MartinPuskin Constant functions are increasing, just not strictly increasing.
Here's a function (courtesy of Michael Penn) that behaves very strangely at x = 0.
f(x) = e^(1/x) / [ 1 + e^(1/x) ]
Usually the functions students encounter that have a "jump" discontinuity at x = a are contrived: for values x >= a we have a nice g(x), and for x < a a different nice h(x), stitched together piecewise. This one produces the jump from a single formula.
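A quick numerical look at that single formula near 0 (the h values below are only chosen to avoid overflowing exp in double precision, which happens once 1/|x| exceeds roughly 709):

```python
import math

def f(x):
    # e^(1/x) / (1 + e^(1/x)); undefined exactly at x = 0
    t = math.exp(1.0 / x)
    return t / (1.0 + t)

for h in (0.1, 0.01, 0.002):
    print(f(h), f(-h))   # right-hand values approach 1, left-hand values approach 0
```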
tan^-1 (1/x) is another such example where we have a jump discontinuity that doesn't require a piecewise definition.
I'd argue "f" needs to be piece-wise defined as well, since right now it is not defined at "x = 0" (even if both the left and the right limit to zero exist).
I would argue this is just "inheriting" the weirdness of 1/x. In particular 1/x is not connected.
@@carstenmeyer7786 I see your point as it can be defined piece-wise but I would argue that it doesn't have to be. The piece-wise nature is built into the single equation.
@@ianfowler9340 You can also get a step function without an undefined point in the step. For example, f(x) = lim_{n->infinity} tanh(nx). Yes, it involves a limit. But then, technically the exponential function involves a limit as well, it's just hidden away in the notation.
Students often believe that f'(x)>0 means f is strictly increasing on an interval. You may want to restrict the condition to f'(a)>0 in the third theorem, exploiting the counterexample at its full potential, otherwise with only f'(a)>=0 the counterexample f(x)=-x^3 works, making the theorem less powerful.
I came looking to see if anyone had mentioned f(x)=-x^3 because that's the first thing that popped into my head as well.
Yeah man, that's what I was about to say... then I thought I was really dumb as he continued to explain this complicated example.
One of my favorite counterexamples is the Fresnel Integral.
In calc 2, you learn the divergence test for series: if the terms don't approach 0, the series diverges. However, the same cannot be said for improper integrals.
cos(x^2) does not have a limit as x approaches infinity. It bounces between 1 and -1. You might expect then the integral from 0 to infinity to do the same thing. However, the width of the oscillations approaches 0, meaning the integral actually converges (to sqrt(pi/8))
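A quick numerical check of that value, using SciPy's Fresnel cosine integral C(z) = integral from 0 to z of cos(pi*t^2/2) dt and the substitution x = t*sqrt(pi/2):

```python
import numpy as np
from scipy.special import fresnel

S, C = fresnel(1000.0)          # C(z) -> 1/2 as z -> infinity
print(np.sqrt(np.pi / 2) * C)   # integral_0^inf cos(x^2) dx, approximately
print(np.sqrt(np.pi / 8))       # the exact value, ~0.62666
```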
cool! thanks for sharing
Feels good to see a calculus video once in a while.
Can you take up a cool topic related to discrete math in the next video?
@1:58 There's an asterisk here: for all practical purposes that graph IS a continuous straight line at 1, say if you graph it on a PC. But if you consider the infinities involved, it's a straight line at 0, since there are so many more irrational numbers.
Last counterexample is nice for introducing the concept of uniform convergence
Blew my mind with the discontinuous derivatives.
For the convergence bit you should specify that you mean pointwise convergence, although the same is true for L^p convergence, but the counterexample is different.
Great video! I love counterexamples; such a great way to help get a sense of the whole terrain of a concept. And there were definitely some there I'd never heard of, like the one that has a positive derivative but is not increasing around the point!
Glad you enjoyed it!
@@DrTrefor ... I am still utterly confused about this one. I mean, your explanation was perfect and I understood it, but it created a paradox in my brain.
My conflict is this:
1) The derivative at 0 is 1/2
2) You showed that no matter how small the interval I take around 0, I will always find points inside the interval for which the derivative is -1/2.
3) But from 1, the definition of the derivative (lim as Δx->0 of Δy/Δx) and the ε-δ definition of the limit, it follows that for any small but non-zero value of ε I must always be able to find a non-zero-length interval 0±δ such that for all x in that interval, Δy/Δx remains no more than ε away from 1/2.
How do 1 and 2 not make 3 impossible? I mean, you may ask "and how do you conclude that 1 and 2 make 3 impossible?", and I don't know. Probably I can't, because 1 and 2 do not make 3 impossible. But it feels so obvious!!!
So let me ask you in the opposite way (and yea, I know, I am reversing the burden of proof, but bear with me):
How is it possible to have infinitely many intervals arbitrarily close to x=0 where Δy/Δx = -1/2, but then be able to find a δ where, in 0±δ, ε will always be as small as I want, when I will always have pairs of points within 0±δ where Δy/Δx jumps from -1/2 to 0 (let alone to +1/2)?
@@adb012 Notice the ratios are different depending on where you take the derivative. If you take it at x0 > 0, the ratio Δy/Δx you will be getting is (f(x0)-f(x))/(x0-x); at 0, that ratio is (f(0)-f(x))/(-x). So even though the delta intervals can intersect, the ratios you take around the points where the derivative equals 1/2 and -1/2 may be different.
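A small numerical sketch may make this concrete. Assuming the video's example is f(x) = x/2 + x²·sin(1/x) with f(0) = 0 (which matches the 1/2, -1/2, and 3/2 values discussed in this thread), the difference quotient at 0 settles down to 1/2 even though f' keeps hitting -1/2 arbitrarily close to 0:

```python
import numpy as np

def f(x):
    return x / 2 + x**2 * np.sin(1.0 / x)

def fprime(x):  # derivative for x != 0
    return 0.5 + 2 * x * np.sin(1.0 / x) - np.cos(1.0 / x)

# the difference quotient (f(h) - f(0)) / h = 1/2 + h*sin(1/h) tends to 1/2 ...
for h in (1e-2, 1e-4, 1e-6, 1e-8):
    print((f(h) - 0.0) / h)

# ... yet at x_k = 1/(2*pi*k) the derivative equals -1/2, however large k is
k = np.array([1, 10, 1000, 10**6])
print(fprime(1.0 / (2 * np.pi * k)))
```

One way to reconcile points 1 and 2 above: the difference quotient from 0 effectively averages the slope over the whole interval [0, h], and the fast oscillations average out, while f'(x) samples the slope at one nearby point.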
I really liked the video and I hope you make it a series, please :)
Thank you! I think I should:D
I'll be waiting for it. Thanks @@DrTrefor
A nice variant of the Dirichlet function is the function that is 1/q for rational numbers whose totally cancelled form (with positive denominator) is p/q, and 0 for all irrational values. That function is continuous on all irrational numbers, but discontinuous on all rational numbers.
I wonder if there's also a function which is differentiable only at irrational numbers. Or a function that's differentiable everywhere, but whose derivative is continuous only at irrational numbers.
love this one
Super interesting video, and maybe even useful on top! Well done.
9:33, x *over 2. Hope it helps. Great video!
I instantly knew I was about to see the dirichlet function
Topology has much, much more of this stuff. It's amazing.
Two other good counterexamples are Thomae's function (aka the popcorn function, the raindrop function, etc.) and the Cantor function (aka the Devil's staircase and other (less picturesque) names).
The first is a function which is continuous at every irrational and discontinuous at every rational; the second is continuous everywhere and has zero derivative almost everywhere, yet is not constant.
My learning is continuous, and hopefully my knowledge remains an increasing function.
My favourites are the two-variable functions like f(x,y)=x^ay^b/(x²+y²)^c and f(x,y)=x^ay^b/(x⁴+y²)^c [both with f(0,0)=0] for well-chosen values of a,b,c. For instance
f(x,y)=xy/(x²+y²) has partial derivatives at every point, yet is not itself continuous, so is not differentiable: f(a,a)=1/2 except when a=0.
f(x,y)=x²y/(x⁴+y²)^(3/4), at the point (0,0), has directional derivatives equal to 0 in every direction and f is continuous there, yet f is not differentiable.
f(x,y)=x³y/(x²+y²) has second derivatives at 0 that defy Clairaut’s theorem.
All these examples demonstrate the need for care when doing the fundamentals of multivariate calculus. It’s harder to set up than single-variable calculus.
Slight lie in the above - my favourites are the examples (counterexamples?) that require the Baire category theorem to “construct”.
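For the first of those examples, a few lines of code make the path dependence visible (the sample points are arbitrary):

```python
def f(x, y):
    """f(x, y) = x*y / (x**2 + y**2) away from the origin, with f(0, 0) defined as 0."""
    if x == 0 and y == 0:
        return 0.0
    return x * y / (x**2 + y**2)

# along the x-axis the values are 0, along y = x they are constantly 1/2,
# so f has no limit at (0, 0) and is not continuous there ...
for t in (1e-1, 1e-4, 1e-8):
    print(f(t, 0.0), f(t, t))

# ... yet both partial derivatives at the origin exist and equal 0
h = 1e-6
print((f(h, 0.0) - f(0.0, 0.0)) / h, (f(0.0, h) - f(0.0, 0.0)) / h)
```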
A differentiable function on (a,b) doesn't have to have a continuous derivative, but Darboux's Theorem says that the derivative must have the intermediate value property. Example 2's derivative attains the value 0 at 0, and any interval [0,a] sees this derivative attaining all the intermediate values as required.
The first function that you've shown is continuous, as it is a ratio of two continuous functions
That was an awesome video. Thanks for all your effort and sharing with us.
Glad you enjoyed it!
my favorite example for the claim that “if a function is differentiable, its derivative is continuous” is the cube root function. at zero, its derivative approaches infinity!
I actually wouldn’t call this function differentiable at 0 because of that vertical tangent you mention
@@DrTrefor ah wait that’s true
I would add, 0 for x=0, e^(-1/x^2) otherwise.
This function is C-infinity (aka smooth) but it doesn't have a convergent Taylor polynomial.
This function is instructive since in calculus you'll probably be shown how the Taylor series of sine, cosine, or the exponential approach those functions. It's important to note that Taylor's theorem doesn't establish convergence as n->inf; it only states that there is local convergence, that is, the error goes to zero as you approach the value at which the polynomial is centered.
Actually it does have a convergent Taylor polynomial. It's just that the Taylor polynomial at 0 doesn't converge to the function in any neighbourhood of 0 (rather, it converges to the constant zero function).
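The standard argument is that each derivative of this function away from 0 is exp(-1/x²) times a rational function, and exp(-1/x²) shrinks faster than any power of x as x -> 0, so every Taylor coefficient at 0 is 0. Here's a quick numeric hint of that (sample points arbitrary):

```python
import math

def g(x):
    """g(x) = exp(-1/x**2) for x != 0, with g(0) = 0."""
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

for x in (0.3, 0.2, 0.1, 0.05):
    print(x, g(x), g(x) / x**10)   # the ratio collapses to 0 as x -> 0
```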
7:54 you can think of this as much the same as the difference between "Weather" and "Climate"
Counterexamples in analysis is a great book
@1:15 about graphing the Dirichlet function: considering both rationals and irrationals are dense in R, then no matter the zoom level or how finely you decide to draw your dots (there is no line to draw for this function...), the graph would cover both lines {y=0} and {y=1} entirely anyway.
Admittedly, this wouldn't help much to explain that the function is discontinuous everywhere, but it would be a typical example of a misleading graphical representation.
Gelbaum and Olmsted, "Counterexamples in Analysis" - great book
super useful reminders to get me started on my first semester ❤
Good luck!!
The last counterexample also shows that the set of all continuous functions on [0, 1] is not closed under pointwise limits.
Perhaps the most counterintuitive thing about mathematical analysis is that differential calculus still allows for a major improvement despite more than three hundred years of development. This improvement is elementary, but so fundamental that it allows us to solve some physical problems, for example the problem of reconciling general relativity and quantum mechanics. You can see the proof video on my channel.
It's important to note that the theorem at 8:08 shows the function would be monotonically increasing/non-decreasing/weakly increasing, not strictly increasing. Using the term "increasing" to describe it is ambiguous and possibly misleading, since the term can mean either of two quite different things. Great video otherwise.
This reminds me of my first Real Analysis class!
There could be more counterexamples, e.g. Thomae's function, which is continuous almost everywhere but still discontinuous, or a Darboux function (i.e. one with the intermediate value property) with discontinuities, e.g. the Conway base-13 function, which doesn't even have a limit anywhere.
I like the example of x^x for isolated discontinuities better. There are a lot of values where it's undefined but for example (-1/3)^(-1/3) is -cbrt(3)
You must extend the function x^x to complex numbers C. With z^z we get all values
Everyone saying that the first function is continuous is wrong.
Remember that a function is continuous at a point iff the limit as x approaches the point exists (and equals the function's value there).
A limit exists only if, for any small error range epsilon, we can get close enough to the point so that every value of the function closer to the point is within that error range.
In this case, the minimum error range we can satisfy is ±1, in the sense that if we zoom in on a point, there's no successor or predecessor to it. There is no rational/irrational "right after" a point; there are infinitely many values, some going to 0, some to 1. Our limit doesn't exist, and therefore neither does continuity.
cool video! what software do you use to draw the graphs?
This is all done in Maple Learn
Wait a minute, for the first example, one could "create" an infinite number of discontinuities by simply multiplying the RHS by 1 in various forms of (x-y)/(x-y) for any y in (-infinity, infinity) (real numbers, the topic of the graph). One could then get creative for infinitely many more points.
Your first counter example (discontinuous everywhere) can be integrated* - which is still mindblowing 54 years after I was first shown it.
The can of worms it opens is, of course, the size difference between countable and uncountable sets.
* the area under the curve is zero.
How can a nowhere-continuous function be integrated? Isn't that the function that is non-integrable in Riemann integration?
@@StudentDriverINOUTBRK Integration can be thought of as approximation by rectangles and taking a limit. If the rectangles are different widths then it still makes sense to just add the areas. If the rectangles overlap then the sum of the areas can only exceed the area under the function.
So how does this apply to the function that is 1 on the rationals and zero on the irrationals? Definitely discontinuous on every interval.
Step 1 note that the rationals can be put in a sequence based on the sum of the numerator and denominator:
1/1; 1/2, 2/1; 1/3, 3/1; 1/4, 2/3, 3/2, 4/1; 1/5...
Step 2 show that the integral is less than k by surrounding the first rational by an interval of length k/2, the next by one of k/4 then k/8 etc. The total length of all the intervals is k and there are lots of overlaps.
Since the height of the rectangles is 1 and k can be made arbitrarily small the integral must be zero.
It's a lovely proof but the conclusion is definitely upsetting.
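Here's a small numerical companion to that covering argument: enumerate a finite batch of rationals in [0, 1] (a stand-in for the full enumeration above), give the n-th one an interval of length k/2^(n+1), and measure the union; it never exceeds k, even though the covered points are dense. The helper names are just for illustration.

```python
from fractions import Fraction

def union_length(intervals):
    """Total length of a union of intervals: merge overlaps, then sum."""
    total, cur_lo, cur_hi = 0.0, None, None
    for lo, hi in sorted(intervals):
        if cur_hi is None or lo > cur_hi:
            if cur_hi is not None:
                total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:
            cur_hi = max(cur_hi, hi)
    if cur_hi is not None:
        total += cur_hi - cur_lo
    return total

rationals = sorted({Fraction(p, q) for q in range(1, 101) for p in range(q + 1)})
k = 0.01
cover = [(float(r) - k * 0.5**(n + 2), float(r) + k * 0.5**(n + 2))
         for n, r in enumerate(rationals)]
print(len(rationals), union_length(cover))   # union length stays below k = 0.01
```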
I recall that the first two counter-examples were Friday afternoon end-of-lecture send-off problems in my Freshman calculus course.
Hi Dr. Bazett!
So good!
Hey, thanks!
Excellent video see you at 8:30 on Thursday
haha see you!
The most unexpected example that I noticed was an exercise we had in first year: what is the electrostatic pressure on a sphere with a given charge? Sounds obvious, just use Gauss's law for the E field? WRONG!!! And I think all of us got that wrong. Then the assistant showed us how to get the right result by differentiating the electrostatic energy as a function of radius. Which is fine, but it doesn't explain why using Gauss gives the wrong result. The assistant's explanation involved the sphere having some thickness, but that didn't feel right either, because you don't need thickness for theoretical problems to be consistent. Then on my way home I figured out what was wrong. Gauss's law is okay right above the sphere, but directly on the surface of the sphere it is different. You can imagine going just the tiniest distance above the sphere: for any arbitrarily small region of the surface, imagine going so close that even this small region seems flat and infinite. In that case that tiny region contributes the same E field as an infinite plane would, and if you step onto the surface itself, you have to subtract that contribution. So in the end it's an infinitely small region, but it contributes 1/2 of the whole E just above the surface. Which is not that surprising, because you're also infinitely close to it. But you don't think of that when you first see the problem.
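A minimal sketch of the calculation described above, assuming a thin shell of radius R and total charge Q, so surface density sigma = Q/(4*pi*R^2); the field-averaging route and the energy route give the same pressure:

```latex
E_{\text{out}} = \frac{\sigma}{\varepsilon_0}, \quad E_{\text{in}} = 0
\;\Rightarrow\;
E_{\text{on the layer}} = \frac{E_{\text{out}} + E_{\text{in}}}{2} = \frac{\sigma}{2\varepsilon_0},
\qquad
P = \sigma \, E_{\text{on the layer}} = \frac{\sigma^2}{2\varepsilon_0}.

U(R) = \frac{Q^2}{8\pi\varepsilon_0 R}, \qquad
P = -\frac{1}{4\pi R^2}\,\frac{\mathrm{d}U}{\mathrm{d}R}
  = \frac{Q^2}{32\pi^2 \varepsilon_0 R^4}
  = \frac{\sigma^2}{2\varepsilon_0}.
```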
Great video concept!
1:00 I know they can be useful for some things, but I've never considered such composites to be real functions. And given that they break so many things, I feel justified.
one of the best videos
I first ran into this kind of thing in 3rd year real analysis. So hard to get a solid non-rigorous basic understanding of what continuity really means. Every real number is associated with one and only one point on the real number line - and vice-versa. But the point itself has ZERO dimension. It's like it doesn't even exist, just some weird abstract concept. Yes, there are rigorous proofs in real analysis that deal with continuity. But for myself, I have come to realize that getting a real (haha) solid core understanding of continuity will always remain somewhat elusive.
10:30 although, you're making a choice to use the right derivative there instead of the symmetrical derivative. If you use the symmetrical derivative it's indeed equal to 0
Regardless, I'd like a definition of "increasing" other than just derivative
The derivative is definitely 1/2???
Anyway, if you want a formal definition of increasing at a point: there's some open interval around the point a such that for all x > y in that interval, f(x) > f(y).
These are all good sources for counter examples that every calculus student should understand.
However, if you are a calculus student and you are asked to find some example/counterexample, please don't start with these functions. Too many times I asked students for an example in some calculus problem and was given a combination of the functions appearing in this video, where a simple linear or constant function (not to mention the zero function) would have been enough.
I won't lie I love the joke on his t-shirt
It *is* geektacular.
Ah yes, brings back memories. Very good video indeed.
My old mathematics teacher used to say "A mistake so common that it was given its own name - partial limit" (it was not in English, so I cannot translate it entirely accurately, but I think you get the idea). In a time before computers were introduced and lecture notes were handwritten, imagine this problem:
"Consider f(x,y) = y/x for x>0, y>0, and describe the behaviour of f in a (small) neighbourhood of (x,y)=(0,0) (i.e. for small positive x and y)."
The most lazy of students would make 'short work' and answer something like this:
"For every (fixed) x as y gets close to zero the result f(x,y) is close to zero. And so f(x,y) will be close to zero, as long as the (x,y)-point is close enough to zero." (for some reason not handing in homework was a cardinal sin, whereas anyone can make mistakes in their work).
Did this old logic failure go away with today's easy ability to plot figures, or do you still encounter it occasionally with modern students?
It takes some work to plot the function, and find the correct zoom settings to note the relevant behavior. I'll leave it to you to extrapolate the results ;)
@@carstenmeyer7786 😀
That's fun because you can get literally any behavior.
If you let y=kx then the limit is k no matter what k is.
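A tiny sketch of that, sampling f(x, y) = y/x along lines y = kx approaching the origin (sample points chosen arbitrarily):

```python
def f(x, y):
    return y / x

for k in (0.0, 1.0, 2.0, 10.0):
    print(k, [f(t, k * t) for t in (1e-1, 1e-3, 1e-6, 1e-9)])
# along y = k*x the value is identically k, so "the limit at (0,0)" depends on the path
```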
Just a minor problem, you defined f to be increasing on an interval wrong. The way it is in the video would be non-decreasing but in order for it to be increasing, you would need a strict inequality
Sir, I had a doubt.
Can I say tan(x) is continuous on its domain, because the points at which the graph breaks are not in the domain, so why consider those points?
My exact doubt is: why call the missing points (points which are not in the domain, like (2n+1)π/2 in the case of tan x) discontinuities?
For historical reasons, when the domain of a function is a proper subset of R, some people consider the points outside the domain to be points of discontinuity. To take a simple example, f(x)=1/x, this function is continuous on its domain, but not continuous on R. I suppose this is related to the intuitive idea that a continuous function is one whose graph you can draw without taking your pencil off the paper.
And sqrt(x) is continuous!
12:00
I have a counterexample to your counterexample.
If you pick f'(1/(2pi*k + pi)), then you get a derivative of 1.5 constantly, even as k goes to infinity and the expression goes to 0.
Just because you can select infinitely many values that don't apply doesn't mean there aren't any values that do apply.
By limiting the values you input, you necessarily change the output.
in order to prove that f isn't ever increasing around 0, you would need to show that there is no value within h of 0, where h goes to 0, such that f'(h) > 0
Because the derivative at 0 is defined as lim h -> 0 (f(h)-f(0))/h = lim h->0 f(h)/h, there must be a value c infinitely close but above 0 and another value infinitely close to but below 0 such that f(c) > 0 or f(c) < 0 respectively.
Because the derivative is defined at 0, and f(x) is continuous at x=0, it must also be true that there is some (possibly infinitely small) positive interval including 0 such that f(x)>f(0), and therefore by the mean value theorem there must be a value of f'(x) in that interval that is greater than 0.
In the last example the sequence doesn't have a limit at x = -1, because the function values at that point form the sequence 1, -1, 1, -1, 1, -1, ...
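Assuming the last example is the sequence f_n(x) = x^n, a few samples show the pointwise limit on [0, 1] being discontinuous, and the alternating values at x = -1 mentioned above:

```python
import numpy as np

xs = np.array([-1.0, 0.0, 0.5, 0.9, 0.99, 1.0])
for n in (1, 2, 10, 100, 1001):
    print(n, np.round(xs**n, 6))
# for x in [0, 1) the values go to 0 while f_n(1) = 1 for every n, so the
# pointwise limit is discontinuous; at x = -1 the values just alternate between +1 and -1
```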
Hello Mr. Bazett, I love your videos, but could you please improve the mic quality? No problems otherwise, great videos, I love you.
In fact you can create functions that are discontinuous on Q and continuous on R\Q, and you can also prove that you can't create a function that is continuous on Q and discontinuous on R\Q.
Really good video
The first function is continuous, though. By definition, a function is continuous iff it's continuous at each point of its domain. It cannot be discontinuous (or continuous) at 1, any more than at @DrTrefor, because neither 1, nor @DrTrefor belongs to its domain. A better example would be 0^x for nonnegative x, or 0^|x| for real x.
Thumbnail is top notch
lol I had way too much fun with that one:D
Is the final claim true if the sequence of functions uniformly converges to f(x)?
Thanks!
Hey thanks so much!!
It was from Spivak's Calculus that I got a better view of such examples, not exactly from my textbooks.
Wikipedia's list of pathological objects is great
The integral of cos(x²) over all x converges, can you believe it?
Wouldn't the third claim also be disproven by f(x)=-x^3 at f(0)?
Isn't x^n convergent on (-1, 1) and divergent at both endpoints?
13:40 and 14:00, what are the differences between the two claims?
Originally we were talking about whether the DERIVATIVE would have the same nice properties as the original function, then we switched to asking whether the LIMIT OF A SEQUENCE of functions would have those same nice properties.
@@DrTrefor Thanks for answering. I was just confused since they were written exactly in the same way.
It was probably an unintentional 'spoiler' created by video editing.
0:49 How can we cancel (x-1)? At x=1 it is zero, and we can only cancel two numbers if they are nonzero.
That’s right. My (verbal) argument was that we can cancel them AWAY from x=1 but AT x=1 we can’t and it leaves the hole in the graph
If we wanted to find the limit as x->1, then we can cancel because when we take a limit we do not want x to be exactly 1.
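A little sympy sketch of that point, assuming the first example is of the form (x² - 1)/(x - 1) (I don't recall the exact expression from the video): the expression itself is undefined at x = 1, but the cancelled form and the limit behave exactly as described.

```python
import sympy as sp

x = sp.symbols('x')
f = (x**2 - 1) / (x - 1)    # assumed form of the first example

print(f.subs(x, 1))         # nan    -- 0/0, undefined at x = 1 (the hole)
print(sp.cancel(f))         # x + 1  -- the cancellation, valid away from x = 1
print(sp.limit(f, x, 1))    # 2      -- the limit exists, since x never equals 1
```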
@@DrTrefor Got it 👍 btw the video is really good... I'm waiting for the next one.
Could you perhaps define levels of continuity based on probability density for the first counterexample? For any finite interval, either rationals or irrationals have a certain level of continuity that is well-defined in most examples over these number types.
You're just mapping two separate continuous functions, by definition.
And obviously, combining them into a step function is discontinuous; any finite step function is discontinuous.
All of the complexity comes from combining them into a step function and has little to do with the independent functions.
You're actually stepping between two graphs and just displaying it on one graph.
Can you make a tutorial video on Mathematica?
Something that I think is worth noting here is that all of the examples you provided that were about derivatives involved something going to infinity, usually a sinusoidal frequency. Might it be the case that when you don't have anything going to infinity, those intuitions do hold true?
7:42 shouldn't the definition exclude the equal sign, leaving only the "less than"? Otherwise, it would apply to constant functions.
Someday you gotta do a video on the Weierstrass function, and why it pissed people off.
haha I have had a short on this half finished for like 3 months:D
Small question: How can I see that between every two irrational numbers there is a rational and vice versa? Doesn't that mean that they go like: rational, irrational, rational, irrational? If so, how can it be that one set is countably infinite and the other is not? It would then be possible to give a surjection (not sure if this is the correct English terminology) between the two sets... right? As in: order all the rationals (which is possible), and let the image of each rational be the irrational to its left.
It's tricky. The thing about saying rational, irrational, rational, etc is that it implies there is a concept of a "next" number. But there isn't! There is no "next" number after 1.
To see that between every two irrational numbers there is a rational, consider this: let those two numbers be a and b, and suppose a < b. Then b - a > 0, and 1/(b - a) > 0. There must be an integer n larger than 1/(b-a). So, n > 1/(b-a) > 0, which means b - a > 1/n > 0, and bn - an > 1. Since there’s a distance more than 1 between bn and an, there must be an integer between them. Call it p. Then bn > p > an. Divide by n to get b > p/n > a, and we have produced a rational between a and b.
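That argument translates almost line by line into a little procedure (just a sketch; the function name is made up, and it takes a < b as floats):

```python
import math
from fractions import Fraction

def rational_between(a, b):
    """Return a rational strictly between a and b, following the argument above.

    Pick n > 1/(b - a), so that b*n - a*n > 1; that gap then contains an
    integer p, and dividing by n gives a < p/n < b.
    """
    assert a < b
    n = math.floor(1 / (b - a)) + 1    # n > 1/(b - a)
    p = math.floor(a * n) + 1          # an integer with a*n < p < b*n
    return Fraction(p, n)

# A rational squeezed between sqrt(2) and sqrt(2) + 10^-6:
print(rational_between(math.sqrt(2), math.sqrt(2) + 1e-6))
```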
The important thing is, there's also a rational number between all pairs of irrational numbers and vice versa. In fact, there are infinitely many.
@@DrTrefor Thanks a lot for the reply :-) That makes sense on the one hand (the two irrationals aren't neighbours in the sense of being right next to each other, but in the sense that there is enough space between them to contain a rational, right?), but it still leaves room for more questions: if there is a rational between every pair of irrationals, is there more than one? If so, there are infinitely many, right? If so, I see the problem; if there is only one, I'm still having trouble... :)
@@Skellborn There is at least one. It doesn’t say there is exactly one. In fact there are infinitely many
The theorem at 8:35 is indeed wrong. You are assuming f is continuous at the end points a and b. If f is not continuous at those points, then f need not be increasing all throughout [a, b].
In fact it could be discontinuous at precisely the end points.
Ah yes, of course; I forgot to add the assumption of continuity, thank you.
But does continuity of the nonzero derivative guarantee (imply) a neighbourhood with monotonicity?
I asked other people and they convinced me that yes, this implies a neighbourhood with monotonicity. If a derivative is positive, then its continuity implies that it's also positive in some neighbourhood, and this implies that the original function is increasing there.
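For the record, a sketch of that standard argument (not taken from the video):

```latex
% If f' is continuous at a and f'(a) > 0, then f is increasing on a neighbourhood of a.
\begin{itemize}
  \item Apply continuity of $f'$ at $a$ with $\varepsilon = \tfrac12 f'(a) > 0$:
        there is $\delta > 0$ with $f'(x) > \tfrac12 f'(a) > 0$ for all
        $x \in (a-\delta,\, a+\delta)$.
  \item For any $x_1 < x_2$ in that interval, the Mean Value Theorem gives a $c$
        between them with $f(x_2) - f(x_1) = f'(c)\,(x_2 - x_1) > 0$,
        so $f$ is (strictly) increasing there.
\end{itemize}
```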
2:29 and further says that between every 2 rationals there is 1 irrational and vice-versa. Doesn't that imply that there is a 1 to 1 mapping possible from rationals to irrationals? Simply pick a random number, map it to the next, and repeat. That would imply the size of the rationals is the same as the size of the irrationals, but that can't be true. So somewhere in the number line there have to be at least 2 irrationals next to each other.
Nope! This is because between any pair of distinct rationals or irrationals there are in fact infinitely many rationals and irrationals. Indeed the idea of a "next" number doesn't even make sense because any "next" number you pick, you've skipped over infinitely many numbers.
This is one of those intuitions that ain't true - I can understand the proofs that it is false but, to me, it still feels intuitively that it ought to be true.
The first example seems to contradict the continuity hypothesis and the fact that the real numbers are not countable.
How can R be uncountable if each element is surrounded by elements of Q, which is countable?
Amazing!
What program/website is that at 7:12?
Maple learn
“I will leave this as an exercise for the reader.” 😭
For example 1, given the fact that there's always a rational/irrational between 2 numbers: 0.9 repeating and 1 are identical, so there's no number in between. Is there something that relates to this?
0.9 recurring and 1 aren't two numbers.
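One standard way to see that they are the same number, via the geometric series:

```latex
0.\overline{9} \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
            \;=\; 9 \cdot \frac{1/10}{1 - 1/10}
            \;=\; 9 \cdot \frac{1}{9}
            \;=\; 1.
```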
9:00 Isn't -x^3 also a counterexample? The derivative at x = 0 is non-negative, but it's not increasing on any neighborhood of 0.
Edit: I can now see that you meant that the derivative is positive, not just non-negative.
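Right. A quick symbolic check of why f(x) = -x³ doesn't meet the hypothesis (a positive derivative at the point), just as a sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = -x**3

fprime = sp.diff(f, x)      # -3*x**2
print(fprime)
print(fprime.subs(x, 0))    # 0 -- non-negative, but not positive, so the
                            # "f'(x) > 0" hypothesis doesn't apply at x = 0
```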
I know I need to take a more careful look at analyzing infinities, but I still can't quite follow how the rationals and the irrationals can exist as different cardinalities while maintaining the property that for any arbitrary pair of irrationals there are rationals in between them and vice versa.
Generally, it feels like many of these counterintuitive examples actually hinge on that same issue; how do you interpret uncountably infinite sets and limits. I've heard that this sort of thing falls under a subject called real analysis. Would you ever be interested in putting out content related to that?
I agree it is very counterintuitive. However, if you try to get from "between two rationals there is an irrational and vice versa" to "there is a bijection between them" it isn't actually possible to do this.
But yes, I do want to do more analysis type of stuff
Get a better mic and your videos are perfect!
8:45 why not take for example f(x) = -x^3?
Because its derivative is negative, not positive.
Yeah I meant -x^3