I don't find failing to be differentiable all that strange, even if the failure is on a perfect nowhere dense set.
What I find weird is that the derivative, if it exists, cannot have a jump discontinuity. That means that the only other kind of discontinuity it can have is infinite oscillation like sin(1/x) near zero.
This one is a bit of an obscure property of derivatives, corollary to theorem 5.12 in Baby Rudin.
"What I find weird is that the derivative, if it exists, cannot have a jump discontinuity."
You mean the derivative of a specific function you have in mind, like perhaps the field equations? Or do you mean something other than what I understand by a jump discontinuity in a derivative, such as one gets for f(x) = {-x for x<0, x for x>=0}?
Tone: Clarification request for my own understanding, not a "gotcha" post. I strongly believe you are saying something true, but there are just too many details elided (because they're trivial to you) for me to quite follow, and I'm intrigued enough to want to be able to follow up, if you'd be so kind as to indulge me.
What is f'(0), the derivative of f at 0? It doesn't even exist, therefore it has no discontinuity at 0.
Darboux's theorem says that there is no way to create a jump in the derivative, in part because a derivative at a point is defined in terms of limits from both sides, so the limits must be the same.
> What is f'(0), the derivative of f at 0? It doesn't even exist, therefore it has no discontinuity at 0.
This is definitely wrong. The derivative of |x| is -1 where x < 0, and 1 where x > 0, and doesn't exist where x = 0. That is a perfect match to the definition of a jump discontinuity -- the limit from the left is not equal to the limit from the right.
It's not at all necessary for the function to exist at x = 0 in order for it to have a discontinuity at x = 0.
But hey, don't take my word for it; why not check the definition on Wolfram?
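A quick numerical sketch (my own illustration in Python, not from the thread) of why this counts as a jump discontinuity: the one-sided limits of the derivative of |x| at 0 disagree, regardless of the fact that the derivative is undefined at 0 itself.

```python
def abs_deriv(x):
    """Derivative of |x|, defined only for x != 0."""
    return -1.0 if x < 0 else 1.0

# Approach 0 from each side: the derivative stays pinned at -1 and +1.
left_limit = abs_deriv(-1e-9)    # limit from the left  -> -1.0
right_limit = abs_deriv(1e-9)    # limit from the right -> +1.0

# The two one-sided limits exist but differ, which is precisely the
# definition of a jump discontinuity at 0, even though abs_deriv(0)
# is not defined as a derivative.
print(left_limit, right_limit)   # -1.0 1.0
```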
The original claim was "the derivative, if it exists, cannot have a jump discontinuity." This is badly stated. You're defending the idea that if the derivative exists at a particular point, then there is no jump discontinuity in the derivative at that point. But there can be a function f which satisfies both of these properties:
- f is the derivative of some other function F. ("The derivative of F exists.")
- f has a jump discontinuity, somewhere. ("The derivative of F has a jump discontinuity.")
That is a question of your personal focus. For example, I'd expect a theorem that applied to "functions from ℝ to ℝ" to apply to f(x) = 1/x unless a specific qualifier was given.
Ah, I see. This is a definition other than the one I understood, which is most assuredly less precise than the one understood by people who have extensively studied analysis. (This being the internet, let me be very clear that I'm basically saying my definition was wrong.)
(I'm pretty decent in mathematics in the general sense, reasoning from axioms, proofs, etc. But as I came up on the computer science side, I'm very lopsided into discrete mathematics, which is a bit unusual. Almost every other way to become a good mathematician makes you lopsided into real analysis and the fields that build on that.)
There's a certain cultural thing in mathematics that is kind of hard to convey unless you've been around other mathematicians, about what counts as a definition and what constitutes proof. And these things definitely change culturally with time. Our current standards of analysis are quite modern. From a modern viewpoint, nobody really bothered to define continuity and limits until the 19th century, even though calculus dates from the 17th century. The 19th-century cultural fixes had a purpose: actual misunderstandings and/or errors had crept into previously published works.
So when we get pedantic about definitions and use early 20th-century formalism, we do so partly as a reaction to those historical misunderstandings. When we ask, "what is the derivative of f at 0?", we're trying to expose holes in understanding that have been patched by more modern frameworks.
What s/he is saying, if I understand correctly, is that only two kinds of discontinuities in derivatives (and second- and higher-order derivatives) can exist: jump discontinuities and those exemplified by the pathology inherent in sin(1/x) at the origin; given that spacetime continues beyond the Cauchy horizon but the derivative does not, the first kind (the abrupt "vertical step" kind) must be ruled out, leaving only the second variety as a candidate. Graph that function, and its first and second derivatives around the origin, and you will begin to realise how bizarre this behaviour is and what strange implications it may have. (Then again, this is deep within a black hole, in an area where time and the radial dimension of space are switched, so it's just a new kind of devilry in an already fraught area.)
No. The relevant theorem is called Darboux's Theorem. It states that if f(x) has a derivative f'(x) everywhere, then f'(x) satisfies the intermediate value property, meaning: If f'(A) < 0 and f'(B) > 0 then there exists a C between A and B such that f'(C) = 0. Why this is interesting is because f'(x) can be a discontinuous function.
f'(x) being discontinuous is not the same thing as f(x) not having a derivative at some point; for example, the absolute value function |x| is simply not differentiable at x=0. The following function IS differentiable at x=0 but its derivative at x=0 is discontinuous:
f(x) = x^2 sin(1/x) if x != 0 else 0
You can verify that this function satisfies Darboux's theorem.
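A numerical sketch (my own illustration in Python, not from the thread) of this verification: the derivative of f exists everywhere, including f'(0) = 0, yet f' oscillates between values near -1 and +1 arbitrarily close to 0, so its discontinuity at 0 is oscillatory rather than a jump.

```python
import math

def f(x):
    # f(x) = x^2 sin(1/x) for x != 0, extended by f(0) = 0.
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def fprime(x):
    # Derivative worked out by hand: 2x sin(1/x) - cos(1/x) for x != 0.
    # At 0, the difference quotient f(h)/h = h sin(1/h) is squeezed to 0,
    # so f'(0) = 0 exists.
    return 2 * x * math.sin(1 / x) - math.cos(1 / x) if x != 0 else 0.0

# f'(0) exists: |f(h)/h| = |h sin(1/h)| <= |h|.
h = 1e-8
assert abs(f(h) / h) <= abs(h)

# But f' is discontinuous at 0: sampling at x = 1/(pi * n), where the
# sin term vanishes and fprime reduces to -cos(pi * n) = +/-1, shows
# oscillation between -1 and +1 no matter how close to 0 we sample.
samples = [fprime(1 / (math.pi * n)) for n in range(100, 110)]
print(min(samples), max(samples))  # values near -1 and near +1
```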
Darboux's theorem implies in particular that if f'(x) exists at some point x and is discontinuous at that x then the discontinuity is not a jump discontinuity.
Sorry, I was hoping the reference to Baby Rudin would be ok. I had forgotten that the theorem had a name.
A jump discontinuity is a discontinuity where intermediate values are not attained, such as in the Heaviside or signum functions. If a derivative exists at a point x, then it cannot have a jump discontinuity at x. However, it can have a discontinuity like the one exemplified by f(x) = 2x sin(1/x) - cos(1/x), with f(0) = 0, as that's the derivative of g(x) = x^2 sin(1/x).