Markus Klyver
2019-10-26 01:43:42 UTC
I've followed John Gabriel for quite a while now, as I find dumb/shitty/cranky math and physics funny. Since I know how rude John Gabriel is and how completely unwilling he is to learn anything new, this isn't a response to him.
I felt, though, that some points have to be made about his (quite elementary) errors on undergrad introductory real analysis:
1. The derivative f' of a real function f is the limit of (f(x+h)-f(x))/h as h→0, provided the limit exists. Now, Gabriel claims this is ill-defined because limits are generally very hard to calculate. It is true that there is no systematic way to find an arbitrary limit.
This doesn't mean that the limit is ill-defined. I could define x to be the 657566556787686578987965th prime number without knowing what x is. I know that such a prime number exists (there are infinitely many primes), hence x is well-defined. Similarly for limits: given that the limit exists, it's well-defined. In fact, limits in metric spaces are unique, so we can talk about *the* limit.
There is no requirement that you actually be able to *compute* something you've defined in order for it to be well-defined: another example is integrals. Integrals are generally impossible to compute in closed form, and there is no known method or algorithm for calculating an arbitrary integral. We just know how to do it in very specific cases. Not being able to calculate an integral doesn't mean it's undefined.
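The prime above is obviously out of reach, but the principle can be sketched in Python with a small index: the definition "x is the n-th prime" is valid before any computation happens, and computing it is a separate matter. (The function names here are my own, for illustration.)

```python
# Sketch: "x := the n-th prime" is well-defined even before we compute it,
# because Euclid's theorem guarantees the n-th prime exists for every n.

def is_prime(m: int) -> bool:
    """Trial division; enough for a small demonstration."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def nth_prime(n: int) -> int:
    """Return the n-th prime (1-indexed) by searching upward."""
    count, m = 0, 1
    while count < n:
        m += 1
        if is_prime(m):
            count += 1
    return m

print(nth_prime(10))  # → 29
```

The point is that `nth_prime(657566556787686578987965)` names a unique number even though this loop would never finish in practice.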
(He doesn't understand integrals either, but I'm too lazy to explain why his "definition" is not a definition.)
I'm very surprised that he almost derived the more general definition of the derivative, but stumbled at the finish line. An equivalent definition of the derivative is that f(x+h)-f(x) = Ah + hρ(h) for some number A and some function ρ with ρ(h)→0 as h→0; then f'(x) = A. To recover the usual definition, divide both sides by h and let h→0. Then (f(x+h)-f(x))/h = A + ρ(h) → A. This is the formulation that generalises naturally to derivatives in several variables.
We can think of Ah as a linear approximation of f(x+h)-f(x), since when h is small, ρ(h) will be small too (by the definition of a limit).
For example, for f(x) = x², we would have A = 2x and ρ(h) = h, which of course goes to zero when h does.
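The x² example can be checked exactly with rational arithmetic: expanding f(x+h)-f(x) = 2xh + h² gives A = 2x and ρ(h) = h, so the difference quotient equals A + ρ(h) with no rounding involved. A minimal sketch:

```python
# For f(x) = x²: f(x+h) - f(x) = 2x·h + h², so A = 2x and ρ(h) = h.
# The difference quotient is exactly A + ρ(h), and ρ(h) → 0 as h → 0.

from fractions import Fraction

def f(x):
    return x * x

x = Fraction(3)
A = 2 * x                            # the derivative f'(3) = 6
for k in range(1, 6):
    h = Fraction(1, 10 ** k)
    quotient = (f(x + h) - f(x)) / h
    rho = quotient - A               # ρ(h) = quotient - A
    assert rho == h                  # for x², ρ(h) is exactly h
    print(h, quotient, rho)
```

Using `Fraction` instead of floats keeps the identity exact, which makes the "quotient = A + ρ(h)" decomposition visible rather than buried in floating-point noise.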
2. Convergence and limits in metric spaces are things Gabriel repeatedly misunderstands. When we define convergence, we do it in a space. That means we already have a space, and we formalise what it means for something to converge *in that space*.
This means there can be metric spaces in which Cauchy sequences do not converge, because we require the limit to be in the metric space. An example is the sequence of partial sums of the decimal expansion of sqrt(2), viewed in the rationals. This sequence does not converge in ℚ, because if it did, its limit would have to be sqrt(2), which isn't rational. But we can view the same sequence in ℝ via the inclusion ℚ↪ℝ of metric spaces, and there it does converge.
Decimal expansions, as I mentioned above, are defined as limits. More specifically, they are defined to be the limit of the partial sums. Since the partial sums of a real decimal expansion are Cauchy, they converge in the real numbers. Metric spaces in which every Cauchy sequence converges are called complete.
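The sqrt(2) example can be made concrete: the n-digit truncations of sqrt(2) are honest rational numbers, consecutive ones differ by less than 10⁻ⁿ (the Cauchy-type bound), yet no rational can be their limit. A small sketch, assuming only the standard library:

```python
# Partial sums of the decimal expansion of sqrt(2), as exact rationals.
# s_n = isqrt(2 · 10^(2n)) / 10^n is sqrt(2) truncated to n decimal digits.
# The sequence is Cauchy in ℚ (consecutive terms differ by < 10^-n),
# but its limit sqrt(2) is irrational, so it only converges in ℝ.

from fractions import Fraction
from math import isqrt

def partial_sum(n: int) -> Fraction:
    """sqrt(2) truncated to n decimal digits, as a rational number."""
    return Fraction(isqrt(2 * 10 ** (2 * n)), 10 ** n)

for n in range(1, 8):
    s, s_next = partial_sum(n), partial_sum(n + 1)
    assert 0 <= s_next - s < Fraction(1, 10 ** n)        # Cauchy-type bound
    assert s * s < 2 < (s + Fraction(1, 10 ** n)) ** 2   # squeezes sqrt(2)
```

Every quantity in the loop lives in ℚ; the number being approached does not, which is exactly the completeness failure the text describes.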
One motivation for the real numbers is precisely completeness. Continuous real functions also behave nicely, e.g. attaining a minimum and a maximum on compact sets (w.r.t. the standard topology generated by open intervals).
Many layman texts gloss over the details of these properties, which is of course why John Gabriel doesn't understand them. If Gabriel actually read an undergrad textbook in real analysis, he would see that these properties are not as "deep" and abstract as he thinks they are.
3. His "definition" of numbers makes no sense. He starts off with the concept of a point "as an abstract idea of where you are". This suggests a very physical definition, not something mathematicians would find satisfying. He then gives a vague explanation of travelling between points (already assuming all sorts of things about how Euclidean space works and what velocity is). He then defines "lines" and circles as "distances", ignoring the fact that a distance is usually a *number*, not a path.
Finally, when defining a "number", he already assumes what a magnitude is, which is only vaguely defined as "the size or extent" of something. He mentions lengths, masses and volumes, but these are physical quantities. So again: he defines something in terms of something that depends on physical nature, which is not a satisfying definition for a mathematician.
He further assumes that these magnitudes exist on their own, but by his own criteria these objects could never be reified, because they simply make no sense. What does it mean to have an abstract magnitude if you don't realise it in some form of underlying space? Of course, Gabriel doesn't mention his use of Euclidean space and the implicit axiomatisation. The axioms of Euclidean space of course follow as "theorems" if you implicitly assume them.
4. And oh, 3 <--> 5-2 makes no sense, since 3 and 5-2 aren't logical statements. A statement can be logically equivalent to another statement, but it makes no sense to say that a number is logically equivalent to another number. You can't logically say "5 implies cat". But you can say "if I have 5 cats, I have a cat". This is a logical statement of the form "if x, then y", and it happens to be (always) true.
Since John Gabriel might read this, I'll give another example, of something that's always false: "If I have no cats, then I have two cats". This statement is formally false, but it still makes logical and formal sense: it's a formal statement which you can evaluate and assign a truth value to. Likewise, saying "x is less than or equal to 4" just means we can check whether x satisfies (x strictly less than 4) OR (x = 4). This is a compound statement made up of two statements connected by a truth-functional OR. It's the same OR you encounter in a computer or electronics engineering class. One important difference is, of course, that physical logic gates depend on physical reality, whilst truth functionals are formalised by propositional logic.
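The cat examples translate directly into truth functions. A small sketch, with material implication written out as its usual definition (not p) or q; the function name `implies` is my own, not a standard library one:

```python
# Truth-functional sketch: "if I have 5 cats, I have a cat" as p → q,
# and "x ≤ 4" decomposed as (x < 4) OR (x == 4).

def implies(p: bool, q: bool) -> bool:
    """Material implication p → q, defined as (not p) or q."""
    return (not p) or q

# "If I have 5 cats, I have a cat": whenever the premise holds, the
# conclusion holds too, so the implication is true for every cat count.
for cats in range(10):
    assert implies(cats == 5, cats >= 1)

# "If I have no cats, I have two cats": false exactly when cats == 0,
# but still a well-formed statement that simply evaluates to False.
cats = 0
assert implies(cats == 0, cats == 2) is False

# "x ≤ 4" is the disjunction (x < 4) OR (x == 4):
for x in [3, 4, 5]:
    assert (x <= 4) == ((x < 4) or (x == 4))
```

Note that Python's `or` here is exactly the truth-functional OR of propositional logic when both operands are booleans, which is the point of the paragraph above.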
So in short: the above is my selection of favourite parts of what John Gabriel doesn't understand about elementary mathematics (mostly focused on analysis). I'll be very eager to see if he responds to this. Maybe he'll even feature me in a video; it's quite comical to be called an ape by a crank.