Talk:Arithmetization of analysis

From Encyclopedia of Mathematics

Latest revision as of 18:00, 30 September 2014

Created this page (obviously still under construction) thinking it would serve as a helpful place to summarize not only the historical facts of the "arithmetization of analysis" but also some current lively discussion of its significance for the "foundations of mathematics," whatever that may turn out to be! LOL Whayes43 (talk) 18:12, 15 April 2014 (CEST)

Current lively discussion, really? I did not know. Where to look for this?
"the definition of the theory of real numbers" --- do you really mean it, or rather, the definition of [the set, or orderd field,... of] real numbers?
Boris Tsirelson (talk) 19:35, 15 April 2014 (CEST)
Yes, thank you, I ought of course to have written "creation" of the theory of real numbers -- as is contained in the footnoted quotation -- or something equivalent.
as for "lively discussion" I found some here: http://www.cs.nyu.edu/pipermail/fom/1998-January/000804.html Perhaps not as lively as I recalled! In any case, Enjoy!
Whayes43 (talk) 04:32, 16 April 2014 (CEST)
Rather lively, I see, thanks. Boris Tsirelson (talk) 08:24, 16 April 2014 (CEST)
But, it seems, you insist on the spelling "pilars" rather than "pillars"? Boris Tsirelson (talk) 19:54, 16 April 2014 (CEST)

Oooops! I didn't see any further comment here, so I pasted in a changed version of the page. Hope I didn't over-write a change that you made, Boris Tsirelson. I will look at the history and try to patch up any damage I may have done. I'll be more careful in the future! Whayes43 (talk) 16:50, 19 April 2014 (CEST)

Yes, please. Hope you do not hate TeX. It seems you keep a version at home and just upload it... this is indeed not a good idea on a wiki. On the other hand, if needed, you can temporarily write "Please do not disturb now; let me work intensively for two days" or something like that. Another (rather good) possibility is to work in your sandbox (no one will disturb you there) and then, when ready, move it to the mainspace. Boris Tsirelson (talk) 19:54, 19 April 2014 (CEST)
In fact, I only added the dollar signs around your formulas. Boris Tsirelson (talk) 16:43, 20 April 2014 (CEST)

I see Pierpoint in the Notes but not in Primary sources or References. Boris Tsirelson (talk) 08:14, 6 May 2014 (CEST)

My apologies . . . and many thanks for your tireless editorial watchfulness! Greatly appreciated. The spelling of his name is actually Pierpont.

"Cauchy then proved that a necessary and sufficient condition that an infinite series converge is that, for a given value of $p$, the magnitude of the difference between $S_n$ and $S_n+p$ tends toward $0$ as $n$ increases indefinitely." — Really? As far as I understand, this claim is wrong. Is it meant "Cauchy believed that he proved"? Or is it meant, not for a given value of $p$, but just the opposite: $p$ is permitted to grow with $n$? In the form written it implies convergence of every series whose terms tend to $0$. Boris Tsirelson (talk) 18:58, 10 September 2014 (CEST)

You are a tireless editor, Mr. Tsirelson, for which I again express my thanks. My laptop malfunctioned during my last edit (just another poor workman blaming his tools!), requiring that I re-enter a number of small amendments. In the process of doing so, I omitted the {__} around the subscripts in the expression $S_{n+p}$, which I believe solves the problem. It must be (somewhat) amusing for you to be monitoring the efforts of "professionals", who, even after working with computers for 47 years, still make amateurish mistakes?!

In any case, I will attend to the matter with as much haste and as little waste as I can manage! --Whayes43 (talk) 14:59, 11 September 2014 (CEST)

No, the braces around the subscripts do not solve the problem. The problem is mathematical, not technical. Consider for example the harmonic series $ 1+\frac12+\frac13+\dots$; here $ S_{n+1}-S_n = \frac1n \to 0 $; also $ S_{n+2}-S_n = \frac1n+\frac1{n+1} \to 0 $; and so on, for every (fixed!) $p$. Nevertheless, the series diverges. Boris Tsirelson (talk) 17:31, 11 September 2014 (CEST)
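For instance, keeping the indexing of the formulas above, the contrast can be spelled out. For any fixed $p$,

$S_{n+p} - S_n = \frac1n + \frac1{n+1} + \dots + \frac1{n+p-1} \le \frac{p}{n} \to 0 \quad (n \to \infty),$

whereas if $p$ is allowed to grow with $n$, say $p = n$, then

$S_{2n} - S_n = \frac1n + \frac1{n+1} + \dots + \frac1{2n-1} \ge n \cdot \frac1{2n} = \frac12$ for every $n$,

so the partial sums do not form a Cauchy sequence and the series diverges; the condition with $p$ held fixed is genuinely weaker.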

Ah, yes, of course. (Apparently, it has been known since the 14th century that the harmonic series diverges!) It's interesting, as Grabiner (1981) notes, that one 18th century way of defining convergence of a series is that the nth term goes to zero, even though (as with the harmonic series) there is no finite sum. She writes this:

In eighteenth-century work on series, sometimes a series is said to converge in the way that the hyperbola 'converges' to its asymptote, that is, when its nth term goes to zero; at other times the series is said to converge in our sense, that is, when its partial sums approach a limit, which is then called the sum of the series. Thus a series may converge in the first sense without converging in the second sense -- Cauchy's (and our) sense.

Thank you for pointing out this lacuna. I will correct -- and endeavour to work with more care.

Nice. Not knowing the history, but knowing the notion now called a Cauchy sequence, I guess that Cauchy had a good understanding of the importance of $p$ not being fixed but being permitted to grow with $n$. Boris Tsirelson (talk) 16:24, 12 September 2014 (CEST)
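For comparison, the modern formulation quantifies over all $p$ at once: $\sum u_n$ converges if and only if for every $\epsilon > 0$ there is an $N$ such that

$|S_{n+p} - S_n| < \epsilon$ for all $n \ge N$ and all $p \ge 1$

(equivalently, $|S_m - S_n| < \epsilon$ for all $m, n \ge N$). Fixing $p$ first and only then letting $n \to \infty$ yields the strictly weaker condition that the harmonic series satisfies.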

Thanks for your comments. I will remove the following problematic text:

Cauchy then established the necessary and sufficient condition that an infinite series converges:
for a given value of $p$, the magnitude of the difference between $S_n$ and $S_{n+p}$ tends toward $0$ as $n$ increases indefinitely.

Some investigation reveals that Cauchy did not even attempt a proof of the sufficiency of the Cauchy criterion!

The excerpt below [from Grabiner (1981) p. 102] provides some background that may interest you:

Cauchy called attention to the differences between the first and the successive partial sums, defined by
$S_{n+1} - S_n = u_n$
$S_{n+2} - S_n = u_n + u_{n+1}$
$S_{n+3} - S_n = u_n + u_{n+1} + u_{n+2}$
. . .

Grabiner continues as follows:

For the series to converge, it was known [by Cauchy and others] to be necessary that the first of these, $u_n$, go to zero. But it was also known, as Cauchy pointed out next, that this was not sufficient:

It is necessary also, for increasing values of $n$, that the different sums $u_n + u_{n+1}$, $u_n + u_{n+1} + u_{n+2}$, ..., that is, the sums of the quantities $u_n$, $u_{n+1}$, $u_{n+2}$, ..., taken, from the first, in whatever number we wish, finish by constantly having numerical [that is, absolute] values less than any assignable limit. Conversely, when these diverse conditions are fulfilled, the convergence of the series is assured.

I believe that the following [from Boyer p. 566], which I stated Cauchy had established as a necessary and sufficient condition of convergence, was meant [by Boyer] to be a summary of the above (translated) text of Cauchy's:

for a given value of $p$, the magnitude of the difference between $S_n$ and $S_{n+p}$ tends toward $0$ as $n$ increases indefinitely.

--Whayes43 (talk) 16:47, 13 September 2014 (CEST)

I've accepted the boundaries of the arithmetization period as 1822 and 1872, this last the "red-letter year" -- so termed by Boyer. Accordingly, I've removed the (in any case, abbreviated) discussion of the set-theoretic definition of function and the set-theoretic construction of the real line, feeling that the history of this subject is best dealt with in another article, if at all. --William Hayes (talk) 16:06, 29 September 2014 (CEST)

As for me, it is now an interesting, well-developed article. But I have a remark, in a separate section below. Boris Tsirelson (talk) 19:38, 29 September 2014 (CEST)

More opinions on non-standard analysis

Some mathematicians like non-standard analysis, some dislike it, and many never mind. Here I quote some opinions that look to me rather well-balanced. Boris Tsirelson (talk) 19:36, 29 September 2014 (CEST)

--------------------

MathStackExchange, "Is non-standard analysis worth learning?": http://math.stackexchange.com/questions/51453/is-non-standard-analysis-worth-learning

So, in summary, I think some parts of "modern analysis" (done lightly) more effectively fulfill one's intuition about "infinitesimals" than does non-standard analysis. (Paul Garrett.)

A propos whether it makes calculus any easier: that is for you to decide. From my point of view, because of the need to rigorously introduce the hyperreals and the notion of the standard part of an expression, for an absolute beginner it is just passing the buck. You either learn how to write ϵ−δ arguments, or you accept a less-intuitive number system and learn all the rules about what you can and cannot do in this system. I think that a real appreciation of non-standard analysis can only come after one has developed some fundamentals of standard analysis. In fact, for me, one of the most striking things about non-standard analysis is that it provides an a posteriori justification why the (by modern standards) sloppy notations of Newton, Leibniz, Cauchy, etc. did not result in a theory that collapses under more careful scrutiny. But that's just my opinion. (Willie Wong.)
-------------------------------

Terry Tao, "Approximate bases, sunflowers, and nonstandard analysis": http://terrytao.wordpress.com/2009/12/13/approximate-bases-sunflowers-and-nonstandard-analysis/ Reproduced in the book Epsilon of Room, Two, Volume 2 (sect. 2.11.4, pp. 227-229).

4. Summary

Let me summarise with a brief list of pros and cons of switching to a nonstandard framework. First, the pros:

  • Many “first-order” parameters such as $\epsilon$ or $N$ disappear from view, as do various “negligible” errors. More importantly, “second-order” parameters, such as the function $F$ appearing in Theorem 2, also disappear from view. (In principle, third-order and higher parameters would also disappear, though I do not yet know of an actual finitary argument in my fields of study which would have used such parameters (with the exception of Ramsey theory, where such parameters must come into play in order to generate such enormous quantities as Graham’s number).) As such, a lot of tedious “epsilon management” disappears.
  • Iterative (and often parameter-heavy) arguments can often be replaced by minimisation (or more generally, extremisation) arguments, taking advantage of such properties as the well-ordering principle, the least upper bound axiom, or compactness.
  • The transfer principle lets one use “for free” any (first-order) statement about standard mathematics in the non-standard setting (provided that all objects involved are internal; see below).
  • Mature and powerful theories from infinitary mathematics (e.g. linear algebra, real analysis, representation theory, topology, functional analysis, measure theory, Lie theory, ergodic theory, model theory, etc.) can be used rigorously in a nonstandard setting (as long as one is aware of the usual infinitary pitfalls, of course; see below).
  • One can formally define terms that correspond to what would otherwise only be heuristic (or heavily parameterised and quantified) concepts such as “small”, “large”, “low rank”, “independent”, “uniformly distributed”, etc.
  • The conversion from a standard result to its nonstandard counterpart, or vice versa, is fairly quick (but see below), and generally needs to be done only once or twice per paper.

Next, the cons:

  • Often requires the axiom of choice, as well as a certain amount of set theory. (There are however weakened versions of nonstandard analysis that can avoid choice that are still suitable for many applications.)
  • One needs the machinery of ultralimits and ultraproducts to set up the conversion from standard to nonstandard structures.
  • The conversion usually proceeds by a proof by contradiction, which (in conjunction with the use of ultralimits) may not be particularly intuitive.
  • One cannot efficiently discern what quantitative bounds emerge from a nonstandard argument (other than by painstakingly converting it back to a standard one, or by applying the tools of proof mining). (On the other hand, in particularly convoluted standard arguments, the quantitative bounds are already so poor – e.g. of iterated tower-exponential type – that losing these bounds is no great loss.)
  • One has to take some care to distinguish between standard and nonstandard objects (and also between internal and external sets and functions, which are concepts somewhat analogous to measurable and non-measurable sets and functions in measure theory). More generally, all the usual pitfalls of infinitary analysis (e.g. interchanging limits, the need to ensure measurability or continuity) emerge in this setting, in contrast to the finitary setting where they are usually completely trivial.
  • It can be difficult at first to conceptually visualise what nonstandard objects look like (although this becomes easier once one maps nonstandard analysis concepts to heuristic concepts such as “small” and “large” as mentioned earlier, thus for instance one can think of an unbounded nonstandard natural number as being like an incredibly large standard natural number).
  • It is inefficient for both nonstandard and standard arguments to coexist within a paper; this makes things a little awkward if one for instance has to cite a result from a standard mathematics paper in a nonstandard mathematics one.
  • There are philosophical objections to using mathematical structures that only exist abstractly, rather than corresponding to the “real world”. (Note though that similar objections were also raised in the past with regard to the use of, say, complex numbers, non-Euclidean geometries, or even negative numbers.)
  • Formally, there is no increase in logical power gained by using nonstandard analysis (at least if one accepts the axiom of choice); anything which can be proven by nonstandard methods can also be proven by standard ones. In practice, though, the length and clarity of the nonstandard proof may be substantially better than the standard one.

In view of the pros and cons, I would not say that nonstandard analysis is suitable in all situations, nor is it unsuitable in all situations, but one needs to carefully evaluate the costs and benefits in a given setting; also, in some cases having both a finitary and infinitary proof side by side for the same result may be more valuable than just having one of the two proofs. My rule of thumb is that if a finitary argument is already spitting out iterated tower-exponential type bounds or worse in an argument, this is a sign that the argument “wants” to be infinitary, and it may be simpler to move over to an infinitary setting (such as the nonstandard setting).
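For orientation, here is a minimal sketch of the ultralimit/ultraproduct machinery mentioned among the cons above (a standard construction, not part of the quoted text). One takes a non-principal ultrafilter $\mathcal{U}$ on $\mathbb{N}$ and forms the ultrapower

${}^*\mathbb{R} = \mathbb{R}^{\mathbb{N}} / \mathcal{U},$

identifying two sequences of reals when they agree on a set belonging to $\mathcal{U}$. The class of the sequence $(1, \tfrac12, \tfrac13, \dots)$ is then a positive infinitesimal: for every standard real $r > 0$ the set $\{n : \tfrac1n < r\}$ is cofinite and hence lies in $\mathcal{U}$. Łoś's theorem yields the transfer principle for first-order statements, and the existence of a non-principal ultrafilter is where (a form of) the axiom of choice enters, which is precisely the source of the first two cons listed above.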


You've set out a wonderful summary of the "usefulness" of NSA.
Yes, as you know, interesting questions have been asked and discussed on MathOverflow and Math Stack Exchange, including these:
How Helpful is NSA? Are infinitesimals dangerous? Is mathematics history written by the victors? . . . .
My awareness of T. Tao's blog, of Keisler's 1970s NSA textbook, and of the many NSA-based extensions and applications came from these sources. I feel that we might well include some/much of the material that you set out, along with (of course) a link to Tao's source. I would like to do this, along with somewhat expanding the section on how our understanding of "The nature of rigour in definitions and proofs" has matured since the discovery of NSA -- and perhaps something connecting modern notions of continua with those of Dedekind/Cantor.
I much desire your opinion on all of this and hope that you will please advise! --William Hayes (talk) 15:39, 30 September 2014 (CEST)
That would be nice, but too long a discussion of NSA could overload this article. We have another article specifically on NSA. Boris Tsirelson (talk) 18:00, 30 September 2014 (CEST)
Ah, yes, of course. Then I'll just polish the section on rigour and perhaps do a wee bit more with continua, as I mentioned above. Thank you for your support. --William Hayes (talk) 20:00, 30 September 2014 (CEST)
How to Cite This Entry:
Arithmetization of analysis. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Arithmetization_of_analysis&oldid=33447