
Neyman, Jerzy

From Encyclopedia of Mathematics
Copyright notice
This article Jerzy Neyman was adapted from an original article by Steve Fienberg, which appeared in StatProb: The Encyclopedia Sponsored by Statistics and Probability Societies. The original article ([http://statprob.com/encyclopedia/JerzyNEYMAN.html StatProb Source], Local Files: pdf | tex) is copyrighted by the author(s); the article has been donated to the Encyclopedia of Mathematics, and further versions are released under the Creative Commons Attribution Share-Alike License. All pages from StatProb are contained in the Category StatProb.

Jerzy NEYMAN

b. 16 April 1894 - d. 5 August 1981

Summary. Neyman was one of the towering figures in the rise of 20th century mathematical statistics, contributing to the theory of statistical experimentation and sample surveys, but especially to the development of the formal theory of tests and confidence intervals.

Jerzy Neyman was born of Polish parents in 1894, in Bendery, which has been variously labeled as Rumania, Ukraine, and Moldavia because of the vicissitudes of border-drawing in Eastern Europe. From 1912 to 1917 he studied mathematics at the University of Kharkov, where he went on to receive a Masters degree in 1920 and then became a lecturer. In the summer of 1921, Neyman went to Bydgoszcz in northern Poland, as part of an exchange of nationals between Russia and Poland agreed to at the end of the Russian-Polish War. There he worked as "senior statistical assistant" at the National Agricultural Institute and he wrote two long papers on agricultural experimentation that were published in 1923 in Polish (Splawa-Neyman, 1990 [1923a], 1925 [1923b]).

In 1924, Neyman obtained his doctorate degree from the University of Warsaw, using as a thesis the work done in Bydgoszcz. The papers prepared during his time at the National Agricultural Institute led to Neyman's later contributions to experimental design (see Neyman et al. 1935) and especially sampling (Neyman, 1934) but were also responsible in part for Neyman's 1925 visit to University College London, to work in Karl Pearson's Laboratory.

There Neyman began a decade-long collaboration with Pearson's son, Egon, a collaboration which yielded the formal theory of tests of hypotheses and also led to Neyman's subsequent invention of confidence intervals. We describe some of these contributions in more detail in subsequent sections.

By 1934, after holding various teaching and research positions in Poland, Neyman moved to England to join Egon Pearson at University College as a Senior Lecturer and then Reader. In 1937, at the invitation of W. Edwards Deming, Neyman made a six-week visit to the United States where he lectured on sampling at the U.S. Department of Agriculture Graduate School (e.g., see Neyman, 1952 for a revised version of these lectures) and at several universities. This visit ultimately led to an offer for Neyman to join the University of California at Berkeley, where he established a leading department of mathematical statistics. There he institutionalized the now famous Berkeley Symposia on Mathematical Statistics and Probability, gathering together the world's leading mathematical statisticians every five years from 1945 to 1970.

Throughout his Berkeley years, problems related to the war effort, to medicine, to weather modification, and to astronomy offered Neyman practical applications which inspired theoretical development. Neyman worked closely with students, supervising almost 40 Ph.D. dissertations at Berkeley. He continued to work at Berkeley until his retirement in 1961, and he remained an active participant in the department until his death on August 5, 1981.

Neyman had always had scientific interests much broader than narrowly construed mathematical statistics; as noted above, he constantly used practical problems as springboards to theoretical advances. But his interests extended to the practical issues of formation of statistical societies (he was a prime mover in the establishment of the Bernoulli Society as a section of the International Statistical Institute) and to science well beyond statistics. Before coming to the United States he had helped commemorate the 450th anniversary of Copernicus' birth. In 1973 he helped mark Copernicus' 500th anniversary by organizing and editing a volume about modern-day Copernican revolutions entitled The Heritage of Copernicus: Theories "Pleasing to the Mind."

Neyman received many honors, including the Guy Medal in Gold from the Royal Statistical Society and numerous honorary degrees. He was elected as a member of the National Academy of Sciences in the United States and as a foreign member of the Swedish and Polish Academies of Science and of the Royal Society. For detailed biographical material on Neyman, see Lehmann (1994) and especially Reid (1982).

Neyman on Experimentation and Sampling

While Neyman is perhaps best known today for his contributions to sampling theory and for a system of inference, his earliest published work concerned the design of experiments. The parallels between the design elements of sampling and experimentation are obvious in Neyman's 1923 paper on experimental design. He conceptualized the assignment of treatments to units in an experiment as the drawing without replacement of balls from urns, one urn for each treatment. These urns had the special property that the removal of a ball (representing the outcome of an experimental unit) from one urn causes it to disappear from the other urns as well. Thus Neyman showed that when there is a finite pool of experimental units that need to be assigned to treatments, the random assignment of units to treatments is exactly parallel to the random selection of a sample from a finite population. Neyman was to take up the issues in experimentation again in a controversial 1935 paper on Latin square designs read before the Royal Statistical Society. This work provoked a major dispute with R. A. Fisher (see Fienberg and Tanur (1996) for further details).
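In current textbook notation (which is not Neyman's own), the urn scheme amounts to the potential-outcomes formulation of a randomized experiment: each of $N$ units $i$ carries a potential outcome $Y_i(t)$ for each treatment $t$, and randomly assigning $n_t$ units to treatment $t$ is a simple random sample drawn without replacement from the finite population of units. The treatment-group mean is therefore unbiased for the finite-population mean of that potential outcome:

$$ E\left[\frac{1}{n_t}\sum_{i \in S_t} Y_i(t)\right] \;=\; \frac{1}{N}\sum_{i=1}^{N} Y_i(t), $$

where $S_t$ denotes the randomly selected set of units receiving treatment $t$. This is a modern restatement, in the notation of the Dabrowska-Speed translation, of the correspondence between randomization and finite-population sampling described above.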

During the 1920s there was much discussion of "the method of representative sampling" at the meetings of the International Statistical Institute (ISI), leading to a committee report which contained a description of two such methods: random sampling (with all elements of the population having the same probability of selection), and purposive selection of large groups of units (in modern terminology, clusters), chosen to match the population on selected control variates. Although the ISI report did not attempt to choose between the methods, Gini and Galvani subsequently presented a major application of purposive selection to the sampling of records from the 1921 Italian census, in which they called into question the accuracy of random sampling.

Neyman responded with his classic 1934 paper, in which he compared purposive and random sampling, but also offered elements of synthesis. He began by describing stratified sampling, noting that earlier work by Bowley considered only the proportionate case. He then gave a crisp description of cluster sampling explaining that the difficulties in sampling individuals at random may be greatly diminished when we adopt groups as the elements of sampling. In this new synthesis, he called this procedure "random sampling by groups." He then suggested combining stratification with clustering to form "random stratified sampling by groups."

The paper also contains a derivation of optimal allocation for stratification, still known as Neyman allocation despite the later discovery of an earlier proof by Chuprov. But perhaps the most notable feature of the paper at the time was Neyman's introduction of confidence intervals.
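In current textbook notation (not Neyman's original), the result can be stated as follows: for strata $h = 1, \ldots, H$ with sizes $N_h$ and within-stratum standard deviations $S_h$, the allocation of a total sample of size $n$ that minimizes the variance of the stratified estimator of the population mean is

$$ n_h \;=\; n\,\frac{N_h S_h}{\sum_{k=1}^{H} N_k S_k}, $$

so that sampling effort is concentrated in the larger and more variable strata. Bowley's proportionate allocation, $n_h = n N_h / N$, is recovered as the special case in which all the $S_h$ are equal.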

The Neyman-Pearson Collaboration

During the same period of time, Neyman entered into a collaboration with Egon Pearson, which was to have a profound impact on the direction both of mathematical statistics and statistical practice for the rest of the 20th century. Between 1926 and 1933, in response to what they viewed as Fisher's ad hoc approach to testing, they developed what we now know as the Neyman-Pearson theory of hypothesis testing, which identifies an alternative hypothesis and recognizes two types of error. Their approach focused on the power to detect alternative hypotheses and led to the identification of optimal test criteria in specific circumstances. It was to serve as the foundation of much later work in mathematical statistics.
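The core of the theory can be summarized compactly in modern form (the notation here is the standard textbook one, not that of the original papers): for testing a simple null hypothesis $H_0: \theta = \theta_0$ against a simple alternative $H_1: \theta = \theta_1$, the most powerful test of size $\alpha$ rejects $H_0$ exactly when the likelihood ratio is large,

$$ \frac{L(x;\theta_1)}{L(x;\theta_0)} \;>\; k, \qquad \text{with } k \text{ chosen so that } P_{\theta_0}(\text{reject } H_0) = \alpha. $$

Here $\alpha$ bounds the Type I error (rejecting a true $H_0$), while the power $1-\beta$ is the probability of rejecting $H_0$ when $H_1$ holds; the two types of error mentioned above are precisely the Type I and Type II errors of this formulation.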

This work on testing also led Neyman to formulate his theory of statistical estimation via the method of confidence intervals, first introduced with limited details in the 1934 sampling paper. As Neyman developed these ideas further in a 1937 paper, he linked the basic structure of tests and interval estimation in a fashion that allowed the carry-over of the Neyman-Pearson optimality results. The frequentist, repeated sampling interpretation of confidence intervals remains difficult for many practicing scientists to comprehend.
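The frequentist content of a confidence interval, which the preceding paragraph notes is often misread, can be made precise as follows (again in modern notation): a level $1-\alpha$ confidence interval is a pair of statistics $(L(X), U(X))$ satisfying

$$ P_\theta\bigl(L(X) \le \theta \le U(X)\bigr) \;\ge\; 1-\alpha \quad \text{for every } \theta. $$

The probability statement refers to the random interval over repeated samples, not to the fixed but unknown $\theta$ once particular data have been observed; this distinction is the source of the interpretive difficulty mentioned above.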

Neyman continued to work on testing and estimation problems throughout his career and later developed an alternative to the Pearson chi-square goodness-of-fit test, as well as the $C(\alpha)$ family of test criteria, and, in 1949, he used linearization methods to develop best asymptotically normal (BAN) estimates.


References

[1] Fienberg, S.E. and Tanur, J.M. (1996). Reconsidering the fundamental contributions of Fisher and Neyman on experimentation and sampling. International Statistical Review, 64, 237--253.
[2] Lehmann, E. L. (1994). Jerzy Neyman. 1894-1981. Biographical Memoirs, Vol. 63, National Academy Press, Washington, DC, 395--420.
[3] Neyman, J. (1934). On two different aspects of the representative method: the method of stratified sampling and the method of purposive selection (with discussion). Journal of the Royal Statistical Society, 97, 558--625.
[4] Neyman, J., with the cooperation of K. Iwaszkiewicz and St. Kolodziejczyk. (1935). Statistical problems in agricultural experimentation (with discussion). Supplement to the Journal of the Royal Statistical Society, 2, 107--180.
[5] Neyman, J. (1937). Outline of a theory of statistical estimation based on the classical theory of probability. Philosophical Transactions of the Royal Society of London, Series A, 236, 333--380.
[6] Neyman, J. (1952 [1938]). Lectures and Conferences on Mathematical Statistics and Probability. U. S. Department of Agriculture, Washington, DC. (The 1952 edition is an expanded and revised version of the original 1938 mimeographed edition.)
[7] Reid, C. (1982). Neyman from Life. Springer-Verlag, New York.
[8] Splawa-Neyman, J. (1925 [1923b]). Contributions of the theory of small samples drawn from a finite population. Biometrika, 17, 472--479. (The note on this republication reads ``These results with others were originally published in La Revue Mensuelle de Statistique, publ. par l'Office Central de Statistique de la République Polonaise, tom. vi. pp. 1--29, 1923.'')
[9] Splawa-Neyman, J. (1990 [1923a]). On the application of probability theory to agricultural experiments. Essay on principles. Section 9. Statistical Science, 5, 465--472. Translated and edited by D. M. Dabrowska and T. P. Speed from the Polish original, which appeared in Roczniki Nauk Rolniczych, Tom X (1923): 1--51 (Annals of Agricultural Sciences).



Reprinted with permission from Christopher Charles Heyde and Eugene William Seneta (Editors), Statisticians of the Centuries, Springer-Verlag Inc., New York, USA.

How to Cite This Entry:
Neyman, Jerzy. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Neyman,_Jerzy&oldid=54029