
Functional data analysis

Copyright notice
This article Analysis of Samples of Curves (= Functional Data Analysis) was adapted from an original article by Hans-Georg Müller, which appeared in StatProb: The Encyclopedia Sponsored by Statistics and Probability Societies ([http://statprob.com/encyclopedia/FunctionalDataAnalysis2.html StatProb source]). The original article is copyrighted by the author(s); it has been donated to the Encyclopedia of Mathematics, and its further issues are under the Creative Commons Attribution Share-Alike License. All pages from StatProb are contained in the Category StatProb.

2020 Mathematics Subject Classification: Primary: 62G05 Secondary: 62M09 [MSN][ZBL]


$ \def\cov{ {\rm cov}} $ $ \def\var{ {\rm var}} $ $ \def\ci{\cite} $ $ \def\cp{\citep} $ $ \def\eps{\varepsilon} $

$ \def\T{\mathcal{T}} $ $ \def\mt{\mathcal{T}} $ $ \def\xk{A_k} $ $ \def\xik{A_{ik}} $ $ \def\hxk{\hat{A}_k} $ $ \def\hxik{\hat{A}_{ik}} $ $ \def\tX{\tilde{X}} $ $ \def\tY{\tilde{Y}} $ $ \def\tij{t_{ij}} $ $ \def\Yij{Y_{ij}} $ $ \def\Xij{X_{ij}} $ $ \def\pk{\phi_k} $

Functional Data Analysis

Hans-Georg Müller

Department of Statistics, University of California, Davis

One Shields Ave., Davis, CA 95616, USA.

e-mail: mueller@wald.ucdavis.edu

KEY WORDS: Autocovariance Operator, Clustering, Covariance Surface, Eigenfunction, Infinite-dimensional Data, Karhunen-Loève Representation, Longitudinal Data, Nonparametrics, Panel Data, Principal Component, Registration, Regression, Smoothing, Square Integrable Function, Stochastic Process, Time Course, Tracking, Warping.


1. Overview


Functional data analysis (FDA) refers to the statistical analysis of data samples consisting of random functions or surfaces, where each function is viewed as one sample element. Typically, the random functions contained in the sample are considered to be independent and to correspond to smooth realizations of an underlying stochastic process. FDA methodology then provides a statistical approach to the analysis of repeatedly observed stochastic processes or data generated by such processes. FDA differs from time series approaches, as the sampling design is very flexible, stationarity of the underlying process is not needed, and autoregressive-moving average models or similar time regression models play no role, except where the elements of such models are functions themselves.

FDA also differs from multivariate analysis, the area of statistics that deals with finite-dimensional random vectors, as functional data are inherently infinite-dimensional and smoothness often is a central assumption. Smoothness has no meaning for multivariate data analysis, which in contrast to FDA is permutation invariant. Even sparsely and irregularly observed longitudinal data can be analyzed with FDA methodology. FDA thus is useful for the analysis of longitudinal or otherwise sparsely sampled data. It is also a key methodology for the analysis of time course, image and tracking data.

The approaches and models of FDA are essentially nonparametric, allowing for flexible modeling. The statistical tools of FDA include smoothing, e.g., based on series expansions, penalized splines, or local polynomial smoothing, and functional principal component analysis. A distinction between smoothing methods and FDA is that smoothing is typically used in situations where one wishes to obtain an estimate for one non-random object (where objects here are functions or surfaces) from noisy observations, while FDA aims at the analysis of a sample of random objects, which may be assumed to be completely observed without noise or to be sparsely observed with noise; many scenarios of interest fall in between these extremes.

An important special situation arises when the underlying random processes generating the data are Gaussian processes, an assumption that is often invoked to justify linear procedures and to simplify methodology and theory. Functional data are ubiquitous and may for example involve samples of density functions \cp{knei:01}, hazard functions, or behavioral tracking data. Application areas that have been emphasized in the statistical literature include growth curves \cp{rao:58, mull:84:2}, econometrics and e-commerce \cp{rams:02:2,jank:06:2}, evolutionary biology \cp{kirk:89,izem:05}, and genetics and genomics \cp{opge:06,mull:08:3}. FDA also applies to panel data as considered in economics and other social sciences.


2. Methodology


Key FDA methods include functional principal component analysis \cp{cast:86,rice:91}, warping and curve registration \cp{gerv:04} and functional regression \cp{rams:91}. Theoretical foundations and asymptotic methods of FDA are closely tied to the perturbation theory of linear operators in Hilbert space \cp{daux:82,bosq:00,mas:03}; a reproducing kernel Hilbert space approach has also been proposed \cp{euba:08}, as well as Bayesian approaches \cp{tele:08}. Finite sample implementations typically require addressing ill-posed problems by suitable regularization, which is often implemented by penalized least squares or penalized likelihood and by truncated series expansions. A broad overview of methods and applied aspects of FDA can be found in the textbook \ci{rams:05}, and additional reviews are in \ci{rice:04,zhao:04,mull:08:7}.

The basic statistical methodologies of ANOVA, regression, correlation, classification and clustering that are available for scalar and vector data have spurred analogous developments for functional data. An additional aspect is that the time axis itself may be subject to random distortions and adequate functional models sometimes need to reflect such time-warping (also referred to as alignment or registration).

Another issue is that often the random trajectories are not directly observed. Instead, for each sample function one has available measurements on a time grid that may range from very dense to extremely sparse. Sparse and randomly distributed measurement times are frequently encountered in longitudinal studies. Additional contamination of the measurements of the trajectory levels by errors is also common. These situations require careful modeling of the relationship between the recorded observations and the assumed underlying functional trajectories \cp{rice:01, jame:03, mull:05:4}.
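
As an illustration of these sampling schemes, the following Python/numpy sketch simulates a small sample of smooth random trajectories and records them once under a dense regular design and once under a sparse irregular design with additive measurement error. The mean function, components, noise level, and numbers of measurements are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                    # number of subjects (curves)
grid = np.linspace(0, 1, 101)             # dense regular time grid on T = [0, 1]

def trajectory(a1, a2, t):
    """Smooth random trajectory X_i(t) = mu(t) + A_i1 phi_1(t) + A_i2 phi_2(t)."""
    mu = np.sin(2 * np.pi * t)
    phi1 = np.sqrt(2) * np.sin(2 * np.pi * t)
    phi2 = np.sqrt(2) * np.cos(4 * np.pi * t)
    return mu + a1 * phi1 + a2 * phi2

A = rng.normal(size=(n, 2)) * np.sqrt([2.0, 0.5])   # FPC scores with variances 2 and 0.5
sigma = 0.3                                          # measurement error standard deviation

# Dense design: every curve observed on the full grid, with noise -> n x p data matrix
Y_dense = np.array([trajectory(a1, a2, grid) for a1, a2 in A])
Y_dense += sigma * rng.normal(size=Y_dense.shape)

# Sparse irregular design: few random time points per curve -> list of (t_ij, Y_ij) pairs
sparse = []
for a1, a2 in A:
    m_i = rng.integers(3, 7)                         # 3 to 6 measurements for subject i
    t_i = np.sort(rng.uniform(0, 1, size=m_i))
    y_i = trajectory(a1, a2, t_i) + sigma * rng.normal(size=m_i)
    sparse.append((t_i, y_i))

print(Y_dense.shape, len(sparse), sparse[0][0])
```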

Initial analysis of functional data includes exploratory plotting of the observed functions in a "spaghetti plot" to obtain an initial idea of functional shapes, to check for outliers and to identify potential "landmarks". Preprocessing may include outlier removal and registration to adjust for time-warping \cp{gass:95,gerv:04, mull:04:4, jame:07,knei:08}.
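
To illustrate registration, the sketch below performs a simple landmark alignment: each simulated curve has a single peak at a random location, the peak location serves as the landmark, and a piecewise linear time warp maps the average landmark to each individual peak. This is only a minimal stand-in for the registration methods cited above; the simulated curves and the choice of landmark are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 201)
n = 15

# Simulated curves with one peak each; the peak location is the random landmark
peaks = rng.uniform(0.3, 0.7, size=n)
curves = np.array([np.exp(-((grid - p) ** 2) / 0.01) for p in peaks])

# Landmark = location of the maximum of each observed curve
landmarks = grid[np.argmax(curves, axis=1)]
target = landmarks.mean()                 # align all peaks to the average landmark

aligned = np.empty_like(curves)
for i in range(n):
    # Piecewise linear warping function h_i with h_i(0)=0, h_i(target)=landmarks[i], h_i(1)=1
    h = np.interp(grid, [0.0, target, 1.0], [0.0, landmarks[i], 1.0])
    # Registered curve: X_i(h_i(t)), evaluated by linear interpolation of the observed curve
    aligned[i] = np.interp(h, grid, curves[i])

print(grid[np.argmax(aligned, axis=1)])   # peak locations after alignment, all near `target`
```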


3. Functional Principal Components


Basic objects in FDA are the mean function $\mu$ and the covariance function $G$. For square integrable random functions $X(t)$, \begin{eqnarray} \mu(t)=E(X(t)), \quad G(s,t)&=&\cov\left\{X(s),X(t)\right\},\quad s,t \in \T, \end{eqnarray} with auto-covariance operator $(A f)(t) = \int_{\T}\, f(s) G(s,t)\, ds.$ This linear operator of Hilbert-Schmidt type has orthonormal eigenfunctions $\pk,\, k=1,2,\ldots,$ with associated ordered eigenvalues $\lambda_{1} \ge \lambda_{2} \ge \ldots$, such that $A\, \pk = \lambda_k \, \pk.$ The foundation for functional principal component analysis is the Karhunen-Loève representation of random functions \cp{karh:46,gren:50,gikh:69} $X(t)=\mu(t)+\sum\limits_{k=1}^{\infty} \xk\,\pk(t),$ where $\xk=\int_{\T} (X(t)-\mu(t))\pk(t)\,dt$ are uncorrelated centered random variables with $\var(\xk)=\lambda_{k}$, referred to as functional principal components (FPCs).
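
For densely and regularly observed curves, the eigen-decomposition above can be approximated by an eigen-analysis of the discretized covariance. The following sketch, with simulated data and arbitrarily chosen components, estimates the mean function, the covariance surface, and the leading eigenvalues and eigenfunctions; the scaling by the grid spacing turns the matrix eigen-problem into an approximation of the operator eigen-problem.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 101
grid = np.linspace(0, 1, p)
dt = grid[1] - grid[0]

# Simulate curves X_i = mu + A_i1 phi_1 + A_i2 phi_2 observed on the dense grid
mu = np.sin(2 * np.pi * grid)
phi = np.vstack([np.sqrt(2) * np.sin(2 * np.pi * grid),
                 np.sqrt(2) * np.cos(4 * np.pi * grid)])        # orthonormal on [0, 1]
A = rng.normal(size=(n, 2)) * np.sqrt([2.0, 0.5])               # true eigenvalues 2 and 0.5
X = mu + A @ phi

# Estimate the mean function and the covariance surface G(s, t)
mu_hat = X.mean(axis=0)
Xc = X - mu_hat
G_hat = Xc.T @ Xc / (n - 1)                                     # p x p covariance matrix

# Eigen-analysis of the discretized covariance operator (A f)(t) = int f(s) G(s, t) ds
evals, evecs = np.linalg.eigh(dt * G_hat)
order = np.argsort(evals)[::-1]
lam_hat = evals[order]                                          # estimated eigenvalues
phi_hat = evecs[:, order].T / np.sqrt(dt)                       # eigenfunctions with int phi^2 = 1

print(lam_hat[:3])                    # close to (2, 0.5, 0) up to sampling error
print(dt * np.sum(phi_hat[0] ** 2))   # approximately 1
```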

Estimation of eigenfunctions, eigenvalues and FPCs is a core objective of FDA. Smoothing-based methods and applications for a variety of sampling designs have been considered \cp{jone:92, silv:96, stan:98, card:00:1,jame:00, paul:09:1}. Estimators employing smoothing methods (local least squares or splines) have been developed for different sampling schemes (sparse, dense, with errors) to obtain a data-based version of the eigen-representation, where one regularizes by truncating at a finite number $K$ of included components. The idea is to borrow strength from the entire sample of functions, rather than estimating each function separately. The functional data are then represented by the subject-specific vectors of score estimates $\hxik,\, k=1,\ldots, K$, which can be used to represent individual trajectories and for subsequent statistical analysis \cp{mull:05:4}.
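
Continuing in the dense, low-noise setting of the previous sketch, the scores $\hxik$ can be approximated by numerical integration of the centered observations against the estimated eigenfunctions, and a truncated representation with $K$ components then reconstructs the trajectories. This is again a minimal simulated sketch and not the sparse-data estimators discussed in the cited work.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, K = 200, 101, 2
grid = np.linspace(0, 1, p)
dt = grid[1] - grid[0]

# Simulated noisy dense observations of X_i = mu + A_i1 phi_1 + A_i2 phi_2
mu = np.sin(2 * np.pi * grid)
phi = np.vstack([np.sqrt(2) * np.sin(2 * np.pi * grid),
                 np.sqrt(2) * np.cos(4 * np.pi * grid)])
A = rng.normal(size=(n, 2)) * np.sqrt([2.0, 0.5])
Y = mu + A @ phi + 0.1 * rng.normal(size=(n, p))

# Estimated mean and eigenfunctions (as in the previous sketch)
mu_hat = Y.mean(axis=0)
Yc = Y - mu_hat
evals, evecs = np.linalg.eigh(dt * (Yc.T @ Yc) / (n - 1))
order = np.argsort(evals)[::-1][:K]
phi_hat = evecs[:, order].T / np.sqrt(dt)

# Estimated FPC scores by numerical integration: A_ik = int (Y_i - mu_hat) phi_k dt
scores = dt * Yc @ phi_hat.T                    # n x K matrix of subject-specific scores

# Truncated Karhunen-Loeve representation with K components
Y_hat = mu_hat + scores @ phi_hat
print(np.mean((Y_hat - (mu + A @ phi)) ** 2))   # small reconstruction error
```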

More adequate representations of functional data are sometimes obtained by fitting pre-specified fixed basis functions with random coefficients. In particular, B-splines \cp{sy:97}, P-splines \cp{yao:06} and wavelets \cp{morr:06} have been successfully applied. A general relation between mixed linear models and fitting functional models with basis expansion coefficients can be used to advantage for modeling and implementation of these approaches. In the theoretical analysis, one may distinguish between an essentially multivariate analysis, which results from assuming that the number of series terms is actually finite, leading to parametric rates of convergence, and an essentially functional approach. In the latter, the number of components is assumed to increase with sample size and this leads to "functional" rates of convergence that depend on the properties of underlying processes, such as decay and spacing of the eigenvalues of the autocovariance operator.
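
As a small illustration of representing curves with a pre-specified fixed basis and curve-specific random coefficients, the sketch below fits each simulated curve by least squares in a common cubic B-spline basis using scipy's make_lsq_spline; the knot placement and the simulated curves are illustrative assumptions, and no mixed-model machinery is used here.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(4)
n, p, k = 30, 80, 3                                # curves, points per curve, spline degree
grid = np.linspace(0, 1, p)

# Common knot sequence: a fixed B-spline basis shared by all curves
interior = np.linspace(0.1, 0.9, 7)
knots = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]

# Simulated noisy curves with random amplitude and phase
amp = 1 + 0.3 * rng.normal(size=n)
shift = 0.1 * rng.normal(size=n)
Y = np.array([a * np.sin(2 * np.pi * (grid + s)) for a, s in zip(amp, shift)])
Y += 0.2 * rng.normal(size=Y.shape)

# Least-squares B-spline fit per curve; the coefficient vectors play the role of
# random coefficients in the fixed basis
splines = [make_lsq_spline(grid, y, knots, k) for y in Y]
coefs = np.array([s.c for s in splines])           # n x (number of basis functions) matrix
fits = np.array([s(grid) for s in splines])        # fitted smooth curves

print(coefs.shape)
print(np.mean((fits - Y) ** 2))
```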


4. Functional Regression and Related Models


Functional regression models may include one or several functions among the predictors, responses, or both. For pairs $(X,Y)$ with random predictor functions $X$ and centered scalar responses $Y$, the functional linear model is $$E(Y|X)=\int_{\T} (X(s)-\mu(s))\beta(s)\,ds.$$ The regression parameter function $\beta$ can be represented in a suitable basis, for example the eigenbasis, with coefficient estimates determined by least squares or similar criteria. The functional linear model has been thoroughly studied, including optimal rates of convergence \cp{card:03:1,card:03:2,mull:05:5,cai:06,hall:07:1,li:07,mas:09}.
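
A common implementation represents $\beta$ in the estimated eigenbasis, which reduces the problem to a least-squares regression of $Y$ on the first $K$ FPC scores. The sketch below does this for simulated data with a known $\beta$; all simulation choices are illustrative, and the truncation level $K$ is fixed rather than selected from the data.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, K = 300, 101, 2
grid = np.linspace(0, 1, p)
dt = grid[1] - grid[0]

# Simulate predictor curves and scalar responses from the functional linear model
mu = np.sin(2 * np.pi * grid)
phi = np.vstack([np.sqrt(2) * np.sin(2 * np.pi * grid),
                 np.sqrt(2) * np.cos(4 * np.pi * grid)])
A = rng.normal(size=(n, 2)) * np.sqrt([2.0, 0.5])
X = mu + A @ phi
beta_true = 1.5 * phi[0] - 0.8 * phi[1]                  # true regression function beta(s)
Y = dt * (X - mu) @ beta_true + 0.2 * rng.normal(size=n) # E(Y|X) = int (X - mu) beta

# FPCA of the predictors (dense design, as in the earlier sketches)
mu_hat = X.mean(axis=0)
Xc = X - mu_hat
evals, evecs = np.linalg.eigh(dt * (Xc.T @ Xc) / (n - 1))
order = np.argsort(evals)[::-1][:K]
phi_hat = evecs[:, order].T / np.sqrt(dt)
scores = dt * Xc @ phi_hat.T                             # n x K FPC scores

# Least-squares regression of Y on the scores gives the eigenbasis coefficients of beta
b, *_ = np.linalg.lstsq(scores, Y, rcond=None)
beta_hat = b @ phi_hat                                   # estimated regression function

print(b)                                                 # close to (1.5, -0.8) up to sign flips
print(dt * np.sum((beta_hat - beta_true) ** 2))          # integrated squared error
```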

The class of useful functional regression models is large, due to the infinite-dimensional nature of the functional predictors. The case where the response is functional \cp{rams:91} is also of interest. Flexible extensions of the functional linear model include, for example, nonparametric approaches \cp{ferr:06}, where unfavorable small ball probabilities and the non-existence of a density in general random function space impose limits on convergence \cp{mull:09:5}, and multiple index models \cp{jame:05}. Another extension is the functional additive model \cp{mull:08:2}. For functional predictors $X=\mu + \sum_{k=1}^\infty A_k \pk$ and centered scalar responses $Y$, this model is given by $$E(Y|X)=\sum_{k=1}^\infty f_k(A_k)$$ for smooth functions $f_k$ with $E(f_k(A_k))=0$.
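
The functional additive model can be fitted componentwise by smoothing the responses against each estimated score, since for independent centered scores $E(Y|A_k)=f_k(A_k)$. The sketch below uses a crude binned-mean smoother as a stand-in for the smoothing methods of the cited work, with simulated scores and arbitrarily chosen component functions.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000

# Independent FPC scores (Gaussian case) and centered component functions f_1, f_2
A1 = rng.normal(scale=np.sqrt(2.0), size=n)
A2 = rng.normal(scale=np.sqrt(0.5), size=n)
f1 = lambda a: a ** 2 - 2.0          # E(A1^2) = 2, so f1 is centered
f2 = lambda a: np.sin(2 * a)         # odd function, centered for symmetric A2
Y = f1(A1) + f2(A2) + 0.3 * rng.normal(size=n)

def binned_mean_smoother(a, y, n_bins=20):
    """Crude smoother: average y within equal-count bins of a (stand-in for a real smoother)."""
    order = np.argsort(a)
    a_sorted, y_sorted = a[order], y[order]
    bins = np.array_split(np.arange(len(a)), n_bins)
    centers = np.array([a_sorted[idx].mean() for idx in bins])
    means = np.array([y_sorted[idx].mean() for idx in bins])
    return centers, means

# Componentwise smoothing of Y against each score recovers f_k (up to smoothing bias)
c1, m1 = binned_mean_smoother(A1, Y)
c2, m2 = binned_mean_smoother(A2, Y)

print(np.mean(np.abs(m1 - f1(c1))))   # rough recovery of f_1 at the bin centers
print(np.mean(np.abs(m2 - f2(c2))))   # rough recovery of f_2 at the bin centers
```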

Another variant of the functional linear model, which is also applicable for classification purposes, is the generalized functional linear model $E(Y|X)=g\{\mu_0 + \int_{\T} \, X(s)\beta(s)\,ds\}$ with scalar intercept $\mu_0$ and link function $g$ \cp{jame:02,esca:04, card:05:1, mull:05:1}. The link function (and an additional variance function, if applicable) is adapted to the (often discrete) distribution of $Y$; the components of the model can be estimated by quasi-likelihood. Besides discriminant analysis via the binomial generalized functional linear model, various other methods have been studied for functional clustering and discriminant analysis \cp{jame:03,chio:07,chio:08}.
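
For a binary response, the generalized functional linear model with logit link can be fitted by (quasi-)likelihood after projecting the predictors on the first $K$ FPC scores. The sketch below implements a few Newton (iteratively reweighted least squares) steps in numpy for simulated data, purely as an illustrative stand-in for the estimators in the cited work.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, K = 500, 101, 2
grid = np.linspace(0, 1, p)
dt = grid[1] - grid[0]

# Simulate centered predictor curves and binary responses from a logit functional model
phi = np.vstack([np.sqrt(2) * np.sin(2 * np.pi * grid),
                 np.sqrt(2) * np.cos(4 * np.pi * grid)])
A = rng.normal(size=(n, 2)) * np.sqrt([2.0, 0.5])
X = A @ phi                                              # centered predictor curves
beta_true = 1.0 * phi[0] - 1.5 * phi[1]
eta = 0.5 + dt * X @ beta_true                           # linear predictor, intercept 0.5
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

# FPC scores of the predictors and design matrix with intercept column
evals, evecs = np.linalg.eigh(dt * (X.T @ X) / (n - 1))
order = np.argsort(evals)[::-1][:K]
phi_hat = evecs[:, order].T / np.sqrt(dt)
Z = np.column_stack([np.ones(n), dt * X @ phi_hat.T])

# Newton / IRLS iterations for logistic regression on the scores
b = np.zeros(K + 1)
for _ in range(25):
    prob = 1.0 / (1.0 + np.exp(-Z @ b))
    W = prob * (1 - prob)
    b = b + np.linalg.solve(Z.T @ (W[:, None] * Z), Z.T @ (Y - prob))

beta_hat = b[1:] @ phi_hat                               # estimated coefficient function
print(b[0])                                              # intercept estimate, near 0.5
print(dt * np.sum((beta_hat - beta_true) ** 2))          # integrated squared error
```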

Of practical relevance are extensions towards polynomial functional regression models \cp{mull:10:1}, hierarchical functional models \cp{crai:09:1}, models with varying domains, models with more than one predictor function, and functional (autoregressive) time series models, among others. In addition to the functional trajectories themselves, derivatives are of interest to study the dynamics of the underlying processes \cp{rams:05}. Software for functional data analysis evolves rapidly and is available from various sources. Freely available software includes, for example, the fda package (R and Matlab) at http://www.psych.mcgill.ca/misc/fda/software.html, and the PACE package (Matlab) at http://anson.ucdavis.edu/~mueller/data/pace.html.


Acknowledgments


Research supported in part by NSF Grant DMS-0806199. Based on an article from Lovric, Miodrag (2011), International Encyclopedia of Statistical Science. Heidelberg: Springer Science+Business Media, LLC.


References

[1] [{Bosq(2000)}]{bosq:00} \textsc{Bosq, D.} (2000). Linear Processes in Function Spaces: Theory and Applications. Springer-Verlag, New York.
[2] [{Cai and Hall(2006)}]{cai:06} \textsc{Cai, T.} and \textsc{Hall, P.} (2006). Prediction in functional linear regression. The Annals of Statistics 34 2159--2179.
[3] [{Cardot(2000)}]{card:00:1} \textsc{Cardot, H.} (2000). Nonparametric estimation of smoothed principal components analysis of sampled noisy functions. Journal of Nonparametric Statistics 12 503--538.
[4] [{Cardot et~al.(2003{\natexlab{a}})Cardot, Ferraty, Mas and Sarda}]{card:03:1} \textsc{Cardot, H.}, \textsc{Ferraty, F.}, \textsc{Mas, A.} and \textsc{Sarda, P.} (2003{\natexlab{a}}). Testing hypotheses in the functional linear model. Scandinavian Journal of Statistics. Theory and Applications 30 241--255.
[5] [{Cardot et~al.(2003{\natexlab{b}})Cardot, Ferraty and Sarda}]{card:03:2} \textsc{Cardot, H.}, \textsc{Ferraty, F.} and \textsc{Sarda, P.} (2003{\natexlab{b}}). Spline estimators for the functional linear model. Statistica Sinica 13 571--591.
[6] [{Cardot and Sarda(2005)}]{card:05:1} \textsc{Cardot, H.} and \textsc{Sarda, P.} (2005). Estimation in generalized linear models for functional data via penalized likelihood. Journal of Multivariate Analysis 92 24--41.
[7] [{Castro et~al.(1986)Castro, Lawton and Sylvestre}]{cast:86} \textsc{Castro, P. E.}, \textsc{Lawton, W. H.} and \textsc{Sylvestre, E. A.} (1986). Principal modes of variation for processes with continuous sample curves. Technometrics 28 329--337.
[8] [{Chiou and Li(2007)}]{chio:07} \textsc{Chiou, J.-M.} and \textsc{Li, P.-L.} (2007). Functional clustering and identifying substructures of longitudinal data. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 69 679--699.
[9] [{Chiou and Li(2008)}]{chio:08} \textsc{Chiou, J.-M.} and \textsc{Li, P.-L.} (2008). Correlation-based functional clustering via subspace projection. Journal of the American Statistical Association 103 1684--1692.
[10] [{Crainiceanu et~al.(2009)}]{crai:09:1} \textsc{Crainiceanu, C.M.}, \textsc{Staicu, A.-M.} and \textsc{Di, C.-Z.} (2009). Generalized multilevel functional regression. Journal of the American Statistical Association 104 1550--1561.
[11] [{Dauxois et~al.(1982)Dauxois, Pousse and Romain}]{daux:82} \textsc{Dauxois, J.}, \textsc{Pousse, A.} and \textsc{Romain, Y.} (1982). Asymptotic theory for the principal component analysis of a vector random function: some applications to statistical inference. Journal of Multivariate Analysis 12 136--154.
[12] [{Escabias et~al.(2004)Escabias, Aguilera and Valderrama}]{esca:04} \textsc{Escabias, M.}, \textsc{Aguilera, A. M.} and \textsc{Valderrama, M. J.} (2004). Principal component estimation of functional logistic regression: discussion of two different approaches. Journal of Nonparametric Statistics 16 365--384.
[13] [{Eubank and Hsing(2008)}]{euba:08} \textsc{Eubank, R. L.} and \textsc{Hsing, T.} (2008). Canonical correlation for stochastic processes. Stochastic Processes and their Applications 118 1634--1661.
[14] [{Ferraty and Vieu(2006)}]{ferr:06} \textsc{Ferraty, F.} and \textsc{Vieu, P.} (2006). {Nonparametric Functional Data Analysis.} Springer, New York, New York.
[15] [{Gasser and Kneip(1995)}]{gass:95} \textsc{Gasser, T.} and \textsc{Kneip, A.} (1995). Searching for structure in curve samples. Journal of the American Statistical Association 90 1179--1188.
[16] [{Gasser et~al.(1984)Gasser, Müller, Köhler, Molinari and Prader}]{mull:84:2} \textsc{Gasser, T.}, \textsc{Müller, H.-G.}, \textsc{Köhler, W.}, \textsc{Molinari, L.} and \textsc{Prader, A.} (1984). Nonparametric regression analysis of growth curves. The Annals of Statistics 12 210--229.
[17] [{Gervini and Gasser(2004)}]{gerv:04} \textsc{Gervini, D.} and \textsc{Gasser, T.} (2004). Self-modeling warping functions. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 66 959--971.
[18] [{Gikhman and Skorokhod(1969)}]{gikh:69} \textsc{Gikhman, I. I.} and \textsc{Skorokhod, A. V.} (1969). Introduction to the Theory of Random Processes. W. B. Saunders Company, Philadelphia.
[19] [{Grenander(1950)}]{gren:50} \textsc{Grenander, U.} (1950). Stochastic processes and statistical inference. Arkiv för Matematik 1 195--277.
[20] [{Hall and Horowitz(2007)}]{hall:07:1} \textsc{Hall, P.} and \textsc{Horowitz, J. L.} (2007). Methodology and convergence rates for functional linear regression. The Annals of Statistics 35 70--91.
[21] [{Hall et~al.(2009)Hall, Müller and Yao}]{mull:09:5} \textsc{Hall, P.}, \textsc{Müller, H.-G.} and \textsc{Yao, F.} (2009). Estimation of functional derivatives. The Annals of Statistics 37 3307--3329.
[22] [{Izem and Kingsolver(2005)}]{izem:05} \textsc{Izem, R.} and \textsc{Kingsolver, J.} (2005). Variation in continuous reaction norms: Quantifying directions of biological interest. American Naturalist 166 277--289.
[23] [{James(2002)}]{jame:02} \textsc{James, G. M.} (2002). Generalized linear models with functional predictors. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 64 411--432.
[24] [{James(2007)}]{jame:07} \textsc{James, G. M.} (2007). Curve alignment by moments. Annals of Applied Statistics 1 480--501.
[25] [{James et~al.(2000)James, Hastie and Sugar}]{jame:00} \textsc{James, G. M.}, \textsc{Hastie, T. J.} and \textsc{Sugar, C. A.} (2000). Principal component models for sparse functional data. Biometrika 87 587--602.
[26] [{James and Silverman(2005)}]{jame:05} \textsc{James, G. M.} and \textsc{Silverman, B. W.} (2005). Functional adaptive model estimation. Journal of the American Statistical Association 100 565--576.
[27] [{James and Sugar(2003)}]{jame:03} \textsc{James, G. M.} and \textsc{Sugar, C. A.} (2003). Clustering for sparsely sampled functional data. Journal of the American Statistical Association 98 397--408.
[28] [{Jank and Shmueli(2006)}]{jank:06:2} \textsc{Jank, W.} and \textsc{Shmueli, G.} (2006). Functional data analysis in electronic commerce research. Statistical Science 21 155--166.
[29] [{Jones and Rice(1992)}]{jone:92} \textsc{Jones, M. C.} and \textsc{Rice, J. A.} (1992). Displaying the important features of large collections of similar curves. The American Statistician 46 140--145.
[30] [{Karhunen(1946)}]{karh:46} \textsc{Karhunen, K.} (1946). Zur {S}pektraltheorie stochastischer {P}rozesse. Annales Academiae Scientiarum Fennicae. Series A. I, Mathematica 1946 7.
[31] [{Kirkpatrick and Heckman(1989)}]{kirk:89} \textsc{Kirkpatrick, M.} and \textsc{Heckman, N.} (1989). A quantitative genetic model for growth, shape, reaction norms, and other infinite-dimensional characters. Journal of Mathematical Biology 27 429--450.
[32] [{Kneip and Ramsay(2008)}]{knei:08} \textsc{Kneip, A.} and \textsc{Ramsay, J. O.} (2008). Combining registration and fitting for functional models. Journal of the American Statistical Association 103 1155--1165.
[33] [{Kneip and Utikal(2001)}]{knei:01} \textsc{Kneip, A.} and \textsc{Utikal, K. J.} (2001). Inference for density families using functional principal component analysis. Journal of the American Statistical Association 96 519--542.
[34] [{Li and Hsing(2007)}]{li:07} \textsc{Li, Y.} and \textsc{Hsing, T.} (2007). On rates of convergence in functional linear regression. Journal of Multivariate Analysis 98 1782--1804.
[35] [{Liu and Müller(2004)}]{mull:04:4} \textsc{Liu, X.} and \textsc{Müller, H.-G.} (2004). Functional convex averaging and synchronization for time-warped random curves. Journal of the American Statistical Association 99 687--699.
[36] [{Mas and Menneteau(2003)}]{mas:03} \textsc{Mas, A.} and \textsc{Menneteau, L.} (2003). Perturbation approach applied to the asymptotic study of random operators. In High dimensional probability, III (Sandjberg, 2002), vol. 55 of Progr. Probab. Birkhäuser, Basel, 127--134.
[37] [{Mas and Pumo(2009)}]{mas:09} \textsc{Mas, A.} and \textsc{Pumo, B.} (2009). Functional linear regression with derivatives. Journal of Nonparametric Statistics 21 19--40.
[38] [{Morris and Carroll(2006)}]{morr:06} \textsc{Morris, J. S.} and \textsc{Carroll, R. J.} (2006). Wavelet-based functional mixed models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68 179--199.
[39] [{Müller(2008)}]{mull:08:7} \textsc{Müller, H.-G.} (2008). Functional modeling of longitudinal data. In Longitudinal Data Analysis (Handbooks of Modern Statistical Methods) (G. Fitzmaurice, M. Davidian, G. Verbeke and G. Molenberghs, eds.). Chapman & Hall/CRC, New York, 223--252.
[40] [{Müller et~al.(2008)Müller, Chiou and Leng}]{mull:08:3} \textsc{Müller, H.-G.}, \textsc{Chiou, J.-M.} and \textsc{Leng, X.} (2008). Inferring gene expression dynamics via functional regression analysis. BMC Bioinformatics 9 60.
[41] [{Müller and Stadtmüller(2005)}]{mull:05:1} \textsc{Müller, H.-G.} and \textsc{Stadtmüller, U.} (2005). Generalized functional linear models. The Annals of Statistics 33 774--805.
[42] [{Müller and Yao(2008)}]{mull:08:2} \textsc{Müller, H.-G.} and \textsc{Yao, F.} (2008). Functional additive models. Journal of the American Statistical Association 103 1534--1544.
[43] [{Müller and Yao(2010)}]{mull:10:1} \textsc{Müller, H.-G.} and \textsc{Yao, F.} (2010). Functional quadratic regression. Biometrika 97 49--64.
[44] [{Opgen-Rhein and Strimmer(2006)}]{opge:06} \textsc{Opgen-Rhein, R.} and \textsc{Strimmer, K.} (2006). Inferring gene dependency networks from genomic longitudinal data: A functional data approach. REVSTAT - Statistical Journal 4 53--65.
[45] [{Paul and Peng(2009)}]{paul:09:1} \textsc{Paul, D.} and \textsc{Peng, J.} (2009). Consistency of restricted maximum likelihood estimators in functional principal components analysis. The Annals of Statistics 37 1229--1271.
[46] [{Ramsay and Dalzell(1991)}]{rams:91} \textsc{Ramsay, J. O.} and \textsc{Dalzell, C. J.} (1991). Some tools for functional data analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 53 539--572.
[47] [{Ramsay and Ramsey(2002)}]{rams:02:2} \textsc{Ramsay, J. O.} and \textsc{Ramsey, J. B.} (2002). Functional data analysis of the dynamics of the monthly index of nondurable goods production. Journal of Econometrics 107 327--344. Information and entropy econometrics.
[48] [{Ramsay and Silverman(2005)}]{rams:05} \textsc{Ramsay, J. O.} and \textsc{Silverman, B. W.} (2005). Functional {Data {A}nalysis}. 2nd ed. Springer Series in Statistics, Springer, New York.
[49] [{Rao(1958)}]{rao:58} \textsc{Rao, C. R.} (1958). Some statistical methods for comparison of growth curves. Biometrics 14 1--17.
[50] [{Rice(2004)}]{rice:04} \textsc{Rice, J. A.} (2004). Functional and longitudinal data analysis: Perspectives on smoothing. Statistica Sinica 14 631--647.
[51] [{Rice and Silverman(1991)}]{rice:91} \textsc{Rice, J. A.} and \textsc{Silverman, B. W.} (1991). Estimating the mean and covariance structure nonparametrically when the data are curves. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 53 233--243.
[52] [{Rice and Wu(2001)}]{rice:01} \textsc{Rice, J. A.} and \textsc{Wu, C. O.} (2001). Nonparametric mixed effects models for unequally sampled noisy curves. Biometrics 57 253--259.
[53] [{Silverman(1996)}]{silv:96} \textsc{Silverman, B. W.} (1996). Smoothed functional principal components analysis by choice of norm. The Annals of Statistics 24 1--24.
[54] [{Staniswalis and Lee(1998)}]{stan:98} \textsc{Staniswalis, J. G.} and \textsc{Lee, J. J.} (1998). Nonparametric regression analysis of longitudinal data. Journal of the American Statistical Association 93 1403--1418.
[55] [{Sy et~al.(1997)Sy, Taylor and Cumberland}]{sy:97} \textsc{Sy, J. P.}, \textsc{Taylor, J. M. G.} and \textsc{Cumberland, W. G.} (1997). A stochastic model for the analysis of bivariate longitudinal {AIDS} data. Biometrics 53 542--555.
[56] [{Telesca and Inoue(2008)}]{tele:08} \textsc{Telesca, D.} and \textsc{Inoue, L. Y.} (2008). Bayesian hierarchical curve registration. Journal of the American Statistical Association 103 328--339.
[57] [{Yao and Lee(2006)}]{yao:06} \textsc{Yao, F.} and \textsc{Lee, T. C. M.} (2006). Penalized spline models for functional principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68 3--25.
[58] [{Yao et~al.(2005{\natexlab{a}})Yao, Müller and Wang}]{mull:05:4} \textsc{Yao, F.}, \textsc{Müller, H.-G.} and \textsc{Wang, J.-L.} (2005{\natexlab{a}}). Functional data analysis for sparse longitudinal data. Journal of the American Statistical Association 100 577--590.
[59] [{Yao et~al.(2005{\natexlab{b}})Yao, Müller and Wang}]{mull:05:5} \textsc{Yao, F.}, \textsc{Müller, H.-G.} and \textsc{Wang, J.-L.} (2005{\natexlab{b}}). Functional linear regression analysis for longitudinal data. The Annals of Statistics 33 2873--2903.
[60] [{Zhao et~al.(2004)Zhao, Marron and Wells}]{zhao:04} \textsc{Zhao, X.}, \textsc{Marron, J. S.} and \textsc{Wells, M. T.} (2004). The functional data analysis view of longitudinal data. Statistica Sinica 14 789--808.

