Model-based geostatistics

Copyright notice
This article Model-based Geostatistics was adapted from an original article by Hannes Kazianka, which appeared in StatProb: The Encyclopedia Sponsored by Statistics and Probability Societies ([http://statprob.com/encyclopedia/ModelBasedGeostatistics.html StatProb source]). The original article is copyrighted by the author(s); it has been donated to the Encyclopedia of Mathematics, and its further issues are under the Creative Commons Attribution Share-Alike License. All pages from StatProb are contained in the Category StatProb.

2020 Mathematics Subject Classification: Primary: 62M30



Model-based Geostatistics
Hannes Kazianka [1], Vienna University of Technology

and

Jürgen Pilz [2], Alpen-Adria University of Klagenfurt

Keywords and Phrases: Spatial Statistics, Geostatistics, Bayesian Kriging, Transformed-Gaussian Kriging, Correlation Function, Geometric Anisotropy, Random Field, Generalized Linear Geostatistical Model

Stochastic models for spatial data

Diggle and Ribeiro (2007) and Mase (2011) describe geostatistics as a branch of spatial statistics that deals with statistical methods for the analysis of spatially referenced data with the following properties. First, values $Y_i$, $i=1,\dots,n$, are observed at a discrete set of sampling locations $\boldsymbol{x}_i$ within some spatial region $\mathcal{S}\subset{\rm R}^d$, $d\geq 1$. Second, each observed value $Y_i$ is either a measurement of, or is statistically related to, the value of an underlying continuous spatial phenomenon, $Z\left(\boldsymbol{x}\right)$, at the corresponding sampling location $\boldsymbol{x}_i$. The term model-based geostatistics refers to geostatistical methods that rely on a stochastic model: the observed phenomenon is viewed as a realization of a stochastic process with continuous spatial index, a so-called random field.

Such a random field $Z\left(\boldsymbol{x}\right)$ is fully determined by specifying all of its finite-dimensional distributions, i.e. $P\left(Z\left(\boldsymbol{x}_1\right)\leq z_1,\dots,Z\left(\boldsymbol{x}_n\right)\leq z_n\right)$ for arbitrary $n\in{\rm N}$ and $\boldsymbol{x}_1,\dots,\boldsymbol{x}_n\in\mathcal{S}$. Since a full characterization of a random field is usually impossible, the mean function $m\left(\boldsymbol{x}\right)=E\left(Z\left(\boldsymbol{x}\right)\right)$ and the covariance function $K\left(\boldsymbol{x}_i,\boldsymbol{x}_j\right)=Cov\left(Z\left(\boldsymbol{x}_i\right),Z\left(\boldsymbol{x}_j\right)\right)$ play a prominent role. Here, $m\left(\boldsymbol{x}\right)$ represents the trend while $K\left(\boldsymbol{x}_i,\boldsymbol{x}_j\right)$ defines the dependence structure of the random field. Typically, the random field is assumed to be weakly (second-order) isotropic, i.e. its mean function is constant and its covariance function $K\left(\boldsymbol{x}_1,\boldsymbol{x}_2\right)$ depends on $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ only through $h=\left\|\boldsymbol{x}_1-\boldsymbol{x}_2\right\|_2$, where $\left\|\cdot\right\|_2$ denotes the Euclidean norm. In this case $K$ is called an isotropic autocovariance function. The covariance function is directly related to smoothness properties of the random field such as mean square continuity and differentiability. A widely used parametric family of isotropic autocovariance functions is the Matérn family $$ K_{\sigma^2,\boldsymbol{\theta}}\left(h\right)=\sigma^2\left(\left(1-\vartheta_2\right)1_{\{h=0\}}+\frac{\vartheta_2}{2^{\kappa-1}\Gamma\left(\kappa\right)}\left(\frac{2\kappa^{\frac{1}{2}}h}{\vartheta_1}\right)^{\kappa}\mathcal{K}_{\kappa}\left(\frac{2\kappa^{\frac{1}{2}}h}{\vartheta_1}\right)\right), $$ where $1_{\{h=0\}}$ equals one if $h=0$ and zero otherwise, $\mathcal{K}_{\kappa}$ denotes the modified Bessel function of order $\kappa>0$, $\vartheta_1>0$ is called the "range parameter" and controls how fast the covariance decays as the distance $h$ gets large, $\vartheta_2\in\left[0,1\right]$ is called the "nugget parameter" ($1-\vartheta_2$ is the proportion of the variance attributed to measurement error), $\sigma^2$ controls the variance and $\boldsymbol{\theta}=\left(\vartheta_1,\vartheta_2,\kappa\right)$ denotes the vector of correlation parameters. The parameter $\kappa$ controls the smoothness of the corresponding process. A thorough mathematical introduction to the theory of random fields is given in Stein (1999) and Yaglom (1987).
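As a numerical illustration, the following Python sketch evaluates this covariance function using SciPy's modified Bessel function kv; the function name matern_cov and its keyword arguments are our own choices, not part of the original article.

```python
import numpy as np
from scipy.special import gamma, kv  # kv is the modified Bessel function K_kappa

def matern_cov(h, sigma2=1.0, theta1=1.0, theta2=0.8, kappa=0.5):
    """Matern covariance with nugget in the parameterization above.

    h: distances; sigma2: variance; theta1: range parameter;
    theta2: nugget parameter (1 - theta2 is the measurement-error
    proportion); kappa: smoothness parameter.
    """
    h = np.atleast_1d(np.asarray(h, dtype=float))
    u = 2.0 * np.sqrt(kappa) * h / theta1
    cov = np.full(h.shape, sigma2)            # K(0) = sigma^2
    pos = h > 0
    cov[pos] = sigma2 * theta2 * u[pos]**kappa * kv(kappa, u[pos]) \
               / (2.0**(kappa - 1) * gamma(kappa))
    return cov
```

For $\kappa=0.5$ the Matérn correlation reduces to the exponential model $\exp\left(-\sqrt{2}h/\vartheta_1\right)$, a quick sanity check for such an implementation.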

The most important geostatistical model is the linear Gaussian model $$ Y_i=\boldsymbol{f}\left(\boldsymbol{x}_i\right)^T\boldsymbol{\beta}+Z\left(\boldsymbol{x}_i\right),\quad i=1,\dots,n, \tag{1}$$

where $Z\left(\boldsymbol{x}\right)$ is an isotropic zero-mean Gaussian random field with autocovariance function $K_{\sigma^2,\boldsymbol{\theta}}$, $\boldsymbol{f}$ is a vector of location-dependent explanatory variables and $\boldsymbol{\beta}=\left(\beta_1,\dots,\beta_p\right)^T$ is the vector of regression parameters. The likelihood function for the linear Gaussian model is $$ p\left(\boldsymbol{Y}\left|\right.\boldsymbol{\beta},\sigma^2,\boldsymbol{\theta}\right)=\left(2\pi\right)^{-\frac{n}{2}}\left|\sigma^2\boldsymbol{\Sigma}_{\boldsymbol{\theta}}\right|^{-\frac{1}{2}}\exp\left\{-\frac{1}{2\sigma^2}\left(\boldsymbol{Y}-\boldsymbol{F}\boldsymbol{\beta}\right)^T\boldsymbol{\Sigma}_{\boldsymbol{\theta}}^{-1}\left(\boldsymbol{Y}-\boldsymbol{F}\boldsymbol{\beta}\right)\right\}, $$ where $\boldsymbol{\Sigma}_{\boldsymbol{\theta}}$ denotes the correlation matrix, $\boldsymbol{F}=(\boldsymbol{f}\left(\boldsymbol{x}_1\right),\dots,\boldsymbol{f}\left(\boldsymbol{x}_n\right))^T$ is the design matrix and $\boldsymbol{Y}=\left(Y_1,\dots,Y_n\right)^T$ is the vector of observations. The maximum likelihood estimates for $\boldsymbol{\beta}$ and $\sigma^2$ in the linear Gaussian model are $$ \hat{\boldsymbol{\beta}}=\left(\boldsymbol{F}^{T}\boldsymbol{\Sigma}_{\boldsymbol{\theta}}^{-1}\boldsymbol{F}\right)^{-1}\boldsymbol{F}^{T}\boldsymbol{\Sigma}_{\boldsymbol{\theta}}^{-1}\boldsymbol{Y}, \tag{3}$$ $$ \hat{\sigma}^{2}=\frac{1}{n}\left(\boldsymbol{Y}-\boldsymbol{F}\hat{\boldsymbol{\beta}}\right)^{T}\boldsymbol{\Sigma}_{\boldsymbol{\theta}}^{-1}\left(\boldsymbol{Y}-\boldsymbol{F}\hat{\boldsymbol{\beta}}\right). \tag{4}$$ Plugging these estimates into the log-likelihood, we arrive at the so-called profiled log-likelihood, which contains only the correlation parameters $\boldsymbol{\theta}$: $$ \log p\left(\boldsymbol{Y}\left|\right.\hat{\boldsymbol{\beta}},\hat{\sigma}^2,\boldsymbol{\theta}\right)=-\frac{n}{2}\left(\log\left(2\pi\right)+1\right)-\frac{1}{2}\log\left|\boldsymbol{\Sigma}_{\boldsymbol{\theta}}\right|-\frac{n}{2}\log\left(\hat{\sigma}^2\right). $$ To obtain $\hat{\boldsymbol{\theta}}$, this expression has to be maximized over $\boldsymbol{\theta}$ numerically. Note that this maximization problem is considerably simpler than maximizing the complete likelihood, where $\boldsymbol{\beta}$ and $\sigma^2$ are additional unknowns, especially when $p$ is large. Spatial prediction, which is often the goal in geostatistics, is performed based on the estimated parameters. The plug-in predictive distribution for the value of the random field at an unobserved location $\boldsymbol{x}_0$ is Gaussian, $$ Y_0\left|\right.\boldsymbol{Y},\hat{\sigma}^2,\hat{\boldsymbol{\theta}}\sim\mbox{N}\left(\boldsymbol{k}^T\boldsymbol{K}^{-1}\boldsymbol{Y}+\boldsymbol{s}^T\hat{\boldsymbol{\beta}},\hat{\sigma}^2-\boldsymbol{k}^T\boldsymbol{K}^{-1}\boldsymbol{k}+\hat{\sigma}^2\boldsymbol{s}^T\left(\boldsymbol{F}^T\boldsymbol{K}^{-1}\boldsymbol{F}\right)^{-1}\boldsymbol{s}\right), \tag{2}$$

where $\boldsymbol{K}=\hat{\sigma}^2\boldsymbol{\Sigma}_{\hat{\boldsymbol{\theta}}}$, $\boldsymbol{s}=\boldsymbol{f}\left(\boldsymbol{x}_0\right)-\boldsymbol{F}^T\boldsymbol{K}^{-1}\boldsymbol{k}$, $\boldsymbol{k}=Cov\left(\boldsymbol{Z},Z\left(\boldsymbol{x}_0\right)\right)$ and $\boldsymbol{Z}=\left(Z\left(\boldsymbol{x}_1\right),\dots,Z\left(\boldsymbol{x}_n\right)\right)^T$.
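The estimation and prediction steps above translate directly into code. Below is a minimal Python sketch, assuming helper callables corr_fun(x, theta), returning the correlation matrix $\boldsymbol{\Sigma}_{\boldsymbol{\theta}}$, and cross_corr(x, x0, theta), returning the correlations between $\boldsymbol{Z}$ and $Z\left(\boldsymbol{x}_0\right)$; all function and variable names are ours.

```python
import numpy as np

def profile_negloglik(theta, Y, F, x, corr_fun):
    """Negative profiled log-likelihood: beta and sigma^2 are profiled
    out via Eqs. 3 and 4; minimize this over theta numerically,
    e.g. with scipy.optimize.minimize."""
    Sigma = corr_fun(x, theta)
    L = np.linalg.cholesky(Sigma)
    Si_Y, Si_F = np.linalg.solve(Sigma, Y), np.linalg.solve(Sigma, F)
    beta = np.linalg.solve(F.T @ Si_F, F.T @ Si_Y)       # Eq. 3 (GLS estimate)
    r = Y - F @ beta
    sigma2 = (r @ np.linalg.solve(Sigma, r)) / len(Y)    # Eq. 4
    logdet = 2.0 * np.log(np.diag(L)).sum()              # log |Sigma_theta|
    n = len(Y)
    return 0.5 * (n * (np.log(2 * np.pi) + 1) + logdet + n * np.log(sigma2))

def plugin_predict(x0, f0, x, Y, F, sigma2, theta, corr_fun, cross_corr):
    """Mean and variance of the plug-in predictive distribution, Eq. 2."""
    K = sigma2 * corr_fun(x, theta)                      # K = sigma^2 Sigma
    k = sigma2 * cross_corr(x, x0, theta)                # Cov(Z, Z(x0))
    Ki_Y, Ki_F, Ki_k = (np.linalg.solve(K, v) for v in (Y, F, k))
    beta = np.linalg.solve(F.T @ Ki_F, F.T @ Ki_Y)
    s = f0 - F.T @ Ki_k
    mean = k @ Ki_Y + s @ beta
    var = sigma2 - k @ Ki_k + sigma2 * (s @ np.linalg.solve(F.T @ Ki_F, s))
    return mean, var
```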

Weak isotropy is a rather strong assumption, and environmental processes are typically not direction-independent but show anisotropic behavior. A popular extension of isotropic random fields is to consider random fields that become isotropic after a linear transformation of the coordinates (Schabenberger and Gotway (2005)). This special variant of anisotropy is called geometric anisotropy. Let $Z_1\left(\boldsymbol{x}\right)$ be an isotropic random field on ${\rm R}^d$ with autocovariance function $K_1$ and mean $\mu$. For the random field $Z\left(\boldsymbol{x}\right)=Z_1\left(\boldsymbol{T}\boldsymbol{x}\right)$, where $\boldsymbol{T}\in{\rm R}^{d\times d}$, we get that $E\left(Z\left(\boldsymbol{x}\right)\right)=\mu$ and the corresponding autocovariance function is $Cov\left(Z\left(\boldsymbol{x}_1\right),Z\left(\boldsymbol{x}_2\right)\right)=K_1\left(\left\|\boldsymbol{T}\left(\boldsymbol{x}_1-\boldsymbol{x}_2\right)\right\|_2\right)$. To correct for geometric anisotropy we need to invert the coordinate transformation: $Z\left(\boldsymbol{T}^{-1}\boldsymbol{x}\right)$ has the same mean as $Z\left(\boldsymbol{x}\right)$ but isotropic autocovariance function $K_1$. When correcting for stretching and rotation of the coordinates in the plane we have $$ \boldsymbol{T}^{-1}=\left(\begin{array}{cc}1&0\\0&\lambda\end{array}\right)\left(\begin{array}{cc}\cos\varphi&-\sin\varphi\\\sin\varphi&\cos\varphi\end{array}\right). $$ Here, $\lambda$ and $\varphi$ are called the anisotropy ratio and the anisotropy angle, respectively. All the models considered in this article can be extended to account for geometric anisotropy by introducing these two parameters.
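The coordinate correction is easy to implement. The following Python sketch applies $\boldsymbol{T}^{-1}$ to a set of planar coordinates so that an isotropic covariance model can be used afterwards; the function name and arguments are our own.

```python
import numpy as np

def correct_anisotropy(coords, ratio, angle):
    """Apply T^{-1} = diag(1, ratio) @ R(angle) to each row of coords.

    coords: (n, 2) array of locations; ratio: anisotropy ratio lambda;
    angle: anisotropy angle phi in radians.
    """
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])          # rotation by phi
    T_inv = np.diag([1.0, ratio]) @ R        # stretching after rotation
    return coords @ T_inv.T                  # row-wise transformation
```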

Bayesian kriging

The first steps towards Bayesian modeling and prediction in geostatistics were made by Kitanidis (1986) and Omre (1987), who developed a Bayesian version of universal kriging. One of the advantages of the Bayesian approach, besides its ability to deal with the uncertainty about the model parameters, is the possibility to work with only a few measurements. A disadvantage of Bayesian kriging compared to the non-Bayesian methodology is certainly the increased computational complexity. Assume a Gaussian random field model of the form of Eq. 1 with known covariance matrix $\boldsymbol{K}$ but unknown parameter vector $\boldsymbol{\beta}$. It is natural to assume a prior of the form $\boldsymbol{\beta}\sim\mbox{N}\left(\boldsymbol{m}_b,\sigma^2\boldsymbol{V}_b\right)$ for $\boldsymbol{\beta}$, where $\boldsymbol{V}_b$ is a positive semidefinite matrix. It can be shown that the posterior distribution for $\boldsymbol{\beta}$ is $$ \boldsymbol{\beta}\left|\right.\boldsymbol{Z}\sim\mbox{N}\left(\tilde{\boldsymbol{\beta}},\sigma^2\boldsymbol{V}_{\tilde{\boldsymbol{\beta}}}\right), $$ where $\tilde{\boldsymbol{\beta}}=\boldsymbol{V}_{\tilde{\boldsymbol{\beta}}}\left(\sigma^2\boldsymbol{F}^T\boldsymbol{K}^{-1}\boldsymbol{Z}+\boldsymbol{V}_{b}^{-1}\boldsymbol{m}_b\right)$ and $\boldsymbol{V}_{\tilde{\boldsymbol{\beta}}}=\left(\sigma^2\boldsymbol{F}^T\boldsymbol{K}^{-1}\boldsymbol{F}+\boldsymbol{V}^{-1}_{b}\right)^{-1}$. The predictive distribution of $Z\left(\boldsymbol{x}_0\right)$ is also Gaussian and given by $$ Z\left(\boldsymbol{x}_0\right)\left|\right.\boldsymbol{Z}\sim\mbox{N}\left(\boldsymbol{k}^T\boldsymbol{K}^{-1}\boldsymbol{Z}+\boldsymbol{s}^T\tilde{\boldsymbol{\beta}},\sigma^2-\boldsymbol{k}^T\boldsymbol{K}^{-1}\boldsymbol{k}+\sigma^2\boldsymbol{s}^T\boldsymbol{V}_{\tilde{\boldsymbol{\beta}}}\boldsymbol{s}\right), $$ where $\boldsymbol{F}$, $\boldsymbol{s}$ and $\boldsymbol{k}$ are defined as in the previous section. From this representation of the Bayesian kriging predictor it becomes clear that Bayesian kriging bridges the gap between simple and universal kriging. We get simple kriging in the case of complete knowledge of the trend, which corresponds to $\boldsymbol{V}_b=\boldsymbol{0}$, whereas we get the universal kriging predictor if we have no knowledge of $\boldsymbol{\beta}$ ($\boldsymbol{V}_b^{-1}=\boldsymbol{0}$ in the sense that the smallest eigenvalue of $\boldsymbol{V}_b$ converges to infinity). Interestingly, the Bayesian universal kriging predictor has a variance smaller than or equal to that of the classical universal kriging predictor (see Eq. 2), since $\left(\boldsymbol{F}^T\boldsymbol{K}^{-1}\boldsymbol{F}+\sigma^{-2}\boldsymbol{V}_b^{-1}\right)^{-1}\preceq\left(\boldsymbol{F}^T\boldsymbol{K}^{-1}\boldsymbol{F}\right)^{-1}$, where $\preceq$ denotes the Loewner partial ordering.
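A minimal Python sketch of these formulas follows, assuming $\boldsymbol{V}_b$ is invertible (take $\boldsymbol{V}_b^{-1}\to\boldsymbol{0}$ for the universal kriging limit); all names are ours.

```python
import numpy as np

def bayes_kriging(Z, F, K, k, f0, sigma2, m_b, V_b):
    """Posterior moments of beta and Bayesian kriging predictive moments.

    Z: observations; F: design matrix; K: covariance matrix of Z;
    k: Cov(Z, Z(x0)); f0: f(x0); m_b, V_b: prior mean and (scaled)
    prior covariance of beta.
    """
    Ki_Z, Ki_F, Ki_k = (np.linalg.solve(K, v) for v in (Z, F, k))
    Vb_inv = np.linalg.inv(V_b)
    V_post = np.linalg.inv(sigma2 * F.T @ Ki_F + Vb_inv)       # V_beta-tilde
    beta_post = V_post @ (sigma2 * F.T @ Ki_Z + Vb_inv @ m_b)  # beta-tilde
    s = f0 - F.T @ Ki_k
    mean = k @ Ki_Z + s @ beta_post
    var = sigma2 - k @ Ki_k + sigma2 * (s @ V_post @ s)
    return beta_post, V_post, mean, var
```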

Bayesian universal kriging is not fully Bayesian because $\boldsymbol{K}$ is assumed known. Diggle and Ribeiro (2007) summarize the results for a fully Bayesian analysis of Gaussian random field models of the form of Eq. 1, where $K_{\sigma^2,\boldsymbol{\theta}}=\sigma^2\boldsymbol{\Sigma}_{\vartheta_1}$ and $\vartheta_1$ is the range parameter of an isotropic autocorrelation function model.

Transformed Gaussian kriging

Probably the simplest way to extend the Gaussian random field model is to assume that a differentiable transformation of the original random field, $Z_1\left(\boldsymbol{x}\right)=g\left(Z\left(\boldsymbol{x}\right)\right)$, is Gaussian. The mean of the transformed field is unknown and parameterized by $\boldsymbol{\beta}$, $E\left(Z_1\left(\boldsymbol{x}\right)\right)=\boldsymbol{f}\left(\boldsymbol{x}\right)^T\boldsymbol{\beta}$. If we assume that the transformation function $g$ and the covariance function $K$ of $Z_1\left(\boldsymbol{x}\right)$ are known, the optimal predictor for $Z\left(\boldsymbol{x}_0\right)$ can be derived using the results from the first section. However, in practice neither $K$ nor $g$ is known and we have to estimate them from the data.

A family of one-parameter transformation functions $g_{\lambda}$ that is widely used in statistics is the so-called Box-Cox family $$ g_{\lambda}\left(z\right)=\left\{\begin{array}{ll}\frac{z^\lambda-1}{\lambda},&\lambda\neq 0,\\\log\left(z\right),&\lambda=0.\end{array}\right. $$ The Box-Cox transformation is valid for positive-valued random fields and is able to model moderately skewed, unimodal data.
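A small Python sketch of the transformation and its inverse (the function names are ours):

```python
import numpy as np

def boxcox(z, lam):
    """Box-Cox transform g_lambda for positive z."""
    return np.log(z) if lam == 0 else (z**lam - 1.0) / lam

def boxcox_inv(y, lam):
    """Inverse transform g_lambda^{-1}."""
    return np.exp(y) if lam == 0 else (lam * y + 1.0)**(1.0 / lam)
```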

The likelihood of the data $\boldsymbol{Y}$ in the transformed Gaussian model can be written as $$ p\left(\boldsymbol{Y}\left|\right.\boldsymbol{\Theta}\right)=J_{\lambda}(\boldsymbol{Y})\left(2\pi\right)^{-\frac{n}{2}}\left|\sigma^{2}\boldsymbol{\Sigma}_{\boldsymbol{\theta}}\right|^{-\frac{1}{2}}\exp\left[-\frac{1}{2\sigma^2}\left(g_{\lambda}(\boldsymbol{Y})-\boldsymbol{F}\boldsymbol{\beta}\right)^{T}\boldsymbol{\Sigma}_{\boldsymbol{\theta}}^{-1}\left(g_{\lambda}\left(\boldsymbol{Y}\right)-\boldsymbol{F}\boldsymbol{\beta}\right)\right], $$ where $\boldsymbol{\Theta}=\left(\boldsymbol{\beta},\boldsymbol{\theta},\sigma^2,\lambda\right)$, $J_{\lambda}(\boldsymbol{Y})$ is the determinant of the Jacobian of the transformation, $g_{\lambda}\left(\boldsymbol{Y}\right)=\left(g_{\lambda}\left(Y_1\right),\dots,g_{\lambda}\left(Y_n\right)\right)^T$ and $\lambda$ is the transformation parameter. De Oliveira et al. (1997) point out that the interpretation of $\boldsymbol{\beta}$ changes with the value of $\lambda$; the same is true for the covariance parameters $\sigma^2$ and $\boldsymbol{\theta}$, though to a lesser extent. To estimate the parameters $\lambda$ and $\boldsymbol{\theta}$, we make use of the profile likelihood approach already encountered in the first section. For fixed values of $\lambda$ and $\boldsymbol{\theta}$, the maximum likelihood estimates for $\boldsymbol{\beta}$ and $\sigma^2$ are given by Eqs. 3 and 4 with $\boldsymbol{Y}$ replaced by $g_{\lambda}\left(\boldsymbol{Y}\right)$. Again, the estimates for $\lambda$ and $\boldsymbol{\theta}$ cannot be written in closed form; they are obtained by plugging $\hat{\boldsymbol{\beta}}$ and $\hat{\sigma}^{2}$ into the likelihood and maximizing numerically.
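In code, only the log-Jacobian term is new compared with the Gaussian case; for the Box-Cox family, $\log J_{\lambda}\left(\boldsymbol{Y}\right)=\left(\lambda-1\right)\sum_{i=1}^n\log Y_i$. A sketch reusing the boxcox and profile_negloglik functions from the sketches above:

```python
import numpy as np

def transformed_negloglik(lam, theta, Y, F, x, corr_fun):
    """Negative log-likelihood in the transformed Gaussian model.

    Applies the Gaussian profiled negative log-likelihood to the
    transformed data and subtracts the log-Jacobian of the Box-Cox
    transform; minimize jointly over (lam, theta)."""
    gY = boxcox(Y, lam)
    log_jac = (lam - 1.0) * np.log(Y).sum()
    return profile_negloglik(theta, gY, F, x, corr_fun) - log_jac
```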

The estimated parameters $\hat{\boldsymbol{\Theta}}$ are subsequently used for spatial prediction. To perform a plug-in prediction we make use of the conditional distribution of the Gaussian variable $Y_0\left|\right.\boldsymbol{Y},\hat{\boldsymbol{\Theta}}$ and back-transform it to the original scale by $g_{\hat{\lambda}}^{-1}$. A Bayesian approach to spatial prediction in the transformed Gaussian model is proposed in De Oliveira et al. (1997).

The copula-based geostatistical model (Kazianka and Pilz (2010)) also works with transformations of the marginal distributions of the random field and is a generalization of transformed Gaussian kriging. In this approach all multivariate distributions of the random field are described by a copula (Genest (2011)) and a family of univariate marginal distributions. Due to the additional flexibility introduced by the choice of the copula and of the marginal distribution, these models are able to deal with extreme observations and multi-modal data.

Generalized linear geostatistical models

Generalized linear models (McCullagh and Nelder (1989)) provide a unifying framework for regression modeling of both continuous and discrete data. Diggle and Ribeiro (2007) extend the classical generalized linear model to what they call the generalized linear geostatistical model (GLGM). The responses $Y_i$, $i=1,\dots,n$, corresponding to the locations $\boldsymbol{x}_i$ are assumed to follow a family of univariate distributions indexed by their expectation, $\mu_i$, and to be conditionally independent given $\boldsymbol{Z}=\left(Z\left(\boldsymbol{x}_1\right),\dots,Z\left(\boldsymbol{x}_n\right)\right)$. The $\mu_i$ are specified through $$ h\left(\mu_i\right)=\boldsymbol{f}\left(\boldsymbol{x}_i\right)^T\boldsymbol{\beta}+Z\left(\boldsymbol{x}_i\right), $$ where $Z\left(\boldsymbol{x}\right)$ is a Gaussian random field with autocovariance function $K_{\boldsymbol{\theta}}$ and $h$ is a pre-defined link function. The two most frequently applied GLGMs are the Poisson log-linear model, where $Y_i$ is assumed to follow a Poisson distribution and the link function is the logarithm, and the binomial logistic-linear model, where $Y_i$ is assumed to follow a binomial distribution, $Bin(n_i,p(\boldsymbol{x}_i))$, with $\mu_i=n_i p\left(\boldsymbol{x}_i\right)$ and $h\left(\mu_i\right)=\log\left(p\left(\boldsymbol{x}_i\right)/\left(1-p\left(\boldsymbol{x}_i\right)\right)\right)$. These models are suitable for representing spatially referenced count data and binomial trials.
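To make the model concrete, the following Python sketch simulates from a Poisson log-linear GLGM; cov_fun(x, theta), returning the covariance matrix of the latent field, is an assumed helper, and all names are ours.

```python
import numpy as np

def simulate_poisson_glgm(x, F, beta, theta, cov_fun, rng=None):
    """Simulate counts Y and the latent field Z at the locations x."""
    rng = np.random.default_rng() if rng is None else rng
    K = cov_fun(x, theta)                            # covariance of Z
    Z = rng.multivariate_normal(np.zeros(len(x)), K)
    mu = np.exp(F @ beta + Z)                        # inverse of the log link
    return rng.poisson(mu), Z
```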

Since maximum likelihood estimation of the parameters is difficult, a Markov chain Monte Carlo (Robert and Casella (2004)) approach is proposed to sample from the posteriors of the model parameters as well as from the predictive distributions at unobserved locations $\boldsymbol{x}_0$. The algorithm proceeds by sampling from $\boldsymbol{Z}\left|\right.\boldsymbol{Y},\boldsymbol{\beta},\boldsymbol{\theta}$, from $\boldsymbol{\theta}\left|\right.\boldsymbol{Z}$ and from $\boldsymbol{\beta}\left|\right.\boldsymbol{Z},\boldsymbol{Y}$ with the help of Metropolis-Hastings updates. At iteration $t+1$, given the current sample $\left(\boldsymbol{Z}^t,\boldsymbol{\theta}^t,\boldsymbol{\beta}^t,Z^t\left(\boldsymbol{x}_0\right)\right)$, perform the following steps (a sketch of the complete sampler is given after the list):


  (a) Update $\boldsymbol{Z}$. For $i=1,\dots,n$, sample a new proposal $Z'\left(\boldsymbol{x}_i\right)$ from the conditional Gaussian distribution $p\left(Z\left(\boldsymbol{x}_i\right)\left|\right.\boldsymbol{\theta}^t,\boldsymbol{Z}^t_{-i}\right)$, where $\boldsymbol{Z}_{-i}^t$ denotes $\boldsymbol{Z}^t=\left(Z^t\left(\boldsymbol{x}_1\right),\dots,Z^t\left(\boldsymbol{x}_n\right)\right)$ with its $i$th element removed. Accept $Z'\left(\boldsymbol{x}_i\right)$ with probability $r=\min\left\{1,\frac{p\left(Y_i\left|\right.\boldsymbol{\beta}^t,Z'\left(\boldsymbol{x}_i\right)\right)}{p\left(Y_i\left|\right.\boldsymbol{\beta}^t,Z^t\left(\boldsymbol{x}_i\right)\right)}\right\}$.
  (b) Update $\boldsymbol{\theta}$. Sample a new proposal $\boldsymbol{\theta}'$ from a proposal distribution $J\left(\boldsymbol{\theta}\left|\right.\boldsymbol{\theta}^t\right)$. Accept the new proposal with probability $r=\min\left\{1,\frac{p\left(\boldsymbol{Z}^{t+1}\left|\right.\boldsymbol{\theta}'\right)J\left(\boldsymbol{\theta}^t\left|\right.\boldsymbol{\theta}'\right)}{p\left(\boldsymbol{Z}^{t+1}\left|\right.\boldsymbol{\theta}^t\right)J\left(\boldsymbol{\theta}'\left|\right.\boldsymbol{\theta}^t\right)}\right\}$.
  (c) Update $\boldsymbol{\beta}$. Sample a new proposal $\boldsymbol{\beta}'$ from a proposal distribution $J\left(\boldsymbol{\beta}\left|\right.\boldsymbol{\beta}^t\right)$. Accept the new proposal with probability $r=\min\left\{1,\frac{\prod_{i=1}^np\left(Y_i\left|\right.Z^{t+1}\left(\boldsymbol{x}_i\right),\boldsymbol{\beta}'\right)J\left(\boldsymbol{\beta}^t\left|\right.\boldsymbol{\beta}'\right)}{\prod_{i=1}^np\left(Y_i\left|\right.Z^{t+1}\left(\boldsymbol{x}_i\right),\boldsymbol{\beta}^t\right)J\left(\boldsymbol{\beta}'\left|\right.\boldsymbol{\beta}^t\right)}\right\}$.
  (d) Draw a sample $Z^{t+1}\left(\boldsymbol{x}_0\right)$ from the conditional Gaussian distribution $Z\left(\boldsymbol{x}_0\right)\left|\right.\boldsymbol{Z}^{t+1},\boldsymbol{\theta}^{t+1}$.

If point predictions for $Z\left(\boldsymbol{x}_0\right)$ are needed, the Monte Carlo approximation to the expected value of $Z\left(\boldsymbol{x}_0\right)\left|\right.\boldsymbol{Y}$ can be used, i.e. $E\left(Z\left(\boldsymbol{x}_0\right)\left|\right.\boldsymbol{Y}\right)\approx\frac{1}{M}\sum_{t=1}^MZ^t\left(\boldsymbol{x}_0\right)$, where $M$ is the number of simulations.
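A compact Python sketch of the whole sampler follows, under illustrative assumptions: flat priors on $\boldsymbol{\theta}$ and $\boldsymbol{\beta}$, symmetric random-walk proposals (so the $J$ terms cancel in the acceptance ratios), and assumed helpers cov_fun(x, theta) for the covariance matrix of $\boldsymbol{Z}$, k_fun(x, x0, theta) returning the pair $\left(Cov\left(\boldsymbol{Z},Z\left(\boldsymbol{x}_0\right)\right),Var\left(Z\left(\boldsymbol{x}_0\right)\right)\right)$, and loglik(y, eta) for $\log p\left(y\left|\right.\eta\right)$ with $\eta=\boldsymbol{f}\left(\boldsymbol{x}\right)^T\boldsymbol{\beta}+Z\left(\boldsymbol{x}\right)$. All names are ours.

```python
import numpy as np

def gauss_logpdf(z, K):
    """log N(z; 0, K), up to an additive constant."""
    L = np.linalg.cholesky(K)
    u = np.linalg.solve(L, z)
    return -np.log(np.diag(L)).sum() - 0.5 * (u @ u)

def mcmc_glgm(Y, x, x0, F, loglik, cov_fun, k_fun, beta0, theta0,
              n_iter=5000, prop_sd=0.05, rng=None):
    """Metropolis-within-Gibbs sampler implementing steps (a)-(d)."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = F.shape
    beta = np.asarray(beta0, dtype=float).copy()
    theta = np.asarray(theta0, dtype=float).copy()
    K = cov_fun(x, theta)
    Q = np.linalg.inv(K)                     # precision matrix of Z
    Z = np.zeros(n)
    z0_draws = np.empty(n_iter)
    for t in range(n_iter):
        # (a) single-site updates: propose from p(Z_i | theta, Z_{-i})
        for i in range(n):
            v = 1.0 / Q[i, i]
            m = Z[i] - v * (Q[i] @ Z)        # conditional mean given Z_{-i}
            z_new = rng.normal(m, np.sqrt(v))
            log_r = (loglik(Y[i], F[i] @ beta + z_new)
                     - loglik(Y[i], F[i] @ beta + Z[i]))
            if np.log(rng.uniform()) < log_r:
                Z[i] = z_new
        # (b) update theta; the ratio involves the Gaussian density p(Z | theta)
        theta_new = theta + prop_sd * rng.standard_normal(theta.shape)
        if np.all(theta_new > 0):
            K_new = cov_fun(x, theta_new)
            if np.log(rng.uniform()) < gauss_logpdf(Z, K_new) - gauss_logpdf(Z, K):
                theta, K = theta_new, K_new
                Q = np.linalg.inv(K)
        # (c) update beta; the ratio involves prod_i p(Y_i | Z_i, beta)
        beta_new = beta + prop_sd * rng.standard_normal(p)
        log_r = sum(loglik(Y[i], F[i] @ beta_new + Z[i])
                    - loglik(Y[i], F[i] @ beta + Z[i]) for i in range(n))
        if np.log(rng.uniform()) < log_r:
            beta = beta_new
        # (d) draw Z(x0) | Z, theta from its conditional Gaussian distribution
        k, v0 = k_fun(x, x0, theta)
        z0_draws[t] = rng.normal(k @ Q @ Z, np.sqrt(max(v0 - k @ Q @ k, 0.0)))
    return z0_draws
```

A point prediction for $Z\left(\boldsymbol{x}_0\right)$ is then the average of the stored draws after discarding a burn-in period.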


References

[1] De Oliveira, V., Kedem, B. & Short, D. (1997) Bayesian prediction of transformed Gaussian fields. Journal of the American Statistical Association 92, 1422-1433.
[2] Diggle, P. & Ribeiro, P. (2007) Model-based geostatistics. New York: Springer.
[3] Genest, C. (2011) Copulas in inference. In: Lovric, M. International Encyclopedia of Statistical Science. Heidelberg: Springer Science+Business Media, LLC.
[4] Kazianka, H. & Pilz, J. (2010) Copula-based geostatistical modeling of continuous and discrete data including covariates. Stochastic Environmental Research and Risk Assessment 24, 661-673.
[5] Kitanidis, P. (1986) Parameter uncertainty in estimation of spatial functions: Bayesian analysis. Water Resources Research 22, 499-507.
[6] Mase, S. (2011) Geostatistics and kriging predictors. In: Lovric, M. International Encyclopedia of Statistical Science. Heidelberg: Springer Science+Business Media, LLC.
[7] McCullagh, P. & Nelder, J. (1989) Generalized linear models. Boca Raton: Chapman & Hall/CRC.
[8] Omre, H. (1987) Bayesian kriging - merging observations and qualified guesses in kriging. Mathematical Geology 19, 25-39.
[9] Robert, C. & Casella, G. (2004) Monte Carlo statistical methods. New York: Springer.
[10] Schabenberger, O. & Gotway, C. (2005) Statistical methods for spatial data analysis. Boca Raton: Chapman & Hall/CRC.
[11] Stein, M. (1999) Interpolation of spatial data. New York: Springer.
[12] Yaglom, A. (1987) Correlation theory of stationary and related random functions. New York: Springer.


{Reprinted with permission from Lovric, Miodrag (2011), International Encyclopedia of Statistical Science. Heidelberg: Springer Science+Business Media, LLC}


  1. Vienna University of Technology, Wiedner Hauptstrasse 8 /105-1, 1040 Vienna, Austria. Email: hannes.kazianka@fam.tuwien.ac.at
  2. Alpen-Adria University of Klagenfurt, Universitaetsstrasse 65-67, 9020 Klagenfurt, Austria. Email: juergen.pilz@uni-klu.ac.at
How to Cite This Entry:
Model-based geostatistics. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Model-based_geostatistics&oldid=37769