
The questions addressed

Suppose we are interested in the estimation of an unknown parameter $\theta$ (this does not mean that we are in a parametric context). The two main questions asked at that stage are:

1. Which estimator $\hat{\theta}$ should we use?
2. Once $\hat{\theta}$ is chosen, how accurate is it as an estimator of $\theta$?

The second question has to be answered through information about the distribution of the estimator, or at least about its variance.

Of course there are answers in very simple contexts: for instance, when the parameter of interest is the mean $\mu$, the estimator $\bar{X}$ has a known standard error, whose estimate is sometimes denoted

\begin{displaymath}
\hat{\sigma}=\left[\sum_{i=1}^{n}(X_i-\bar{X})^2/n^2\right]^{1/2}.
\end{displaymath}
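As a quick sketch (added for illustration, not part of the original notes), this estimate is easy to compute; the sample below is a simulated placeholder:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=50)  # hypothetical sample

n = len(x)
xbar = x.mean()
# Plug-in estimate of the standard error of the mean:
# [ sum_i (x_i - xbar)^2 / n^2 ]^(1/2)
sigma_hat = np.sqrt(np.sum((x - xbar) ** 2) / n**2)
print(f"mean = {xbar:.3f}, estimated se = {sigma_hat:.3f}")
\end{verbatim}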

However, no such simple formula is available for the sample median, for instance.

In maximum likelihood theory, question 1 is answered by using the MLE $\hat{\theta}$, and question 2 can then be answered with the approximate standard error $\widehat{se}(\hat{\theta})=\frac{1}{\sqrt{I(\hat{\theta})}}$, where $I(\cdot)$ denotes the Fisher information of the sample.
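As a standard worked example (added here for concreteness; the particular model is not in the original notes), consider an exponential sample $X_1,\ldots,X_n$ with rate $\lambda$. The MLE is $\hat{\lambda}=1/\bar{X}$ and the Fisher information of the whole sample is $I(\lambda)=n/\lambda^2$, so

\begin{displaymath}
\widehat{se}(\hat{\lambda})\approx\frac{1}{\sqrt{I(\hat{\lambda})}}=\frac{\hat{\lambda}}{\sqrt{n}}.
\end{displaymath}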

The bootstrap is a more general way to answer question 2. The idea is the following.

If we had several samples from the unknown (true) distribution $F$, then we could consider the variation of the estimator across samples:

\begin{displaymath}
\begin{array}{llllll}
F & \stackrel{\mbox{Random Sample}}{\longrightarrow} & (X_1^1,X_2^1,\ldots,X_n^1) & = & {\cal X}_n^1 & \hat{\theta}_1 \\
F & \stackrel{\mbox{Random Sample}}{\longrightarrow} & (X_1^2,X_2^2,\ldots,X_n^2) & = & {\cal X}_n^2 & \hat{\theta}_2 \\
\vdots & & & & & \vdots \\
F & \stackrel{\mbox{Random Sample}}{\longrightarrow} & (X_1^B,X_2^B,\ldots,X_n^B) & = & {\cal X}_n^B & \hat{\theta}_B
\end{array}\end{displaymath}
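Here is a minimal Python sketch of this ideal situation (an illustration added here; the choice of an exponential $F$ and of the median as the estimator is arbitrary):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, B = 50, 1000

# Ideal (unrealistic) situation: F is known, so we can draw B fresh
# samples of size n from it and recompute the estimator each time.
theta_hats = np.array(
    [np.median(rng.exponential(scale=1.0, size=n)) for _ in range(B)]
)

# The spread of the B estimates describes the sampling variability.
print(f"sd of the B estimates = {theta_hats.std(ddof=1):.3f}")
\end{verbatim}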

Such a situation never occurs in practice, so we replace these new samples by a resampling procedure based on the only information we have about $F$, namely the empirical distribution $\hat{F}_n$: this is what is called bootstrapping.
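By contrast with the ideal sketch above, here is a minimal sketch of the bootstrap (same hypothetical setup): the $B$ new samples are drawn with replacement from the one observed sample, i.e. from $\hat{F}_n$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=50)  # the single observed sample
n, B = len(x), 1000

# Bootstrap: resample with replacement from the empirical
# distribution F_hat_n, i.e. from the observed data themselves.
theta_stars = np.array(
    [np.median(rng.choice(x, size=n, replace=True)) for _ in range(B)]
)

# Bootstrap estimate of the standard error of the sample median.
print(f"bootstrap se of the median = {theta_stars.std(ddof=1):.3f}")
\end{verbatim}

Note that only the sampling step changes: the unknown $F$ is replaced by $\hat{F}_n$.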


Susan Holmes 2004-05-19