(Reference: Erich Lehmann, 1998, pp. 381-438.)
We often speak of the asymptotic properties of the sample mean $\bar{X}_n$. These refer to the sequence $(\bar{X}_n)_{n \ge 1}$. These functions are, in some sense, the same for all sample sizes. The notion of a statistical functional makes this clearer.
Suppose we are interested in real-valued parameters. We often have a situation where the parameter of interest is a function $\theta = T(F)$ of the distribution function $F$; such maps $T$ are called statistical functionals. The usual estimates for these functionals are obtained by simply plugging in the empirical distribution function $\hat{F}_n$ for the unknown theoretical one, giving $\hat{\theta} = T(\hat{F}_n)$. Thus, taking into account that $\hat{F}_n$ puts mass $1/n$ at each observation, for any function $h$ we have:
$$\int h(x)\, d\hat{F}_n(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} h(X_i).$$
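Since the notes use MATLAB, the following is only a rough Python sketch of the plug-in idea (the data and variable names are mine): averaging $h$ over the sample is the same as integrating $h$ against the empirical cdf.

```python
import numpy as np

# Plug-in principle: for any function h,
#   integral of h dF_hat = (1/n) * sum_i h(X_i),
# so the plug-in estimate of a functional T(F) is T(F_hat).
rng = np.random.default_rng(0)
x = rng.uniform(0, 5, size=15)            # hypothetical sample

h = lambda t: t**2
plug_in_second_moment = np.mean(h(x))      # integral of t^2 dF_hat(t)

# Plug-in estimates of some common functionals:
mean_hat = np.mean(x)                      # T(F) = integral of t dF(t)
var_hat = np.mean((x - mean_hat)**2)       # plug-in variance divides by n, not n-1
```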
We say that $X_n$, with cdf $F_n$, converges in law to the random variable $X$, with cdf $F$, if $F_n(x) \to F(x)$ at every continuity point $x$ of $F$, and we write $X_n \xrightarrow{\mathcal{L}} X$.
Convergence in Probability
Note:
If $X_n \xrightarrow{\mathcal{L}} X$, where $F$ is a limit distribution, and $Y_n \xrightarrow{P} c$ for a constant $c$,
then
$X_n + Y_n \xrightarrow{\mathcal{L}} X + c$ and $Y_n X_n \xrightarrow{\mathcal{L}} cX$ (Slutsky's theorem).
Definition:
A statistic is said to be pivotal if its distribution does not depend on any unknown parameters.
Example: Student's $t$ statistic.
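A quick numerical check of pivotality (a Python sketch of mine, not from the notes): simulate Student's $t$ statistic from normal samples with two very different $(\mu, \sigma)$ pairs; because the statistic is pivotal, the two simulated distributions agree.

```python
import numpy as np

def t_stat(centered):
    # centered: (B, n) matrix of samples with their true mean already subtracted;
    # returns t = sqrt(n) * xbar / s for each row
    n = centered.shape[1]
    return np.sqrt(n) * centered.mean(axis=1) / centered.std(axis=1, ddof=1)

rng = np.random.default_rng(1)
B, n = 20000, 15
t1 = t_stat(rng.normal(0, 1, (B, n)))            # N(0, 1) samples
t2 = t_stat(rng.normal(100, 25, (B, n)) - 100)   # N(100, 25^2) samples

# Empirical quantiles should be close regardless of (mu, sigma)
q1 = np.quantile(t1, [0.05, 0.5, 0.95])
q2 = np.quantile(t2, [0.05, 0.5, 0.95])
```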
The bootstrap is said to work if the bootstrap distribution of the statistic converges to the same limit as its sampling distribution. As for the convergence in law:
$$\sup_x \left| P^{*}\!\left( T_n^{*} \le x \right) - P\left( T_n \le x \right) \right| \to 0,$$
where $P^{*}$ denotes probability under resampling from $\hat{F}_n$.
I backed this up with a simulation experiment: I generated random uniforms on (0,1) and multiplied them by 5 to simulate a U(0,5) distribution; I then drew many samples of size 15 from this distribution and took their maxima.
>> test1=rand(3,10)
test1 =
    0.5804    0.8833    0.3863    0.2362    0.2711    0.6603    0.3936    0.6143    0.8899    0.2002
    0.7468    0.1463    0.9358    0.9170    0.8861    0.1625    0.4927    0.4924    0.0578    0.3766
    0.0295    0.8030    0.7004    0.2994    0.4198    0.0537    0.2206    0.8396    0.2538    0.0489
>> [v,i]=max(test1)
v =
    0.7468    0.8833    0.9358    0.9170    0.8861    0.6603    0.4927    0.8396    0.8899    0.3766
i =
     2     1     2     2     2     1     2     3     1     2

function v=smaxi(B,n,maxi)
%Simulation of the uniform distribution
%on (0,maxi) with estimation of the maximum
%Samples of size n, B simulations
rands=maxi*rand(n,B);
[v,i]=max(rands);

>> mm=smaxi(10000,15,5);
>> hist(mm,40)
This is what the histogram looks like:
As can be seen, although the sample size is not very big, the sampling distribution of the maximum is already quite close to exponential in shape, piling up against the upper bound 5.
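For readers without MATLAB, here is a rough Python equivalent of the smaxi experiment (names and structure are mine):

```python
import numpy as np

def smaxi(B, n, maxi, rng=None):
    """Simulate B samples of size n from Uniform(0, maxi); return their maxima."""
    if rng is None:
        rng = np.random.default_rng()
    return (maxi * rng.random((n, B))).max(axis=0)

mm = smaxi(10000, 15, 5)
# All maxima lie in (0, 5) and concentrate near 5;
# a histogram of mm (e.g. matplotlib hist(mm, 40)) shows the
# exponential-like pile-up at the upper bound.
```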
In the bootstrap example, we start with a given sample, sample1, which we resample 10,000 times, computing 10,000 maxima. We do not use loops but rather matrix operations, which are more efficient in both S-Plus and MATLAB.
Nonparametric Bootstrap
>> sample1=5*rand(15,1);
>> sample1'
ans =
    1.1892    3.0412    4.4574    4.6107    4.6531    1.6721    3.9533    2.4167    2.7374    1.9671    2.7069    1.6799    2.4536    4.0456    3.2760
>> [v,i]=max(sample1)
v =
    4.6531
i =
     5
>> n=15; B=10;
>> indices=randint(n,B,n)+1
indices =
    13     1    13     1     7     5     9    10     9     7
     2     6     1     6     8     3     1     3     2     4
    13    11     1     1     6     1    14    15     1     7
     6    10     1     5     9     7     4     1     5     5
    10    12     3     2     7     4    15    11    15    13
     2     2     3    14    12    10     8     4     9     9
    10    14     3     1     6    15     9    10     6     9
     2     6     6     9     1    11     3    13    12     8
     1     2     9     6    13     4     7     1    14    15
     9     9    14     6     1     9    14     3    15     3
     5    13     5     6    11    11    15     4     4    13
    10     8    12    14    14     7     7     4     9     7
     2     2     4    11     5     1     4     2     3     4
    13     5    10     7     8     4    13     3     9     1
    15     7     2     1     7     9    11    12     6     6
>> [out,i]=max(sample1(indices))
out =
    4.6531    4.6531    4.6531    4.6531    4.6531    4.6531    4.6107    4.6107    4.6531    4.6531

function out=bmax(B,orig)
%Function to bootstrap the maximum
[n,p]=size(orig);
indices=randint(n,B,n)+1;
[out,i]=max(orig(indices));
This is what the histogram looks like:

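A Python sketch of the same nonparametric bootstrap of the maximum (the function and variable names are mine, mirroring bmax):

```python
import numpy as np

def bmax(B, orig, rng=None):
    """Bootstrap the maximum: resample orig (length n) B times with replacement."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(orig)
    indices = rng.integers(0, n, size=(n, B))   # n x B matrix of resampled indices
    return np.asarray(orig)[indices].max(axis=0)

rng = np.random.default_rng(2)
sample1 = 5 * rng.random(15)
boot_max = bmax(10000, sample1, rng)
# The bootstrap maximum can only take values among the original observations,
# and it equals the observed maximum with probability 1-(1-1/n)^n, about 0.64.
```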
Knowing that the distribution function is restricted to a certain parametric family can help a lot.
Suppose that we want to bootstrap the maximum, knowing that the sample was in fact drawn from a uniform distribution with unknown upper bound $\theta$. We would be better off generating many samples from the Uniform$(0, \hat{\theta})$ distribution, with $\hat{\theta}$ estimated from the data, and looking at the distribution of the maximum; there will of course be a slight bias to the left, but the distribution will be of the right form.
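This parametric version can be sketched in Python as follows (taking the sample maximum as the plug-in estimate of the upper bound; names are mine):

```python
import numpy as np

rng = np.random.default_rng(3)
sample1 = 5 * rng.random(15)
theta_hat = sample1.max()        # plug-in estimate of the unknown upper bound

# Parametric bootstrap: draw B fresh samples of size n from Uniform(0, theta_hat)
B, n = 10000, 15
boot_max = (theta_hat * rng.random((n, B))).max(axis=0)
# Unlike the nonparametric version, boot_max varies continuously on (0, theta_hat);
# all values lie below theta_hat, which is the slight leftward bias mentioned above.
```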
Instead of creating new data by resampling, we can generate new data by drawing samples from the bivariate normal distribution; however, we have to plug in estimates of the means and the variance/covariance structure obtained from the original data.
Parametric Simulation
>> c=sqrt((1-.776^2)/.776^2)
c =
    0.8128

function [ys,zs]=gennorm(B,my,mz,sy,sz,c)
%Simulation of the bivariate normal distribution
%with sy^2,sz^2,rho the variances and correlation
%c=sqrt((1-rho^2)/rho^2), so that corr(ys,zs)=rho
%and (my,mz) as the means
%B simulations
rs=randn(B,2);
r1=rs(:,1);
r2=rs(:,2);
ys=my+sy*r1;
zs=mz+(sz/sqrt(1+c^2))*(r1+c*r2); %dividing by sqrt(1+c^2) gives var(zs)=sz^2

>> [ys,zs]=gennorm(1000,my,mz,sy,sz,c);
>> corrcoef(ys,zs)
ans =
    1.0000    0.7558
    0.7558    1.0000
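The same construction in Python (a sketch of mine): with $c = \sqrt{(1-\rho^2)/\rho^2}$, the pair $(r_1,\ (r_1 + c\,r_2)/\sqrt{1+c^2})$ built from independent standard normals has correlation $1/\sqrt{1+c^2} = \rho$.

```python
import numpy as np

def gennorm(B, my, mz, sy, sz, c, rng=None):
    """Generate B pairs from a bivariate normal with means (my, mz),
    standard deviations (sy, sz) and correlation rho = 1/sqrt(1+c^2)."""
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.standard_normal(B)
    r2 = rng.standard_normal(B)
    ys = my + sy * r1
    zs = mz + (sz / np.sqrt(1 + c**2)) * (r1 + c * r2)
    return ys, zs

rho = 0.776
c = np.sqrt((1 - rho**2) / rho**2)       # the constant from the notes, 0.8128
ys, zs = gennorm(100000, 600.27, 3.09, 41.79, 0.24, c)
r = np.corrcoef(ys, zs)[0, 1]            # close to 0.776
```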
This is what the points look like:
Parametric Bootstrap Simulations
>> y=law15(:,1);
>> z=law15(:,2);
>> mz=mean(z)
mz =
    3.0947
>> my=mean(y)
my =
  600.2667
>> var(z)
ans =
    0.0593
>> var(y)
ans =
   1.7468e+03
>> corrcoef(y,z)
ans =
    1.0000    0.7764
    0.7764    1.0000
>> cov(y,z)
ans =
   1.0e+03 *
    1.7468    0.0079
    0.0079    0.0001
>> sy=sqrt(var(y))
sy =
   41.7945
>> sz=sqrt(var(z))
sz =
    0.2435

corrs=zeros(1,1000);
for b=(1:1000)
    [ys,zs]=gennorm(15,my,mz,sy,sz,c);
    cor=corrcoef(ys,zs);
    corrs(b)=cor(1,2);
end
>> hist(corrs,40)
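The corresponding parametric bootstrap loop in Python (a sketch of mine, reusing the gennorm construction; the law school summary statistics below are copied from the session above):

```python
import numpy as np

def gennorm(B, my, mz, sy, sz, c, rng=None):
    """B pairs from a bivariate normal; correlation is 1/sqrt(1+c^2)."""
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.standard_normal(B)
    r2 = rng.standard_normal(B)
    return my + sy * r1, mz + (sz / np.sqrt(1 + c**2)) * (r1 + c * r2)

# Estimates from the original law school sample (n = 15)
my, mz = 600.2667, 3.0947
sy, sz = 41.7945, 0.2435
rho_hat = 0.7764
c = np.sqrt((1 - rho_hat**2) / rho_hat**2)

rng = np.random.default_rng(4)
corrs = np.empty(1000)
for b in range(1000):
    ys, zs = gennorm(15, my, mz, sy, sz, c, rng)
    corrs[b] = np.corrcoef(ys, zs)[0, 1]
# A histogram of corrs gives the parametric-bootstrap sampling
# distribution of the correlation coefficient.
```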
This is what the histogram looks like:
Here, you can compare this to the 'true' sampling distribution for the law school data, obtained by sampling without replacement 100,000 times from the original population of 82 observations.