Theoretically, we could give a complete enumeration of the bootstrap sampling distribution: all we need to know is how to compute the statistic for all the bootstrap resamples. There is a way of running through all the resamples, as indexed by points of the simplex; it is called a Gray code. As an example, consider the law school data used by Efron (1982). One can only hope to enumerate completely for moderate sample sizes. For larger sample sizes, partial enumeration through carefully spaced points is discussed in Diaconis and Holmes (1994a). Another idea is to use a small dose of randomness (not as much as Monte Carlo), by doing a random walk between close points so as to still be able to use updating procedures; this is detailed, in the case of exploring the tails of a bootstrap distribution, in Diaconis and Holmes (1994b).

It is easy to give a recursive description of such a list, starting from the list $L_1$ for $n = 1$ (namely $0, 1$). Given the list $L_n$, of length $2^n$, form $L_{n+1}$ by putting a zero before each entry in $L_n$, and a one before each entry in $L_n$. Concatenate these two lists by writing down the first followed by the second in reverse order. Thus from $L_1 = (0, 1)$ we get $L_2 = (00, 01, 11, 10)$, and then the list displayed above for $n = 3$. For $n = 4$ the list becomes: $0000$, $0001$, $0011$, $0010$, $0110$, $0111$, $0101$, $0100$, $1100$, $1101$, $1111$, $1110$, $1010$, $1011$, $1001$, $1000$.
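Assuming the recursion just described, here is a short Python sketch (the function name `gray_list` is our own choice, for illustration):

```python
def gray_list(n):
    """Build the list of n-bit Gray code strings recursively.

    Base case: the list for n = 1 is (0, 1).  Given the list for n - 1,
    prefix 0 to each entry, then prefix 1 to each entry of the reversed
    list, and concatenate the two halves.
    """
    if n == 1:
        return ["0", "1"]
    prev = gray_list(n - 1)
    return ["0" + s for s in prev] + ["1" + s for s in reversed(prev)]

print(gray_list(3))
# ['000', '001', '011', '010', '110', '111', '101', '100']
```

Successive strings in the list differ in exactly one bit, which is the defining property used throughout this section.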

Gray codes were invented by F. Gray (1939) for sending sequences of bits using a frequency transmitting device. If the ordinary integer indexing of the bit sequence is used, then a small change in reception, between 15 and 16 for instance (in binary, $01111$ to $10000$, where all five bits change), has a large impact on the bit string understood. Gray codes enable a coding that minimizes the effect of such an error. A careful description and literature review can be found in Wilf (1989). One crucial feature: there are non-recursive algorithms that provide the successor of a vector in the sequence in a simple way. This is implemented by keeping track of divisibility by 2 and of the step number.

One way to express this is as follows: let $b(m)$ be the binary representation of the integer $m$, and let $g(m)$ be the string of rank $m$ in the Gray code list. Then $g(m) = b(m) \oplus b(\lfloor m/2 \rfloor)$, where $\oplus$ denotes coordinatewise addition modulo 2. For example, when $n = 3$, the list above shows the string of rank $6$ is $101$; now $b(6) = 110$ and $b(3) = 011$, so $g(6) = 110 \oplus 011 = 101$. Thus from a given string in the Gray code and its rank one can compute the successor. There is a parsimonious implementation of this in the algorithm given in the appendix. Proofs of these results can be found in Wilf (1989).
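The rank formula can be checked in a couple of lines (a sketch; `gray_of_rank` is our own naming):

```python
def gray_of_rank(m, n):
    """n-bit Gray code string of rank m: g(m) = b(m) XOR b(floor(m/2))."""
    return format(m ^ (m >> 1), "0{}b".format(n))

# b(6) = 110 and b(3) = 011, so g(6) = 110 XOR 011 = 101
print(gray_of_rank(6, 3))  # prints 101
```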

Let $x_1, x_2, \ldots, x_n$ be the original data, supposed to be independent and identically distributed from an unknown distribution $F$ on a space $\mathcal{X}$. The bootstrap proceeds by supposing that replacement of $F$ by $\hat{F}_n$, the empirical distribution, can provide insight on sampling variability problems.

Practically, one proceeds by repeatedly choosing $n$ of the points $x_1, \ldots, x_n$ with replacement. This leads to bootstrap replications $x^* = (x_1^*, \ldots, x_n^*)$. There are $n^n$ such possible replications; however, these are not all different, and grouping together replications that generate the same multiset, we can characterize each resample by its weight vector $w = (w_1, \ldots, w_n)$, where $w_i$ is the number of times $x_i$ appears in the replication. Thus $\sum_{i=1}^n w_i = n$.

Let the space of compositions of $n$ into at most $n$ parts be
$$\mathcal{C}_n = \Big\{ w = (w_1, w_2, \ldots, w_n) : w_i \geq 0 \text{ integers}, \ \sum_{i=1}^n w_i = n \Big\}.$$

Thus $|\mathcal{C}_n| = \binom{2n-1}{n}$. We proceed by running through all the compositions in a systematic way. Note that the uniform distribution on resamples induces a multinomial distribution on $\mathcal{C}_n$:
$$P(w) = \binom{n}{w_1, w_2, \ldots, w_n} n^{-n} = \frac{n!}{w_1! \, w_2! \cdots w_n!} \, n^{-n}.$$
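The induced weight of a composition $w$ can be computed directly (a sketch; `multinomial_weight` is our own naming):

```python
from math import factorial

def multinomial_weight(w):
    """P(w) = n!/(w_1! ... w_n!) * n^(-n), the chance of the resample w."""
    n = sum(w)
    p = factorial(n) / n ** n
    for wi in w:
        p /= factorial(wi)
    return p

# for n = 3: the all-distinct resample (1,1,1) has weight 3!/27 = 2/9
print(multinomial_weight((1, 1, 1)))
```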

To form the exhaustive bootstrap distribution of a statistic $T$, one need only compute each of the $\binom{2n-1}{n}$ statistics $T(w)$ and associate the weight $P(w)$ with it. The shift from $n^n$ to $\binom{2n-1}{n}$ gives substantial savings. For the law school data ($n = 15$), $\binom{29}{15} = 77{,}558{,}760$, while $15^{15} \approx 4.4 \times 10^{17}$.
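As an illustration, with a toy three-point sample rather than the law school data (the helper `compositions` is our own stars-and-bars enumeration, and the statistic here is the mean), the exhaustive bootstrap distribution is assembled by weighting each composition:

```python
from itertools import combinations
from math import comb, factorial

def compositions(n, k):
    """All weak k-compositions of n, enumerated by stars and bars."""
    for bars in combinations(range(n + k - 1), k - 1):
        parts, prev = [], -1
        for b in bars:
            parts.append(b - prev - 1)   # stars between consecutive bars
            prev = b
        parts.append(n + k - 2 - prev)   # stars after the last bar
        yield tuple(parts)

x = [1.0, 2.0, 4.0]     # toy data
n = len(x)
dist = {}               # statistic value -> total bootstrap probability
for w in compositions(n, n):
    t = sum(wi * xi for wi, xi in zip(w, x)) / n   # resampled mean T(w)
    p = factorial(n) / n ** n                      # multinomial weight P(w)
    for wi in w:
        p /= factorial(wi)
    dist[t] = dist.get(t, 0.0) + p

assert len(dist) <= comb(2 * n - 1, n)   # at most C(5,3) = 10 distinct values
assert abs(sum(dist.values()) - 1.0) < 1e-12
```

The dictionary `dist` is the complete bootstrap distribution of the mean for this toy sample; no Monte Carlo error enters.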

Efficient updating avoids recomputing products of $n$ factorials for each of these weights. This is what makes the computation feasible. Gray codes generate compositions by changing two coordinates of the vector $w$ by one, one up and one down. This means that $P(w)$ can be easily updated by multiplying and dividing by the new coordinates. Similar procedures, discussed in Section C below, allow efficient updating of the statistics of interest.
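A minimal sketch of this constant-time update, assuming a step that raises coordinate `i` by one and lowers coordinate `j` by one (function names are ours):

```python
from math import factorial

def weight(w):
    """Full multinomial weight P(w), used here only to check the update."""
    n = sum(w)
    p = factorial(n) / n ** n
    for wi in w:
        p /= factorial(wi)
    return p

def update_weight(p, w, i, j):
    """One Gray code step: w[i] goes up by 1, w[j] down by 1.

    Since P(w) is proportional to 1/(w_1! ... w_n!), the new weight is
    obtained by dividing by the new w[i] and multiplying by the old w[j].
    """
    p = p * w[j] / (w[i] + 1)
    w[i] += 1
    w[j] -= 1
    return p

w = [3, 0, 0]
p = weight(w)                    # 1/27
p = update_weight(p, w, 1, 0)    # w is now [2, 1, 0]
assert abs(p - weight(w)) < 1e-12
```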

Following earlier work by Nijenhuis, Wilf and Knuth, Klingsberg (1982) gave methods of generating Gray codes for compositions. We will discuss this construction briefly here; details can be found in Klingsberg (1982) and Wilf (1989).

For $n = 3$, the algorithm produces the 10 compositions of 3 into at most 3 parts in the
following order:

$$(0,3,0),\ (1,2,0),\ (2,1,0),\ (3,0,0),\ (2,0,1),\ (1,1,1),\ (0,2,1),\ (0,1,2),\ (1,0,2),\ (0,0,3).$$

The easiest way to understand the generation of such a list is recursive: the construction of the $k$-compositions of $n$ can actually be done through that of the $(k-1)$-compositions of $n, n-1, \ldots, 0$.

For any $m$, the list of 2-compositions of $m$ is just $(0,m), (1,m-1), \ldots, (m,0)$, which is of length $m+1$. So the 2-compositions of 3 are
$$(0,3),\ (1,2),\ (2,1),\ (3,0),$$
the 2-compositions of 2 are
$$(0,2),\ (1,1),\ (2,0),$$
and the 2-compositions of 1 are
$$(0,1),\ (1,0).$$
Finally, there is only one 2-composition of 0: $(0,0)$.

The 3-out-of-3 list is obtained by appending a 0 to the first list above, a
1 to the second list, a 2 to the third list and a 3 to the fourth
list. These four lists are then concatenated by writing the first, the
second in reverse order, the third in its original order, followed by the
fourth in reverse order. Writing $l_k(m)$ for the list of $k$-compositions of $m$, $l_{k-1}(m) \cdot j$ for the list obtained by appending $j$ to each composition in $l_{k-1}(m)$, and a bar for reversal of a list, this is actually written:
$$l_3(3) = \big( l_2(3) \cdot 0,\ \overline{l_2(2) \cdot 1},\ l_2(1) \cdot 2,\ \overline{l_2(0) \cdot 3} \big)$$
and more generally
$$l_k(n) = \big( l_{k-1}(n) \cdot 0,\ \overline{l_{k-1}(n-1) \cdot 1},\ l_{k-1}(n-2) \cdot 2,\ \overline{l_{k-1}(n-3) \cdot 3},\ \ldots \big).$$

The same procedure leads to the $35 = \binom{7}{4}$ compositions of 4 into at most 4 parts.

The lists generated in this way have the property that two successive compositions differ only by $\pm 1$ in two coordinates.
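A recursive Python sketch of this construction (the name `comp_list` is ours, and the alternating reversal implements the concatenation rule just described):

```python
def comp_list(k, n):
    """Gray code list of the weak k-compositions of n, built recursively."""
    if k == 2:
        return [(i, n - i) for i in range(n + 1)]   # (0,n), (1,n-1), ..., (n,0)
    out = []
    for j in range(n + 1):
        block = [c + (j,) for c in comp_list(k - 1, n - j)]    # append j
        out.extend(reversed(block) if j % 2 else block)        # alternate direction
    return out

L = comp_list(3, 3)
print(len(L))   # 10
# successive compositions differ by +1 and -1 in exactly two coordinates
for a, b in zip(L, L[1:]):
    assert sorted(u - v for u, v in zip(a, b)) == [-1, 0, 1]
```

The same check passes for the 35 compositions produced by `comp_list(4, 4)`, confirming the Gray code property of the recursion.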

Klingsberg (1982) provides a simple nonrecursive algorithm that generates the successor of any composition in this Gray code. This is crucial for the implementation in the present paper. It requires that one keep track of the positions of the first two nonzero elements and an updating counter. Two versions of the algorithm are provided in the appendix.

We conclude this subsection by discussing a different algorithm due to Nijenhuis and Wilf (1978, pp. 40-46) which runs through the compositions in lexicographic order (reading from right to left). This algorithm was suggested for bootstrapping by Fisher and Hall (1991).

**The N.W. algorithm to run through compositions.**

(1) Set $w = (n, 0, \ldots, 0)$.

(2) Let $h$ be the first index with $w_h \neq 0$. Set $t = w_h$, then $w_h = 0$, $w_1 = t - 1$, $w_{h+1} = w_{h+1} + 1$.

(3) Stop when $w_n = n$.

For example, for $n = 4$ the N.W. algorithm runs through all 35 compositions of 4 into at most 4 parts in this lexicographic order.
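A generator-style Python sketch of this successor rule (names are ours; the comments map back to steps (1)-(3) above):

```python
def nw_compositions(n, k):
    """Run through all weak k-compositions of n in the N.W. order."""
    w = [n] + [0] * (k - 1)          # step (1): start from (n, 0, ..., 0)
    yield tuple(w)
    while w[-1] != n:                # step (3): stop when the last part equals n
        h = next(i for i, v in enumerate(w) if v != 0)   # first nonzero part
        t = w[h]
        w[h] = 0                     # step (2): reset, carry one unit right
        w[0] = t - 1
        w[h + 1] += 1
        yield tuple(w)

out = list(nw_compositions(4, 4))
print(len(out))   # 35
```

Unlike the Gray code, successive compositions here can differ in many coordinates, which is why the updating tricks of the previous subsection do not apply.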

The data consist of 15 pairs of numbers (GPA, LSAT) for a sample of American law schools. The correlation coefficient is $\hat{\rho} = 0.776$.

I gave out in class handouts of the complete bootstrap distribution:
**Figure 1.1** Exhaustive Bootstrap for the Correlation Coefficient of the
Law School Data

And a Monte Carlo study with