
Examples


The central idea in compressive sampling is that the number of samples we need to capture a signal depends primarily on its structural content, rather than its bandwidth.

As a concrete example, suppose that f is a discrete signal of length 256 composed of 16 complex sinusoids whose frequencies, phases, and amplitudes are unknown.  As such, the discrete Fourier transform (DFT) of f has 16 nonzero components (illustrated below).
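As an illustrative sketch (not part of the L1-MAGIC package), a signal of this form can be synthesized in a few lines of Python; the frequencies, amplitudes, and phases below are drawn at random, and we verify that the DFT is indeed 16-sparse:

```python
import numpy as np

rng = np.random.default_rng(0)
N, B = 256, 16

# B distinct integer frequencies with random complex amplitudes (magnitude and phase)
freqs = rng.choice(N, size=B, replace=False)
amps = rng.uniform(1, 2, B) * np.exp(2j * np.pi * rng.uniform(size=B))

n = np.arange(N)
f = sum(a * np.exp(2j * np.pi * k * n / N) for a, k in zip(amps, freqs))

# Each sinusoid lands in exactly one DFT bin, so the spectrum is B-sparse
F = np.fft.fft(f) / N
support = np.flatnonzero(np.abs(F) > 1e-8)
print(len(support))  # 16
```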


Fourier transform of f


How many time-domain samples do we need to capture f?  Since there are no restrictions on the frequencies in f, it is not at all bandlimited; any of the 256 components of the DFT can be nonzero.  Shannon/Nyquist theory then says that to recover this signal (via linear "sinc" interpolation), we need all 256 samples in the time domain.

Now suppose that we are given only 80 samples.  Below is a plot of f in the time domain; the observed samples are shown in red.


f in the time domain

Of course, we will not be able to "fill in" the missing samples using sinc interpolation (or any other kind of linear method). 
But since we know that the signal we wish to recover is sparse, we can take a different approach to the problem.  Given these 80 observed samples, the set of length-256 signals whose samples match our observations is an affine subspace of dimension 256-80=176.  From the candidates in this set, we choose the one whose DFT has minimum L1 norm; that is, the one for which the sum of the magnitudes of the Fourier coefficients is smallest.  In doing so, we recover the signal exactly!
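The minimum-L1 recovery step can be sketched numerically.  The sketch below is a simplified, real-valued analogue of the experiment above (it is not L1-MAGIC code): the signal is sparse in the DCT domain rather than the complex DFT domain, which lets us pose basis pursuit as an ordinary linear program, writing the coefficient vector as a difference of two nonnegative vectors and solving with scipy.optimize.linprog.  The problem sizes (N=64, B=4, K=40) are chosen small so the LP solves instantly:

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, B, K = 64, 4, 40

# A signal that is B-sparse in the DCT domain
c = np.zeros(N)
c[rng.choice(N, B, replace=False)] = rng.normal(size=B)

Phi = idct(np.eye(N), axis=0, norm='ortho')  # synthesis matrix: f = Phi @ c
f = Phi @ c

# Observe only K of the N time-domain samples
samples = rng.choice(N, K, replace=False)
A, b = Phi[samples], f[samples]

# Basis pursuit: min ||c||_1 subject to A c = b,
# as an LP with c = u - v, u >= 0, v >= 0
res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=b, method='highs')
c_rec = res.x[:N] - res.x[N:]
print(np.max(np.abs(c_rec - c)))  # tiny: the recovery is exact up to solver precision
```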


Recovered signal, time domain


Recovered signal, frequency domain

In general, if there are B sinusoids in the signal, we will be able to recover it via L1 minimization from on the order of B log N samples (see the "Robust Uncertainty Principles..." paper for a full exposition).
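As a quick sanity check on the example above (using the natural logarithm, and ignoring the constant in "on the order of"):

```python
import numpy as np

B, N = 16, 256
print(B * np.log(N))  # roughly 89, comparable to the 80 samples used above
```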


The framework extends easily to more general types of measurements (in place of time-domain samples) and more general types of sparsity (rather than sparsity in the frequency domain).   Suppose that instead of taking K samples in the time domain, we project the signal onto a randomly chosen K-dimensional subspace.  Then if f is B-sparse in a known orthobasis, and K is on the order of B log N, f can be recovered without error by solving an L1 minimization problem.
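As a small numerical sketch of this (again illustrative Python, not L1-MAGIC), take the orthobasis to be the standard basis, so f itself is B-sparse, and let the measurements be K random Gaussian projections; the same LP formulation of basis pursuit recovers f from y = A f:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, B, K = 64, 4, 32

# f is B-sparse directly in the standard basis for simplicity
f = np.zeros(N)
f[rng.choice(N, B, replace=False)] = rng.normal(size=B)

# Random projection onto a K-dimensional subspace (Gaussian measurement matrix)
A = rng.normal(size=(K, N)) / np.sqrt(K)
y = A @ f

# Basis pursuit as an LP: min ||f||_1 subject to A f = y
res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y, method='highs')
f_rec = res.x[:N] - res.x[N:]
print(np.max(np.abs(f_rec - f)))  # tiny: exact recovery from K << N measurements
```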

Here is an example.  The 1 megapixel image below has a perfectly sparse wavelet expansion; it is a superposition of 25,000 Daubechies-8 wavelets.



Instead of observing this image directly, say we take 96,000 "random measurements"; that is, we are given the projection of the image onto a randomly chosen 96,000-dimensional subspace.  We then look for the image whose wavelet transform has the smallest L1 norm among all images with the same projection onto this subspace (i.e. whose measurements agree with what we have observed).  The result is below; as we can see, the image has been reconstructed perfectly.




The lesson here is that the number of measurements needed to acquire this image is dictated not by its resolution (1 million pixels), but by its inherent complexity (25,000 nonzero wavelet coefficients).

In the "real world", signals and images are not perfectly sparse, and there is always some degree of uncertainty in the measurements.   Fortunately, the recovery procedure can be made robust to these types of perturbations (see the stability and statistical estimation papers for details).

Another imaging example is given below: we observe 25,000 random measurements of the 65,536-pixel image below and quantize each one to a single digit (that is, to one of 10 predefined levels).  The recovery (performed by solving a relaxed recovery problem; see the stability paper for details) has an error on the same order as the quantization error.  Despite being highly nonlinear, the recovery procedure is exceptionally robust.
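The flavor of such a relaxed recovery can be sketched on a small synthetic problem (illustrative Python, not the L1-MAGIC solver).  Here the relaxation is the L1-penalized least-squares problem min lam*||x||_1 + (1/2)*||A x - y||^2, solved by plain iterative soft thresholding (ISTA); the measurements are quantized to a grid of step delta before recovery:

```python
import numpy as np

rng = np.random.default_rng(3)
N, B, K = 64, 4, 40

f = np.zeros(N)
f[rng.choice(N, B, replace=False)] = rng.normal(size=B)

A = rng.normal(size=(K, N)) / np.sqrt(K)
delta = 0.1
y = delta * np.round(A @ f / delta)  # quantize each measurement to a grid of step delta

# ISTA for min lam*||x||_1 + 0.5*||A x - y||^2;
# step size 1/sigma_max(A)^2 guarantees convergence
lam, step = 0.02, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(N)
for _ in range(2000):
    x = x - step * A.T @ (A @ x - y)            # gradient step on the quadratic term
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold

print(np.linalg.norm(x - f) / np.linalg.norm(f))  # small relative error despite quantization
```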
65k pixel image
25k random measurements
quantized measurements
recovered image



The L1-MAGIC package includes MATLAB code that solves these (and other) recovery problems.