**Maxwell Jacob Grazier G'Sell**

- Ph.D. Candidate, Department of Statistics, Stanford University
- Advisor: Robert Tibshirani
- Email: maxg(at)stanford.edu
- Office: 235 Sequoia Hall
- Address: 390 Serra Mall; Stanford University; Stanford, CA 94305

### Education

- B.S. in Physics and Applied Math, California Institute of Technology, 2009
- Ph.D. Candidate in Statistics, Stanford University, 2009-Present

### Awards and Honors

- Stanford Statistics Department Teaching Assistant Award, 2013
- National Science Foundation Graduate Research Fellowship, 2009-2014
- National Science Foundation VIGRE Fellowship, 2009-2011
- California Institute of Technology, Axline Scholarship, 2005-2009

### Research

I am interested in the development of statistical methodology, particularly methods that draw on ideas from optimization and computer science, as well as applications of statistics to the sciences and to sensor or instrument data. Lately, I have been working on inference problems that arise in regularized regression, as well as the application of optimization to assessing estimator sensitivity and robustness.

I have also been fortunate to enjoy collaborations within the university and with industry. This has included working with researchers in the physical sciences and the medical school, as well as with industry partners on statistical aspects of business decisions.

#### Publications

- Max Grazier G'Sell, Jonathan Taylor, Robert Tibshirani (2013). **Adaptive testing for the graphical lasso.** Submitted. [preprint]

  We consider tests of significance in the setting of the graphical lasso for inverse covariance matrix estimation. We propose a simple test statistic based on a subsequence of the knots in the graphical lasso path. We show that this statistic has an exponential asymptotic null distribution, under the null hypothesis that the model contains the true connected components.

  Though the null distribution is asymptotic, we show through simulation that it provides a close approximation to the true distribution at reasonable sample sizes. The test thus provides a simple, tractable assessment of the significance of new edges as they are introduced into the model. Finally, we show connections between our results and other results for regularized regression, as well as extensions of our results to other correlation-matrix-based methods like single-linkage clustering.

- Max Grazier G'Sell, Stefan Wager, Alexandra Chouldechova, Robert Tibshirani (2013). **False Discovery Rate Control for Sequential Selection Procedures, with Application to the Lasso.** [preprint]

  We consider a hypothesis testing scenario where we have a p-value p_j for each of a set of hypotheses H_1, H_2, ..., H_m, and these hypotheses must be rejected in a sequential manner. Because of the sequential setup, the standard approach of Benjamini and Hochberg cannot be applied. We propose two novel procedures called ForwardStop and StrongStop that are closely related to the Benjamini-Hochberg procedure, and prove that their False Discovery Rate is controlled at a pre-specified level in the sequential setting. This paper is motivated by recent work of Lockhart et al. deriving p-values for forward adaptive regression in the lasso framework. We apply ForwardStop and StrongStop to the lasso, and also propose two specialized procedures for it.
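As I understand it, the ForwardStop rule above has a simple closed form: reject the first k-hat hypotheses, where k-hat is the largest k for which the running mean of -log(1 - p_i) over the first k p-values stays at or below the target level alpha. A minimal sketch under that reading (the p-values and alpha below are invented for illustration):

```python
import numpy as np

def forward_stop(pvals, alpha=0.1):
    """ForwardStop-style rule: return the number of rejections k_hat,
    the largest k whose running mean of -log(1 - p_i) is <= alpha."""
    pvals = np.asarray(pvals, dtype=float)
    # -log1p(-p) computes -log(1 - p) accurately for small p.
    running_mean = np.cumsum(-np.log1p(-pvals)) / np.arange(1, len(pvals) + 1)
    passing = np.nonzero(running_mean <= alpha)[0]
    return int(passing[-1] + 1) if passing.size else 0

# Strong signals early in the sequence, nulls later:
print(forward_stop([1e-4, 1e-3, 0.01, 0.4, 0.8, 0.9], alpha=0.1))  # rejects 3
```

Because the running mean is monotone in the accumulated -log(1 - p_i) terms, a stretch of large (null-looking) p-values eventually stops the procedure even if a later p-value happens to be small, which is what makes the rule sequential rather than a Benjamini-Hochberg-style global sort.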

- Max Grazier G'Sell, Shai S. Shen-Orr, Robert Tibshirani (2013). **Sensitivity Analysis for Inference with Partially Identifiable Covariance Matrices.** Accepted to Computational Statistics. [preprint]

  In some multivariate problems with missing data, pairs of variables exist that are never observed together. For example, some modern biological tools can produce data of this form. As a result of this structure, the covariance matrix is only partially identifiable, and point estimation requires that identifying assumptions be made. These assumptions can introduce an unknown and potentially large bias into the inference. This paper presents a method based on semidefinite programming for automatically quantifying this potential bias by computing the range of possible equal-likelihood inferred values for convex functions of the covariance matrix. We focus on the bias of missing value imputation via conditional expectation and show that our method can give an accurate assessment of the true error in cases where estimates based on sampling uncertainty alone are overly optimistic.
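The partial-identifiability phenomenon in this abstract can be seen in a toy example: if X2 and X3 are never observed together, their correlation is pinned down only by the requirement that the full correlation matrix be positive semidefinite. The sketch below brute-forces that feasible range with a grid scan; the correlation values are invented, and this is only an illustration of the identifiability gap, not the paper's semidefinite-programming method:

```python
import numpy as np

# Hypothetical setup: X1 is observed alongside both X2 and X3,
# but X2 and X3 are never observed together, so r23 is unidentified.
r12, r13 = 0.8, 0.6

# Scan candidate values of r23, keeping those that yield a PSD matrix.
grid = np.linspace(-1.0, 1.0, 2001)
feasible = [r for r in grid
            if np.linalg.eigvalsh(np.array([[1.0, r12, r13],
                                            [r12, 1.0, r],
                                            [r13, r, 1.0]])).min() >= -1e-9]
lo, hi = min(feasible), max(feasible)
print(round(lo, 3), round(hi, 3))  # feasible range for the unseen r23
```

For a 3x3 correlation matrix this range also has a closed form, r12*r13 plus or minus sqrt((1 - r12^2)(1 - r13^2)), so here the data alone only constrain r23 to a wide interval: any inference that depends on a single imputed value of r23 inherits that much potential bias.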

- Max Grazier G'Sell, Trevor Hastie, Robert Tibshirani (2013). **False Variable Selection Rates in Regression.** In revision. [pre-revision preprint]

  There has been recent interest in extending the ideas of False Discovery Rates (FDR) to variable selection in regression settings. Traditionally the FDR in these settings has been defined in terms of the coefficients of the full regression model. Recent papers have struggled with controlling this quantity when the predictors are correlated. This paper shows that this full-model definition of FDR suffers from unintuitive and potentially undesirable behavior in the presence of correlated predictors. We propose a new false selection error criterion, the False Variable Rate (FVR), that avoids these problems and behaves in a more intuitive manner. We discuss the behavior of this criterion and how it compares with the traditional FDR, and present guidelines for determining which is appropriate in a particular setting. Finally, we present a simple estimation procedure for FVR in stepwise variable selection. We analyze the performance of this estimator and draw connections to recent estimators in the literature.

### Teaching

I have been a teaching assistant for several classes at Stanford:

- Introduction to Statistical Methods (Stats 60 at Stanford): An introductory course in statistics, requiring precalculus. This is primarily an undergraduate course, and includes weekly lectures by the teaching assistants.
- Theory of Probability (Stats 116 at Stanford): An introductory course in probability. This is primarily an undergraduate course, and includes weekly lectures by the teaching assistants.
- Paradigms for Computing with Data (Stats 290 at Stanford): An in-depth course on statistical computing and programming, with a focus on R. The course covers advanced functionality of R, including common R packages, writing R packages, and interfacing R with C and Python. This is primarily a graduate-level course.
- Statistical Learning (Stats 315a at Stanford): An overview of supervised learning, including linear regression; sparse regression; classification (LDA, logistic regression, SVMs); basis expansions and splines; kernel methods; generalized additive models; Gaussian mixtures and the EM algorithm; model assessment and selection; cross-validation; and the bootstrap. This is primarily a graduate-level course.
- Observational Studies (Stats 355 at Stanford): A course on the design and analysis of observational studies, covering the potential outcomes framework, randomized experiments, controlling for observable bias, assessing sensitivity to unobserved bias, instrumental variables, and observational study design. This was primarily a graduate-level course.

I was also a teaching assistant at the California Institute of Technology for several statistics courses and a class on cooking.

### Personal

In addition to statistics, I am interested in computer science, programming, and robotics. I enjoy reading, cooking, and woodworking, and I am slowly trying to learn to sketch.