The error or variability of machine learning algorithms is often assessed by repeatedly re-fitting a model with different weighted versions of the observed data. The
ubiquitous tools of cross-validation (CV) and the bootstrap are examples of this technique.
These methods are powerful in large part due to their model agnosticism but can be slow
to run on modern, large data sets due to the need to repeatedly re-fit the model. We use a
linear approximation to the dependence of the fitting procedure on the weights, producing
results that can be faster than repeated re-fitting by orders of magnitude. This linear
approximation is sometimes known as the “infinitesimal jackknife” (IJ) in the statistics
literature, where it has mostly been used as a theoretical tool to prove asymptotic results.
We provide explicit finite-sample error bounds for the infinitesimal jackknife in terms of a
small number of simple, verifiable assumptions. Without further modification, however, the IJ deteriorates in accuracy in high dimensions and incurs a running time roughly cubic in the dimension. We then show how dimensionality reduction can be used to successfully run the IJ in high dimensions in the case of leave-one-out cross-validation (LOOCV). Specifically, we consider L1 regularization for generalized linear
models. We prove that, under mild conditions, the resulting LOOCV approximation exhibits
computation time and accuracy that depend on the (small) recovered support size rather
than the full dimension. Simulated and real-data experiments support our theory
—
Recent years have witnessed increased cross-fertilisation between the fields of statistics and computer science. In the era of Big Data, statisticians increasingly face the question of how to guarantee prescribed levels of inferential accuracy within a given time budget. Computer scientists, in turn, are progressively modelling data as noisy measurements from an underlying population, exploiting the statistical regularities of the data to save on computation.
This cross-fertilisation has led to the development and understanding of many of the algorithmic paradigms that underpin modern machine learning, including gradient descent methods and their generalisation guarantees, implicit regularisation strategies, and high-dimensional statistical models and algorithms.
About the event
This event will bring together experts to talk about advances at the intersection of statistics and computer science in machine learning. This two-day conference will focus on the underlying theory and the links with applications, and will feature 12 talks by leading international researchers.
The intended audience is faculty, postdoctoral researchers, and Ph.D. students from the UK/EU; the event aims to introduce them to this area of research and to the Turing.