Uniformly Valid Confidence Intervals Post-Model-Selection

Please sign up for meetings at the link below:

docs.google.com/spreadsheets/d/1f8qVDhJVjjzt1slDbwiEwwKS77Bb4Yk_kg4vCM4ObfI/edit?usp=sharing

Abstract

We suggest general methods to construct asymptotically uniformly valid confidence intervals post-model-selection. The constructions are based on principles recently proposed by Berk et al. (2013). In particular, the candidate models used can be misspecified, the target of inference is model-specific, and coverage is guaranteed for any data-driven model selection procedure. After developing a general theory, we apply our methods to practically important situations where the candidate set of models, from which a working model is selected, consists of fixed-design homoskedastic or heteroskedastic linear models, or of binary regression models with general link functions. In an extensive simulation study, we find that the proposed confidence intervals perform remarkably well, even when compared to existing methods that are tailored only for specific model selection procedures.
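To give a flavor of the simultaneity principle of Berk et al. (2013) that the abstract refers to: post-selection validity over all data-driven selection rules can be obtained by replacing the usual per-coefficient critical value with the quantile of the maximum absolute t-statistic over all candidate submodels. The sketch below is a hypothetical Monte Carlo illustration of that idea (not the paper's actual procedure), for a fixed design with known error variance sigma = 1; the function name `posi_constant` and all parameters are this example's own inventions.

```python
import numpy as np
from itertools import combinations

def posi_constant(X, n_sim=2000, alpha=0.05, rng=None):
    """Monte Carlo sketch of a PoSI-style simultaneous critical value
    for design X, assuming Gaussian errors with known sigma = 1."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    # Enumerate all non-empty submodels (subsets of the p columns).
    submodels = [list(c) for k in range(1, p + 1)
                 for c in combinations(range(p), k)]
    # Precompute, for each submodel M, the linear map taking y to the
    # vector of t-statistics of the coefficients in M.
    t_maps = []
    for M in submodels:
        XM = X[:, M]
        G = np.linalg.inv(XM.T @ XM)
        A = G @ XM.T                # least-squares coefficient map
        se = np.sqrt(np.diag(G))    # standard errors when sigma = 1
        t_maps.append(A / se[:, None])
    # Simulate the max |t| over all submodels under y ~ N(0, I).
    maxima = np.empty(n_sim)
    for s in range(n_sim):
        eps = rng.standard_normal(n)
        maxima[s] = max(np.max(np.abs(T @ eps)) for T in t_maps)
    return np.quantile(maxima, 1 - alpha)
```

The returned constant exceeds the familiar per-coefficient value of about 1.96, which is exactly the price paid for coverage that holds no matter how the working model was selected. Note the exhaustive enumeration of submodels is exponential in p, so a sketch like this is only feasible for small designs.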

Link to paper:

arxiv.org/abs/1611.01043