We study linear subset regression in the context of a high-dimensional linear model. Consider y = a + b'z + e with univariate response y and a d-vector of random regressors z, and a submodel where y is regressed on a set of p explanatory variables given by x = M'z, for some d × p matrix M. Here, 'high-dimensional' means that the number d of available explanatory variables in the overall model is much larger than the number p of variables in the submodel. In this paper, we present Pinsker-type results for prediction of y given x. In particular, we show that the mean squared prediction error of the best linear predictor of y given x is close to the mean squared prediction error of the corresponding Bayes predictor E[y|x], provided only that p/log(d) is small. We also show that the mean squared prediction error of the (feasible) least-squares predictor computed from n independent observations of (y, x) is close to that of the Bayes predictor, provided only that both p/log(d) and p/n are small. Our results hold uniformly in the regression parameters and over large collections of distributions for the design variables z.
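The setting can be made concrete with a small simulation. The sketch below is purely illustrative and is not the construction from the paper: the dimensions d, p, n, the noise level sigma, the identity design covariance, and the choice of M as selection of the first p coordinates of z are all assumed for convenience. Note that with a Gaussian design the best linear predictor of y given x and the Bayes predictor E[y|x] coincide exactly, so the sketch can only illustrate the second claim, namely that the out-of-sample mean squared prediction error of the feasible least-squares predictor approaches the optimal MSPE when p/n is small.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical values, not taken from the paper).
d, p, n = 1000, 5, 200                 # ambient dim, submodel size, sample size
sigma = 1.0                            # noise standard deviation

a = 0.5                                # intercept
b = rng.normal(size=d) / np.sqrt(d)    # regression coefficients of the full model

def draw(m):
    """Draw m observations of (y, x), with x = M'z selecting the first p coordinates of z."""
    z = rng.normal(size=(m, d))        # Gaussian design with identity covariance (assumption)
    y = a + z @ b + sigma * rng.normal(size=m)
    return y, z[:, :p]

# Population MSPE of the best linear predictor of y given x.
# With identity design covariance this is sigma^2 plus the energy of the
# omitted coefficients, and in this Gaussian setting it equals the risk
# of the Bayes predictor E[y|x].
mspe_bayes = sigma**2 + np.sum(b[p:]**2)

# Feasible least-squares predictor fitted on n training observations.
y_tr, x_tr = draw(n)
X = np.column_stack([np.ones(n), x_tr])
coef, *_ = np.linalg.lstsq(X, y_tr, rcond=None)

# Out-of-sample MSPE estimated on a large fresh sample.
y_te, x_te = draw(100_000)
pred = np.column_stack([np.ones(len(y_te)), x_te]) @ coef
mspe_ols = np.mean((y_te - pred)**2)

print(f"Bayes / best-linear MSPE : {mspe_bayes:.4f}")
print(f"least-squares MSPE (p/n = {p/n:.3f}) : {mspe_ols:.4f}")

Rerunning with larger p (say p = 50 with the same n) visibly widens the gap between the two printed numbers, in line with the p/n condition in the abstract.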
Please sign up for meetings here: docs.google.com/spreadsheets/d/1GRwPBmtpUwstC4fdLZrnxfnARNYHedHykoRZG4Xq2Bo/edit#gid=0