This is joint work with Geoffrey Chinot, Felix Kuchelmeister and Matthias Löffler.
We consider the classification problem where one observes a design matrix X ∈ R^{n×p} and a binary response vector Y = sign(Xβ* + ξ) ∈ {±1}^n. Here β* ∈ R^p is a vector of unknown coefficients with ‖β*‖_2 = 1, and ξ ∼ N(0, σ^2 I) ∈ R^n is an unobservable noise vector independent of X. We will present some theoretical results on the misclassification error of a class of estimators β̂ of β* based on L1-regularization or L1-interpolation. The emphasis of this talk will be on the interpolating estimator. Empirical studies have observed that classification algorithms achieving zero training error can nevertheless perform well on test sets. We aim to contribute to a theoretical understanding of this phenomenon in the high-dimensional regime (i.e. p ≫ n). To allow for a small test error, we focus on the case where σ^2 is small. In the special setting of i.i.d. Gaussian design, we examine the minimum-L1-norm interpolator, or max-margin classifier, and its rate of convergence under L1-sparsity assumptions. Related is the noisy one-bit compressed sensing problem, where we apply the algorithm of Plan and Vershynin [2013] and (re-)establish rates under L0- and L1-sparsity conditions.
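To make the interpolating estimator concrete: for separable data, the minimum-L1-norm interpolator can be written as the linear program min ‖β‖_1 subject to Y_i ⟨X_i, β⟩ ≥ 1 for all i, whose rescaled solution is the max-L1-margin classifier. The Python sketch below sets up this LP with scipy.optimize.linprog on synthetic data drawn from the model above; it is an illustrative reconstruction, not the speakers' code, and the sample size, sparsity level s and noise level σ are placeholder choices.

# A minimal sketch, not the speakers' implementation: the minimum-L1-norm
# interpolator cast as a linear program via the standard split u >= |β|.
# All problem sizes below are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p, s, sigma = 50, 200, 3, 0.1  # illustrative: p >> n, s-sparse signal

# Synthetic data from the model Y = sign(X β* + ξ) with ‖β*‖_2 = 1.
beta_star = np.zeros(p)
beta_star[:s] = 1.0 / np.sqrt(s)
X = rng.standard_normal((n, p))
Y = np.sign(X @ beta_star + sigma * rng.standard_normal(n))

# LP in z = (β, u): minimize Σ_j u_j subject to
#   β - u ≤ 0,  -β - u ≤ 0  (so u_j ≥ |β_j|),  and  -Y_i X_i β ≤ -1.
I = np.eye(p)
c = np.concatenate([np.zeros(p), np.ones(p)])
A_ub = np.block([
    [I, -I],
    [-I, -I],
    [-(Y[:, None] * X), np.zeros((n, p))],
])
b_ub = np.concatenate([np.zeros(2 * p), -np.ones(n)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * p + [(0, None)] * p)
beta_hat = res.x[:p]

# The solution interpolates (zero training error) and, for small σ,
# should align well with the direction of β*.
print("training errors:", int(np.sum(np.sign(X @ beta_hat) != Y)))
print("cosine with β*:", beta_hat @ beta_star / np.linalg.norm(beta_hat))

The one-bit compressed sensing program of Plan and Vershynin [2013] is a linear program of a similar flavour, with the margin constraints replaced by sign-consistency constraints Y_i ⟨X_i, β⟩ ≥ 0 together with a normalization of Σ_i Y_i ⟨X_i, β⟩.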
References
Y. Plan and R. Vershynin. One-bit compressed sensing by linear programming. Communications on Pure and Applied Mathematics, 66(8):1275–1297, 2013.