This is joint work with Geoffrey Chinot, Felix Kuchelmeister and Matthias Löffler.
We consider the classification problem where one observes a design matrix X ∈ R^{n×p} and a binary response vector Y = sign(Xβ* + ξ) ∈ {±1}^n. Here β* ∈ R^p is a vector of unknown coefficients with ‖β*‖₂ = 1, and ξ ∼ N(0, σ²I) ∈ R^n is an unobservable noise vector independent of X. We will present theoretical results on the misclassification error of a class of estimators β̂ of β* based on L1-regularization or L1-interpolation. The emphasis in this talk will be on the interpolating estimator. Empirical studies have observed that classification algorithms achieving zero training error can nevertheless perform well on test sets. We aim to contribute to a theoretical understanding of this phenomenon in the high-dimensional situation (i.e. p ≫ n). To allow for small test error, we focus on the case where σ² is small. In the special setting of i.i.d. Gaussian design, we examine the minimum L1-norm interpolator, or max-margin classifier, and its rate of convergence under L1-sparsity assumptions. A related problem is noisy one-bit compressed sensing, where we apply the algorithm of Plan and Vershynin [2013] and (re-)establish rates under L0- and L1-sparsity conditions.
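As a minimal illustration of this setup, the following Python sketch generates an i.i.d. Gaussian design with noisy sign responses and computes the minimum L1-norm interpolator by linear programming. All concrete choices here (the sizes n, p, the sparsity level s, the noise level σ, and the particular sparse β*) are hypothetical and chosen only for illustration; they are not taken from the talk.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical sizes in the high-dimensional regime p >> n
n, p, s, sigma = 50, 200, 3, 0.1

# s-sparse unit-norm coefficient vector beta* (illustrative choice)
beta_star = np.zeros(p)
beta_star[:s] = 1.0 / np.sqrt(s)

# i.i.d. Gaussian design and noisy binary responses Y = sign(X beta* + xi)
X = rng.standard_normal((n, p))
y = np.sign(X @ beta_star + sigma * rng.standard_normal(n))

# Minimum L1-norm interpolator:
#   minimize ||beta||_1  subject to  y_i <x_i, beta> >= 1  for all i.
# Standard LP reformulation with beta = u - v, u >= 0, v >= 0:
c = np.ones(2 * p)                       # objective: sum(u) + sum(v) = ||beta||_1
A_ub = -y[:, None] * np.hstack([X, -X])  # -y_i x_i^T (u - v) <= -1
b_ub = -np.ones(n)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))

beta_hat = res.x[:p] - res.x[p:]
beta_hat /= np.linalg.norm(beta_hat)     # only the direction matters for sign(X beta)
print("correlation with beta*:", beta_hat @ beta_star)

The margin constraints y_i ⟨x_i, β⟩ ≥ 1 enforce zero training error (since p ≫ n, the data are separable with high probability), and among all such interpolating directions the LP picks the one of minimum L1-norm, which is, up to scaling, the L1 max-margin classifier discussed above.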
References
Y. Plan and R. Vershynin. One-bit compressed sensing by linear programming. Communications on Pure and Applied Mathematics, 66(8):1275–1297, 2013.