Gradient-free stochastic optimization

This talk will deal with optimization problems in a statistical learning setup where the learner has no access to unbiased estimators of the gradient of the objective function. This setting includes stochastic optimization with a zero-order oracle, as well as continuum bandit and contextual continuum bandit problems. I’ll give an overview of recent results on minimax-optimal algorithms and fundamental limits for these problems. A small illustrative sketch of the zero-order setting follows.
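To make the setting concrete, below is a minimal sketch (not taken from the talk) of how a learner can optimize with only noisy function values: a randomized two-point query is turned into a surrogate gradient and plugged into a projected gradient-type update. All names, schedules, and parameter choices here (`two_point_gradient_estimate`, the step and perturbation decay, the projection radius) are illustrative assumptions, not the specific algorithms or rates discussed in the talk.

```python
import numpy as np

def two_point_gradient_estimate(f, x, h, rng):
    """Randomized two-point surrogate gradient from zero-order queries only.

    Draws a direction u uniformly on the unit sphere and uses the symmetric
    difference (f(x + h*u) - f(x - h*u)); no gradient access is required.
    The resulting estimator is biased, with bias controlled by h.
    """
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return d / (2.0 * h) * (f(x + h * u) - f(x - h * u)) * u

def zero_order_sgd(f, x0, n_iter=2000, step=0.5, h0=0.1, radius=5.0, seed=0):
    """Projected gradient-type method driven by the two-point estimator.

    `f` returns noisy function evaluations (the zero-order oracle); iterates
    are kept in a Euclidean ball of the given radius by projection.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for t in range(1, n_iter + 1):
        h = h0 / t ** 0.25            # shrink the perturbation over time
        g = two_point_gradient_estimate(f, x, h, rng)
        x = x - (step / t) * g        # decreasing step size
        nrm = np.linalg.norm(x)
        if nrm > radius:              # projection onto the ball
            x *= radius / nrm
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x_star = np.array([1.0, -2.0, 0.5])
    # Noisy zero-order oracle: only function values, corrupted by noise.
    noisy_f = lambda x: np.sum((x - x_star) ** 2) + 0.1 * rng.standard_normal()
    x_hat = zero_order_sgd(noisy_f, x0=np.zeros(3))
    print("estimate:", np.round(x_hat, 2), " target:", x_star)
```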