Multi-armed bandits are a mathematical framework for studying sequential decision-making problems with partial feedback. Recommendation, personalization, hyperparameter tuning, and clinical trials are examples of application areas that use this framework. In this talk, I will introduce some basic algorithms for solving bandit problems and show applications of these algorithms to digital markets.
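As a concrete illustration of the kind of basic algorithm the talk covers, here is a minimal sketch of the epsilon-greedy strategy on Bernoulli-reward arms; the function name, parameters, and arm means are illustrative and not taken from the talk itself:

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, horizon=2000, seed=0):
    """Run epsilon-greedy on Bernoulli arms with the given success probabilities.

    With probability epsilon the learner explores a uniformly random arm;
    otherwise it exploits the arm with the highest estimated mean reward.
    Returns the per-arm reward estimates and the total reward collected.
    """
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k        # number of pulls per arm
    estimates = [0.0] * k   # running mean reward per arm
    total_reward = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(k)                           # explore
        else:
            arm = max(range(k), key=lambda a: estimates[a])  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # Incremental mean update: only partial feedback (the pulled arm) is seen.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, total_reward

estimates, total = epsilon_greedy([0.2, 0.5, 0.8])
```

After a few thousand rounds the estimate for the best arm typically dominates, so the exploit step concentrates pulls on it; this exploration-exploitation trade-off is the core tension in all bandit algorithms.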