Empirical risk minimization may lead to poor prediction performance when the target distribution differs from the source populations. This talk discusses leveraging data from multiple sources to construct more generalizable and transportable prediction models. We introduce an adversarially robust prediction model that optimizes a worst-case reward over a class of target distributions, and we show that this model is a weighted average of the source populations' conditional outcome models. We leverage this identification result to robustify arbitrary machine learning algorithms, including, for example, high-dimensional regression, random forests, and neural networks. Within our adversarial learning framework, we propose a novel sampling method to quantify the uncertainty of the adversarially robust prediction model. Moreover, we introduce guided adversarially robust transfer learning (GART), which uses a small amount of target-domain data to guide the adversarial learning. We show that GART achieves a faster convergence rate than a model fitted using only the target data. Our comprehensive simulation studies suggest that GART can substantially outperform existing transfer learning methods, attaining higher robustness and accuracy.
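To make the weighted-average identification result concrete, the following is a minimal, hypothetical Python sketch in the spirit of maximin-style aggregation: separate models are fitted on each source population and then combined with simplex weights chosen by a worst-case quadratic criterion over the source predictions on reference covariates. The data-generating setup, the names (X_ref, robust_predict), and the specific quadratic objective are illustrative assumptions; the talk's exact class of target distributions, weighting scheme, uncertainty quantification, and GART procedure are not reproduced here.

```python
# Hypothetical sketch: aggregate per-source models into a single robust predictor
# as a weighted average, with weights obtained from a worst-case-style criterion.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Simulate L source populations whose conditional outcome models differ.
L, n, p = 3, 200, 5
betas = [rng.normal(size=p) for _ in range(L)]
sources = []
for beta in betas:
    X = rng.normal(size=(n, p))
    y = X @ beta + rng.normal(scale=0.5, size=n)
    sources.append((X, y))

# Step 1: fit an arbitrary regression model per source (here ridge regression).
models = [Ridge(alpha=1.0).fit(X, y) for X, y in sources]

# Step 2: choose aggregation weights on the probability simplex by minimizing a
# quadratic form of the source predictions evaluated on reference covariates
# (a stand-in for unlabeled target-domain covariates).
X_ref = rng.normal(size=(500, p))
P = np.column_stack([m.predict(X_ref) for m in models])  # (n_ref, L) predictions
G = P.T @ P / P.shape[0]                                  # Gram matrix of predictions

res = minimize(
    lambda q: q @ G @ q,
    x0=np.full(L, 1.0 / L),
    bounds=[(0.0, 1.0)] * L,
    constraints=[{"type": "eq", "fun": lambda q: q.sum() - 1.0}],
)
weights = res.x

def robust_predict(X_new):
    """Weighted average of the source models' predictions."""
    preds = np.column_stack([m.predict(X_new) for m in models])
    return preds @ weights

print("aggregation weights:", np.round(weights, 3))
print("robust predictions (first 5):", np.round(robust_predict(X_ref[:5]), 3))
```

In this sketch the aggregated predictor is, by construction, a convex combination of the source models, mirroring the form of the identification result described in the abstract; for linear source models the quadratic criterion coincides with the classical maximin-aggregation weighting, while the talk's framework is more general.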