We develop joint out-of-sample tests for multiple testing problems that arise when comparing predictive accuracy using loss or utility functions that contain shape parameters. Our tests cover forecast comparison scenarios in which the shape parameter (vector) takes values in some subset of Euclidean space. We apply our tests to three such forecast evaluation problems. First, we consider hypotheses of equal (superior) expected utility between two portfolio strategies, defined over an interval of risk-aversion parameter values. Second, we consider hypotheses of equal (superior) predictive ability between two conditional quantile forecast models using Murphy diagrams. Finally, we consider hypotheses of equal (superior) predictive ability of univariate quantile forecasts of portfolio returns, as generated by multivariate models of the portfolio assets, by examining all portfolios with positive weights summing to one. In empirical applications we show that the new tests reject at least as often as benchmark tests, such as the standard Wald test or the Bonferroni multiple-testing correction, and are better behaved than the benchmarks in practice, in that p-values remain stable as we test at more elements of the multiple hypothesis. Monte Carlo experiments verify that our tests have good size and power properties in small samples.
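The multiple-testing setup can be illustrated with a toy sketch (not the paper's joint test): two competing quantile forecasts are compared via pinball loss at each point of a grid of quantile levels (the shape parameter), giving one Diebold-Mariano-style t-statistic per level, and a Bonferroni correction is applied across the grid. The data, models, and grid here are all hypothetical.

```python
import numpy as np
from statistics import NormalDist

def pinball_loss(y, q, tau):
    """Quantile (pinball) loss of forecast q for outcome y at level tau."""
    u = y - q
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(42)
n = 2000
y = rng.standard_normal(n)            # hypothetical realised outcomes
taus = np.linspace(0.05, 0.95, 19)    # grid of shape-parameter values

z = NormalDist().inv_cdf
tstats = []
for tau in taus:
    qa = z(tau)          # model A: the correct N(0,1) quantile
    qb = z(tau) + 0.3    # model B: a biased quantile forecast
    d = pinball_loss(y, qa, tau) - pinball_loss(y, qb, tau)
    # Diebold-Mariano-style t-statistic on the loss differential
    tstats.append(d.mean() / (d.std(ddof=1) / np.sqrt(n)))
tstats = np.array(tstats)

# Two-sided p-value at each grid point, then Bonferroni across the K points
phi = NormalDist().cdf
pvals = np.array([2.0 * (1.0 - phi(abs(t))) for t in tstats])
alpha, K = 0.05, len(taus)
reject_joint = bool(pvals.min() < alpha / K)
print(f"min p-value: {pvals.min():.4g}, Bonferroni reject: {reject_joint}")
```

As the abstract notes, a correction of this kind becomes increasingly conservative as the grid of shape-parameter values grows, which is one motivation for a genuinely joint test.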