The GMM Continuous Updating Estimator (CUE) is known to suffer from large finite-sample variability in general. The variability may be so large that the CUE does not have any moments. We first identify the underlying cause of this adverse behaviour of the CUE, namely the weighting matrix entering the CUE objective function. We then propose to resolve this problem by introducing a class of penalised CUEs, in which large values of this weighting matrix are penalised. We show that the added penalty reduces finite-sample variability and restores moments. We also analyse the higher-order properties of the penalised version, which provide guidelines for how to choose the penalty. Our preferred penalised CUE, which we call the quasi-likelihood GMM (QL-GMM) estimator, uses the log-determinant of the optimal weighting matrix as the penalty. Through simulations, we find that in practice the penalised CUE dominates the standard CUE in terms of both computational and statistical properties: the former is much easier to compute and has significantly smaller variance than the latter. The variance reduction comes at only a small cost in the form of slightly larger biases.
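The two objective functions can be sketched as follows. This is a minimal numpy illustration, not the paper's exact construction: the function names, the 1/2 scaling constants, and the toy over-identified linear model are all illustrative assumptions.

```python
import numpy as np

def cue_objectives(g):
    """CUE and penalised (QL-GMM) objective values for an (n, k) array
    of per-observation moment values g_i(theta).

    Illustrative sketch only; scaling conventions are assumptions.
    """
    n = g.shape[0]
    gbar = g.mean(axis=0)                       # sample moment mean
    omega = np.cov(g, rowvar=False, bias=True)  # sample variance of moments
    # CUE objective: quadratic form in the inverse weighting matrix
    cue = 0.5 * n * gbar @ np.linalg.solve(omega, gbar)
    # QL-GMM: add a log-determinant penalty on the weighting matrix,
    # penalising the near-singular omega that inflates the CUE
    _, logdet = np.linalg.slogdet(omega)
    ql_gmm = cue + 0.5 * logdet
    return cue, ql_gmm

# Toy over-identified linear model: y = x*theta + e with two instruments z
rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=(n, 2))
x = z @ np.array([1.0, 0.5]) + rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

def moments(theta):
    # (n, 2) array of instrument-times-residual moment values
    return z * (y - x * theta)[:, None]

cue_val, ql_val = cue_objectives(moments(2.0))
```

Minimising `ql_gmm` rather than `cue` over theta is the idea behind the penalised estimator: the penalty discourages parameter values at which the estimated weighting matrix becomes large or ill-conditioned.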