The GMM continuous updating estimator (CUE) is known to suffer from large finite-sample variability in general; the variability may be so large that the CUEs do not possess any moments. We first identify the underlying cause of this adverse behaviour, namely the weighting matrix entering the CUE objective function. We then propose to resolve the problem by introducing a class of penalised CUEs, in which large values of this weighting matrix are penalised. We show that the added penalty reduces finite-sample variability and restores moments. We also analyse the higher-order properties of the penalised version; this analysis provides guidelines for choosing the penalty. Our preferred penalised CUE, which we call the quasi-likelihood GMM (QL-GMM) estimator, uses the log-determinant of the optimal weighting matrix as penalty. Through simulations, we find that in practice the penalised CUE dominates the standard CUE in terms of both computational and statistical properties: it is much easier to compute and has significantly smaller variance than the standard CUE. The variance reduction comes at only a small price in the form of slightly larger biases.
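To make the construction concrete, the display below sketches one way the two objective functions could look. The moment function $g(z,\theta)$, the uncentred variance estimator $\hat\Omega_n(\theta)$, and the unit weight on the penalty term are illustrative assumptions, not details taken from the abstract.

% Hedged sketch of the CUE and a log-determinant penalised version.
% Assumed ingredients: sample moments \bar g_n and a variance estimator
% \hat\Omega_n(\theta), re-evaluated at every candidate theta.
\[
\bar g_n(\theta) = \frac{1}{n}\sum_{i=1}^{n} g(z_i,\theta), \qquad
\hat\Omega_n(\theta) = \frac{1}{n}\sum_{i=1}^{n} g(z_i,\theta)\, g(z_i,\theta)^{\top}.
\]
% Standard CUE: the inverse weighting matrix can become very large for some
% theta, which is the source of the heavy-tailed behaviour described above.
\[
\hat\theta_{\mathrm{CUE}}
  = \arg\min_{\theta}\; n\,\bar g_n(\theta)^{\top} \hat\Omega_n(\theta)^{-1}\, \bar g_n(\theta).
\]
% QL-GMM sketch: the log-determinant penalty discourages parameter values at
% which \hat\Omega_n(\theta) blows up, in the spirit of a Gaussian
% quasi-log-likelihood; the relative weight of the penalty is an assumption.
\[
\hat\theta_{\mathrm{QL}}
  = \arg\min_{\theta}\; n\,\bar g_n(\theta)^{\top} \hat\Omega_n(\theta)^{-1}\, \bar g_n(\theta)
    \;+\; \log\det \hat\Omega_n(\theta).
\]

Read this way, the penalty acts as a barrier: directions in which the estimated weighting matrix inflates are charged through $\log\det\hat\Omega_n(\theta)$, which is consistent with the abstract's claim that penalising large values of the weighting matrix reduces variability.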