The GMM continuous updating estimator (CUE) is known to suffer from large finite-sample variability in general. The variability may be so large that the CUE fails to have any moments. We first identify the underlying cause of this adverse behaviour of the CUE, namely the weighting matrix entering the CUE objective function. We then propose to resolve the problem by introducing a class of penalised CUEs, in which large values of this weighting matrix are penalised. We show that the added penalty reduces finite-sample variability and restores moments. We also analyse the higher-order properties of the penalised version, which provide guidelines on how to choose the penalty. Our preferred penalised CUE, which we call the quasi-likelihood GMM (QL-GMM) estimator, uses the log-determinant of the optimal weighting matrix as the penalty. Through simulations, we find that in practice the penalised CUE dominates the standard CUE in terms of both computational and statistical properties: the former is substantially easier to compute and has significantly smaller variance than the latter. The variance reduction comes at only a small price in terms of slightly larger biases.
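To fix ideas, the construction described above can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: it estimates the mean of a normal sample from two overidentifying moment conditions, using the standard CUE objective and a penalised variant that adds the log-determinant of the (continuously updated) weighting matrix. The example data-generating process, the penalty weight `tau`, and the exact scaling of the penalty term are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 500
x = rng.normal(1.0, 1.0, n)  # sample from N(mu=1, sigma^2=1)

def moments(theta):
    # Two moment conditions for the mean of N(theta, 1):
    # E[x - theta] = 0 and E[x^2 - theta^2 - 1] = 0 (overidentified).
    return np.column_stack([x - theta, x**2 - theta**2 - 1.0])

def objective(theta, tau=0.0):
    g = moments(theta)
    gbar = g.mean(axis=0)
    # Continuously updated weighting matrix: inverse of the (uncentred)
    # second-moment matrix of the moment functions, re-evaluated at each theta.
    omega = g.T @ g / n
    W = np.linalg.inv(omega)
    q = n * gbar @ W @ gbar  # standard CUE objective
    if tau > 0.0:
        # Penalty increasing in W: the log-determinant of the weighting
        # matrix, so near-singular omega (exploding W) is penalised.
        # Sign convention and scaling by tau are illustrative assumptions.
        _, logdet_W = np.linalg.slogdet(W)
        q += tau * logdet_W
    return q

cue = minimize_scalar(objective, bounds=(-5.0, 5.0), method="bounded").x
penalised = minimize_scalar(lambda t: objective(t, tau=1.0),
                            bounds=(-5.0, 5.0), method="bounded").x
print("CUE estimate:", cue)
print("penalised CUE estimate:", penalised)
```

Both estimates should land near the true mean of 1 in this well-behaved design; the penalty matters in designs where the updated weighting matrix can become near-singular at some parameter values, which is where the unpenalised CUE's heavy tails originate.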