We propose an algorithm for machine learning of conditional distribution functions for a dependent variable (Y) with continuous support. The algorithm produces a complete description of the conditional distribution function at all observed points in the covariate (X) space, and provides a similar estimate for other possible covariate values. The descriptions it provides are quite general and are globally valid conditional densities.

The algorithm is multilayered and feed-forward. Each layer has the same statistical interpretation: layer k takes a vector e^(k-1) that is nearly perfectly marginally Gaussian and makes it more marginally Gaussian and more independent of X. It does this by applying a continuous monotonic transformation that varies depending on an observation's X value. Each layer is estimated by an elastic net regularization of maximum likelihood. We demonstrate Wilks' phenomenon for the composite algorithm and show how to calculate the algorithm's effective dimension.
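To make the layer idea concrete, here is a minimal sketch of what a single layer might look like: an X-dependent location/scale transform that is monotone in the input, fit by minimizing a Gaussian negative log-likelihood with an elastic-net penalty. The abstract does not specify the transform family, the optimizer, or the penalty weights, so the affine form, the coordinate-search fit, and all parameter names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical): the residual e^(0) is Gaussian but its mean and
# scale depend on X, so it is neither standard normal nor independent of X.
n = 500
X = rng.uniform(-1, 1, size=n)
e = rng.normal(0.5 * X, 1.0 + 0.4 * X)

def layer_nll(theta, X, e, l1=0.01, l2=0.01):
    """Negative Gaussian log-likelihood of the transformed residual,
    plus an elastic-net penalty on the X-dependent coefficients."""
    a0, a1, b0, b1 = theta
    shift = a0 + a1 * X                 # X-dependent location
    log_scale = b0 + b1 * X             # X-dependent log-scale
    z = (e - shift) * np.exp(-log_scale)  # monotone in e for every fixed X
    # Change of variables: log N(z; 0, 1) + log |dz/de|, where log|dz/de| = -log_scale.
    ll = -0.5 * z**2 - 0.5 * np.log(2 * np.pi) - log_scale
    coef = np.array([a1, b1])
    return -ll.mean() + l1 * np.abs(coef).sum() + l2 * (coef**2).sum()

# Crude derivative-free coordinate search, for illustration only.
theta = np.zeros(4)
step = 0.5
for _ in range(60):
    for j in range(4):
        for d in (-step, step):
            trial = theta.copy()
            trial[j] += d
            if layer_nll(trial, X, e) < layer_nll(theta, X, e):
                theta = trial
    step *= 0.9

a0, a1, b0, b1 = theta
z = (e - (a0 + a1 * X)) * np.exp(-(b0 + b1 * X))
print(round(z.mean(), 2), round(z.std(), 2))  # closer to (0, 1) than e was
```

Stacking such layers, each one pushing its input closer to a standard normal independent of X, yields the feed-forward composition the abstract describes; inverting the composed monotone map at any X recovers a conditional distribution for Y.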
Please sign up for meetings here: docs.google.com/spreadsheets/d/1qPQqXivNYBDNJY_0OdHcZfjslHLu5UtSVQMd0LpETqc/edit#gid=0