Synaptic plasticity as inference

Many models of synaptic plasticity have been proposed in the theoretical neuroscience literature. Most are Hebbian or spike-timing-dependent in nature and provide dynamics that improve the estimate of the connection weights over time. But how close do these estimates come to the best possible connection strengths? In addition, these rules require learning rates, which are usually treated as hyper-parameters of the model. A fixed learning rate neglects the reality that many signals in the nervous system are temporally correlated: error signals, pre-synaptic firing activity, and so on. Under such conditions, a fixed learning rate over- or under-weights new information as conditions change; how can this be avoided?
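As a concrete illustration of the kind of rule in question (the notation here is ours, not taken from the models above), consider a delta rule for a single weight \(w_t\) with pre-synaptic input \(x_t\) and post-synaptic target \(y_t\):
\[
\Delta w_t = \eta \, x_t \left( y_t - w_t x_t \right),
\]
with a fixed learning rate \(\eta\). Because \(\eta\) is constant, every observation is weighted equally, so temporally correlated errors are treated as if they were independent pieces of evidence.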
By treating synaptic plasticity as an inference, or Bayesian filtering, problem, we can address both issues, deriving plasticity dynamics that capture the uncertainty in the weight estimate and have adaptive learning rates. In particular, we will consider models in which the feedback error to the connections is temporally correlated, and discuss the approximations necessary to obtain biologically feasible learning dynamics.
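As a minimal sketch of the filtering view, assume (again in our own notation) a random-walk prior on a scalar weight, \(w_{t+1} = w_t + \xi_t\) with \(\xi_t \sim \mathcal{N}(0, \tau^2)\), and a noisy observation \(y_t = w_t x_t + \epsilon_t\) with \(\epsilon_t \sim \mathcal{N}(0, \sigma^2)\), where the observation noise is taken as temporally white for simplicity; the correlated case requires the approximations discussed above. The scalar Kalman filter then gives closed-form dynamics for the posterior mean \(\mu_t\) and prior variance \(s_t^2\):
\[
\mu_{t+1} = \mu_t + k_t \left( y_t - \mu_t x_t \right),
\qquad
k_t = \frac{s_t^2 \, x_t}{x_t^2 s_t^2 + \sigma^2},
\qquad
s_{t+1}^2 = \frac{s_t^2 \, \sigma^2}{x_t^2 s_t^2 + \sigma^2} + \tau^2.
\]
The gain \(k_t\) plays the role of a learning rate set by the current uncertainty \(s_t^2\) rather than fixed by hand: it shrinks as the weight estimate becomes reliable and grows again as the drift term \(\tau^2\) re-inflates the uncertainty.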