Humans can discover and exploit shared structure in a problem domain to improve learning performance, to the point of being able to learn from a very limited amount of data. The theory of meta-learning hypothesizes that such fast learning is supported by slower learning processes that unfold over many problem instances. While a number of artificial meta-learning algorithms have been proposed, the biological mechanisms that support this form of learning are largely unknown. Here, we present a biologically plausible meta-learning rule in which synaptic changes are buffered and contrasted across more than one problem before being consolidated. Our rule is theoretically justified and, unlike standard machine learning methods, it does not require reversing learning trajectories in time or evaluating second-order derivatives, two operations that are difficult to conceive in neural circuits. Experiments reveal that our meta-learning rule enables deep neural network models to learn new tasks from few labeled examples. We conclude by discussing a systems model in which the hippocampus plays the role of an instructor that prescribes auxiliary learning problems to the cortex. Our theory suggests that the concerted action of hippocampus and cortex may enable meta-learning to be implemented using a simple synaptic plasticity rule.
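To make the buffer-and-contrast idea concrete, the sketch below shows one way a first-order meta-update of this general flavor can be written, without claiming to reproduce the paper's actual rule. It buffers the weight changes produced by fast learning on sampled problem instances and consolidates slow, shared weights by contrasting the buffered endpoint against the current slow weights (a Reptile-style first-order scheme, used here purely for illustration); the toy linear-regression tasks, the learning rates, and all variable names are illustrative assumptions, not details from the paper. Note that no learning trajectory is reversed and no second-order derivative is evaluated.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_updates(w, X, y, lr=0.1, steps=5):
    """Fast learning: a few plain SGD steps on a linear-regression
    problem, returning the resulting (buffered) weights."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # first-order gradient only
        w -= lr * grad
    return w

# Consolidated ("slow") weights, shared across problem instances.
w_slow = np.zeros(3)
meta_lr = 0.5

for _ in range(200):
    # Sample a problem instance with shared structure: tasks cluster
    # around a common underlying linear map (illustrative toy domain).
    w_true = np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(3)

    # Main problem, then an auxiliary variant of the same problem
    # (here simply a second batch of data from the same task).
    X1 = rng.standard_normal((20, 3)); y1 = X1 @ w_true
    X2 = rng.standard_normal((20, 3)); y2 = X2 @ w_true

    # Buffer synaptic changes across both problems before consolidating.
    w_fast = inner_updates(w_slow, X1, y1)
    w_fast = inner_updates(w_fast, X2, y2)

    # Contrast the buffered fast weights with the slow weights and
    # consolidate: a purely first-order meta-update, with no backprop
    # through the learning trajectory.
    w_slow += meta_lr * (w_fast - w_slow)
```

Under these assumptions the slow weights drift toward the structure shared across tasks (here, the mean of the sampled linear maps), so fast learning on a new instance starts from a well-adapted initialization; the actual rule proposed in the paper differs in how the contrast is formed, and this sketch only conveys the buffered, first-order character described in the abstract.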