Humans learn to perform many different tasks over their lifespan. In machine learning, this “continual learning” is a major unsolved challenge: artificial neural networks, in contrast to humans, suffer from catastrophic interference when learning new tasks. I’ll present a summary of recently published work, as well as preliminary findings from an ongoing project, in which we examined the choice patterns and neural responses of humans and of state-of-the-art neural networks while they learned to perform multiple categorisation tasks. Humans benefited from sequential learning of one task at a time, which appeared to allow them to learn optimally segregated representations of each task. In contrast, neural networks were only able to learn both tasks when trained in an interleaved fashion. We found that interference under sequential training occurs predominantly in deep layers of the network, which encode abstract, task-relevant variables. Furthermore, we discovered that humans with a strong prior to represent the stimuli in a way that was beneficial for rule learning benefited even more from a sequential training curriculum. We then trained neural networks to develop a similar bias in their early layers and observed that they suffered less interference between sequentially learned tasks. We have now begun to collect neuroimaging data to formally compare how task representations differ between biological and artificial information-processing systems, and to obtain empirical evidence for a mechanistic explanation of this hallmark of human cognition.
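The contrast between sequential (blocked) and interleaved training that the abstract describes can be illustrated with a deliberately minimal sketch: a one-parameter linear model trained on two tasks whose targets conflict. All values and names here are illustrative assumptions, not the actual models or tasks from the work described above.

```python
# Toy illustration of catastrophic interference (not the study's actual setup):
# a single-parameter model trained on two tasks with conflicting targets.
# Task A wants w = +1, task B wants w = -1; loss is squared error at input x = 1.

def sgd_step(w, target, lr=0.1):
    """One gradient-descent step on the loss (w - target)**2."""
    return w - lr * 2 * (w - target)

def error(w, target):
    return (w - target) ** 2

# Blocked (sequential) curriculum: all of task A, then all of task B.
w_blocked = 0.0
for _ in range(100):
    w_blocked = sgd_step(w_blocked, +1.0)   # task A phase
for _ in range(100):
    w_blocked = sgd_step(w_blocked, -1.0)   # task B phase overwrites task A

# Interleaved curriculum: alternate between the two tasks every step.
w_inter = 0.0
for _ in range(100):
    w_inter = sgd_step(w_inter, +1.0)
    w_inter = sgd_step(w_inter, -1.0)

# After blocked training, the model fits task B but has forgotten task A
# almost entirely; interleaving instead settles on a compromise that
# serves both tasks tolerably well.
print(error(w_blocked, +1.0), error(w_inter, +1.0))
```

With a shared, unsegregated representation (here, a single weight), the blocked curriculum ends near w = -1 and incurs a large error on the first task, while interleaving keeps both tasks partially satisfied; the abstract's point is that humans avoid this trade-off under blocked training, apparently by segregating the task representations rather than sharing them.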