Using artificial orthographies as a window onto how the brain learns to read

My work explores how readers of alphabetic languages learn to break words down into letters and map these onto sounds, as well as to recognize words and access their meanings. In a meta-analysis of neuroimaging studies of reading, Taylor, Rastle, and Davis (2013) showed that, in accordance with cognitive models of reading, there is evidence for two pathways to reading in the brain: a dorsal pathway that maps spelling to sound, and a ventral pathway that maps spelling to meaning. To explore the role of these brain regions in learning, I will discuss artificial orthography experiments in which adults learned to read novel words written in novel symbols. Experiment 1 (Taylor, Rastle, & Davis, 2017) examined how a focus on print-to-sound versus print-to-meaning associations influences performance and neural activity when learning to read. It revealed that letter–sound knowledge is crucial both for learning to read aloud and for comprehending words, and that left-hemisphere dorsal brain regions (inferior parietal cortex, inferior frontal gyrus) are of primary importance in the earliest stages of literacy acquisition. Experiment 2 used representational similarity analysis (RSA) to probe how ventral occipitotemporal cortex (vOT) represents newly learned words. This analysis revealed that, after only two weeks of training, mid-to-anterior vOT abstracts across letter position (i.e., it responds similarly to the B in BAD and the B in CAB). Furthermore, in the left hemisphere, anterior vOT representations abstract away from visual form entirely, capturing similarity between the sounds and meanings of words.
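
To make the RSA logic concrete, the sketch below is a toy Python example, not the analysis code from Experiment 2: the word list, the letter-overlap models, and the placeholder neural patterns are all hypothetical stand-ins. It builds two model representational dissimilarity matrices (RDMs), one position-specific and one position-invariant, and rank-correlates each with a neural RDM computed from response patterns; a region that abstracts across letter position, as mid-to-anterior vOT did here, should track the position-invariant model more closely.

```python
import itertools
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Toy word set: "bad" and "cab" share a B in different positions, so the
# two letter-overlap models below make different predictions about them.
words = ["bad", "cab", "bat", "tab", "dig", "gid"]

def overlap(w1, w2, position_specific):
    """Count shared letters, either position-by-position or anywhere."""
    if position_specific:
        return sum(a == b for a, b in zip(w1, w2))
    return len(set(w1) & set(w2))

def model_rdm(words, position_specific):
    # Dissimilarity = word length minus letter overlap, for every pair,
    # in the same condensed (i < j) order that pdist uses.
    n = len(words[0])
    return np.array([n - overlap(a, b, position_specific)
                     for a, b in itertools.combinations(words, 2)],
                    dtype=float)

rdm_specific = model_rdm(words, position_specific=True)
rdm_invariant = model_rdm(words, position_specific=False)

# Placeholder neural data (n_items x n_voxels); in a real analysis these
# would be fMRI response patterns from a vOT region of interest.
rng = np.random.default_rng(0)
patterns = rng.normal(size=(len(words), 100))
neural_rdm = pdist(patterns, metric="correlation")

# RSA step: Spearman rank correlation between each model RDM and the
# neural RDM. Position-invariant coding predicts the second wins.
for name, model in [("position-specific", rdm_specific),
                    ("position-invariant", rdm_invariant)]:
    rho, p = spearmanr(model, neural_rdm)
    print(f"{name} model: rho = {rho:.3f} (p = {p:.3f})")
```

With random placeholder patterns both correlations hover near zero; the point of the sketch is only the comparison structure, which carries over unchanged when real region-of-interest data are substituted for `patterns`.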