Our research has three main branches:
- Neurobiology of speech production: One of the major questions in speech neuroscience is what role different brain regions play in speech production. A major focus of our lab in this area is understanding the functional role of the cerebellum in speech motor control: both its typical function in healthy speakers and what goes wrong when the cerebellum is damaged in patients with ataxic dysarthria, a motor speech disorder associated with damage to or degeneration of the cerebellum.
- Mechanisms of learning in speech motor control and their neural substrates: Speech control is not fixed: speakers contend with a changing vocal tract during childhood, readily change how they speak to sound more like the people they are speaking with (even over the course of a short conversation!), and show practice-based improvement in producing novel sound sequences. These abilities are well established, yet the mechanisms that underlie this flexibility remain poorly understood. Our research in this area explores how people learn both from sensory errors (differences between what they expect to hear and what they actually hear) and from reward-based feedback provided by an external source.
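The error-based side of this learning is often described with a simple state-space adaptation model, in which an internal compensatory state is updated trial by trial in proportion to the sensory prediction error. The sketch below is purely illustrative (the parameter values and the scalar "formant shift" framing are assumptions for the example, not our lab's actual model):

```python
def simulate_adaptation(perturbation, n_trials=50, retention=0.95, learning_rate=0.2):
    """Simulate trial-by-trial adaptation to a constant sensory perturbation.

    The internal state x partially cancels the perturbation; on each trial
    the speaker updates x in proportion to the sensory prediction error
    (what was heard minus what was expected).
    """
    x = 0.0          # internal compensatory state (e.g., a formant shift in Hz)
    states = []
    for _ in range(n_trials):
        error = perturbation - x                   # residual sensory error
        x = retention * x + learning_rate * error  # leaky, error-driven update
        states.append(x)
    return states

# Example: a constant +100 Hz auditory perturbation. Because retention < 1,
# adaptation asymptotes below full compensation, as is typically observed.
trajectory = simulate_adaptation(100.0)
```

With these illustrative parameters the state rises monotonically toward an asymptote of learning_rate × perturbation / (1 − retention + learning_rate) = 80, i.e., incomplete compensation.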
- Development of a computational model of speech motor control based on feedback control: The process through which the central nervous system organizes and controls the complex anatomical structure of the vocal tract to produce speech remains poorly understood. One approach to better understanding this speech motor control system is through computational models. Importantly, models can serve to test hypotheses about the computational processes underlying human speech. The model we are developing is based on the concept of optimal feedback control. In this framework, motor commands are not preplanned, as in other theories of speech motor control; rather, they are generated based on both the current state of the vocal tract and the production goal. To allow for fast control in the presence of feedback delays, the current state is estimated from both sensory (auditory and somatosensory) information and an internal prediction of the vocal tract state, which is generated from a copy of previous motor commands. This framework has provided powerful insights in other motor domains and makes some unique predictions about speech motor control that we are currently testing.
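The estimation-and-control loop described above can be sketched in a few lines: a controller issues commands from the current state estimate and the goal, while the estimate itself blends a forward-model prediction (from an efference copy of the command) with delayed sensory feedback. Everything here — the scalar state, the one-step sensory delay, and the specific gains — is an illustrative assumption, not our actual model:

```python
def estimate_and_control(target, n_steps=40, sensory_weight=0.3, feedback_gain=0.5):
    """Drive a scalar 'vocal tract state' toward a target by feedback control
    on an internal state estimate rather than on raw, delayed sensory input."""
    true_state = 0.0   # actual state (not directly observable in real time)
    estimate = 0.0     # internal estimate of that state
    delayed_obs = 0.0  # sensory feedback arrives one step late
    history = []
    for _ in range(n_steps):
        # Controller: the command depends on the current estimate and the
        # production goal, not on a preplanned trajectory.
        command = feedback_gain * (target - estimate)
        # Plant: the true vocal tract state responds to the command.
        true_state += command
        # Forward model: predict the consequence of the command using an
        # efference copy (illustrative toy state, no real delay/noise model).
        prediction = estimate + command
        # State estimate: blend the prediction with delayed sensory feedback.
        estimate = (1.0 - sensory_weight) * prediction + sensory_weight * delayed_obs
        history.append(true_state)
        delayed_obs = true_state  # this observation only arrives next step
    return history
```

The point of the sketch is architectural: because the estimate leans on the forward-model prediction, control stays fast and stable even though the sensory channel lags, which is the role efference-copy-based prediction plays in the framework.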