Human neuroimaging of predictive processing, Stephanie Rossit
Decoding the sound of hand-object interactions in early somatosensory cortex
Fraser Smith, Bruno Giordano, Amanda Kaas and Kerri Bailey (University of East Anglia; CNRS and Aix Marseille University, France; Maastricht University, The Netherlands)
Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influence from both within- and across-modality connections. Such connections provide one way for prior experience and the current context to shape the responses of early sensory brain areas. Recently we have shown that cross-modal connections from vision to primary somatosensory cortex (S1) transmit content-specific information about familiar, but not unfamiliar, visual object categories [1]. In the present work, we investigated whether hearing sounds depicting familiar hand-object interactions would also trigger such activity in S1. In a rapid event-related fMRI experiment, right-handed participants (N=10) listened to five exemplars from each of three categories of auditory stimuli: hand-object interactions (e.g. bouncing a ball), animal calls (e.g. a dog barking), and pure tones (unfamiliar control). Participants listened attentively and performed a one-back repetition counting task, which eliminated any need for a motor response during scanning. An independent finger-mapping localizer was completed afterwards and used to define finger-sensitive voxels within anatomically drawn masks of the right and left post-central gyrus (PCG). Multivariate pattern analysis revealed significant decoding of different hand-object interactions within bilateral PCG, whilst no significant decoding was found for either control category. Crucially, in the finger-selective voxels of left PCG, decoding accuracies were significantly higher for hand-object interactions than for both control categories. Thus cross-modal connections from audition to early somatosensory cortex transmit content-specific information about familiar hand-object sounds. This finding is consistent with predictive coding models, which suggest that the key goal of even the earliest sensory brain areas is to use the current context, together with prior knowledge, to predict forthcoming stimulation.
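The multivariate pattern analysis described above can be illustrated with a toy example. The sketch below is not the authors' actual pipeline; the ROI patterns, labels, and run structure are simulated. It uses a correlation-based nearest-class classifier with leave-one-run-out cross-validation, a common approach to decoding stimulus identity from voxel patterns.

```python
import numpy as np

def leave_one_run_out_decode(patterns, labels, runs):
    """Correlation-based nearest-class decoding with leave-one-run-out CV.

    patterns : (n_trials, n_voxels) array of ROI activity patterns
    labels   : (n_trials,) condition label per trial
    runs     : (n_trials,) scanner-run index per trial
    Returns mean decoding accuracy across held-out runs.
    """
    correct = []
    for test_run in np.unique(runs):
        train, test = runs != test_run, runs == test_run
        classes = np.unique(labels[train])
        # Class templates: mean training pattern per condition.
        templates = np.array([patterns[train & (labels == c)].mean(axis=0)
                              for c in classes])
        for x, y in zip(patterns[test], labels[test]):
            # Assign each held-out pattern to its most-correlated template.
            r = [np.corrcoef(x, t)[0, 1] for t in templates]
            correct.append(classes[int(np.argmax(r))] == y)
    return float(np.mean(correct))

# Simulated data: 5 sound exemplars x 8 runs, 100 finger-sensitive voxels,
# with a weak exemplar-specific signal added to noise.
rng = np.random.default_rng(0)
n_exemplars, n_runs, n_voxels = 5, 8, 100
labels = np.tile(np.arange(n_exemplars), n_runs)
runs = np.repeat(np.arange(n_runs), n_exemplars)
signal = rng.normal(size=(n_exemplars, n_voxels))
patterns = signal[labels] + rng.normal(scale=2.0, size=(len(labels), n_voxels))

acc = leave_one_run_out_decode(patterns, labels, runs)
print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_exemplars:.2f})")
```

Above-chance accuracy on held-out runs is the evidence that the voxel patterns carry exemplar-specific information; in the study, significance was then assessed at the group level.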
Visual predictions in different layers of visual cortex
Lars Muckli (University of Glasgow)
Normal brain function involves the interaction of internal processes with incoming sensory stimuli. We have created a series of brain-imaging experiments that sample internal models and feedback mechanisms in early visual cortex. Primary visual cortex (V1) is the entry stage for cortical processing of visual information. We can show that there are two information counter-streams, concerned with (1) retinotopic visual input and (2) top-down predictions from internal models generated by the brain. Our results speak to the conceptual framework of predictive coding: internal models amplify or attenuate incoming information. The brain is a prediction machine, and healthy brain function strikes a balance between the precision of predictions and prediction updates driven by prediction error. Our results incorporate state-of-the-art, layer-specific, ultra-high-field fMRI and other imaging techniques. We argue that fMRI, with its capability of measuring dendritic energy consumption, is sensitive enough to record activity in different parts of layer-spanning neurons, which enriches our computational understanding of counter-stream brain mechanisms.
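The balance the abstract describes, between precision-weighted predictions and updates driven by prediction error, can be illustrated with a minimal one-dimensional toy model (an illustrative sketch, not the model tested in these experiments): an internal estimate is pulled toward the sensory input and toward its prior expectation, each weighted by its precision.

```python
# Toy precision-weighted prediction-error update (illustrative only):
# an internal estimate mu is corrected by two prediction errors, one
# bottom-up (input vs. estimate) and one top-down (prior vs. estimate),
# each scaled by the precision (inverse variance) of its source.

sensory_input = 2.0   # bottom-up evidence
mu_prior = 0.0        # top-down prior expectation
pi_sensory = 4.0      # precision of the sensory input
pi_prior = 1.0        # precision of the prior

mu = mu_prior         # current internal estimate
lr = 0.05             # integration rate
for _ in range(200):
    sensory_error = pi_sensory * (sensory_input - mu)  # bottom-up error
    prior_error = pi_prior * (mu_prior - mu)           # top-down error
    mu += lr * (sensory_error + prior_error)

# mu converges to the precision-weighted average of input and prior:
# (4.0 * 2.0 + 1.0 * 0.0) / (4.0 + 1.0) = 1.6
print(f"posterior estimate: {mu:.3f}")
```

Raising the sensory precision drags the estimate toward the input; raising the prior precision drags it toward the expectation, the trade-off the abstract frames as precision of prediction versus prediction update.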
Hand-selective areas in the ventral and dorsal visual streams represent how to appropriately grasp 3D tools
Stéphanie Rossit, Fraser W. Smith, Courtney Mansfield, Diana Tonin, Holly Weaver, Jenna Green, Janak Saada and Ethan Knights (University of East Anglia; Norfolk and Norwich University Hospitals NHS Foundation Trust)
Tools are manipulable objects that, unlike other objects in the world (e.g., buildings), are tightly linked to highly predictable action procedures. Neuroimaging has revealed a left-lateralized network of dorsal and ventral visual stream regions for tool use, but the exact role of these regions remains unclear. Moreover, studies involving actual hand actions with real tools are rare, as most research to date has used proxies for tool use, such as 2D visual stimuli (e.g., pictures) or pantomimes. Here we investigated, with real 3D tools, whether the human brain represents object-specific functional grasps, using functional magnetic resonance imaging (fMRI) with multi-voxel pattern analysis (MVPA). Specifically, we tested whether patterns of brain activity would differ depending on whether a grasp was consistent or inconsistent with how tools are typically grasped for use (e.g., grasping a knife by its handle rather than by its serrated edge). In a block-design fMRI paradigm, 19 participants grasped the left or right sides of 3D-printed tools (kitchen utensils) and non-tool objects (bar-shaped objects) with the right hand. Importantly, and unknown to participants, by varying movement direction (right/left) the tool grasps were performed in either a typical (by the handle) or atypical (by the business end) manner. In addition, separate perceptual localizer runs were obtained for each participant to functionally define regions of interest (ROIs). ROI MVPA showed that typical vs. atypical grasping could be decoded with significantly higher accuracy for tools than for non-tools in hand-selective (but not tool-, body- or object-selective) regions of the left lateral occipitotemporal cortex (LOTC) and intraparietal sulcus (IPS). These findings indicate that representations of how to appropriately grasp tools are automatically evoked (even when irrelevant to task performance) in hand-selective regions of LOTC and IPS.
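The group-level comparison reported above (higher typical-vs-atypical decoding for tools than for non-tools) is the kind of effect typically assessed with a paired test on per-participant decoding accuracies. A minimal sketch, using made-up accuracy values rather than the study's data:

```python
import math

def paired_t(a, b):
    """Paired t-statistic for two matched samples, e.g. per-participant
    decoding accuracies in two conditions (tools vs. non-tool objects)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-participant decoding accuracies (chance = 0.5):
tools     = [0.62, 0.58, 0.66, 0.55, 0.60, 0.63, 0.57, 0.61]
non_tools = [0.51, 0.49, 0.53, 0.50, 0.48, 0.52, 0.50, 0.51]
t = paired_t(tools, non_tools)
print(f"paired t({len(tools) - 1}) = {t:.2f}")
```

The resulting t-statistic is compared against the critical value for n-1 degrees of freedom; a reliably positive difference across participants indicates that the tool condition carries more grasp-typicality information than the control condition.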
[1] Smith, F. W., & Goodale, M. A. (2015). Decoding visual objects in early somatosensory cortex. Cerebral Cortex, 25, 1020-1031.