Thursday, December 5, 2019

Human sensorimotor system rapidly localizes touch on a hand-held tool; somatosensory cortex efficiently extracts touch location from the tool’s vibrations, recruiting & repurposing neural processes that map touch on the body

Somatosensory Cortex Efficiently Processes Touch Located Beyond the Body. Luke E. Miller et al. Current Biology, December 5, 2019. https://doi.org/10.1016/j.cub.2019.10.043

Highlights
•    Human sensorimotor system rapidly localizes touch on a hand-held tool
•    Brain responses in a deafferented patient suggest vibrations encode touch location
•    Somatosensory cortex efficiently extracts touch location from the tool’s vibrations
•    Somatosensory cortex reuses neural processes devoted to mapping touch on the body

Summary: The extent to which a tool is an extension of its user is a question that has fascinated writers and philosophers for centuries [1]. Despite two decades of research [2, 3, 4, 5, 6, 7], it remains unknown how this could be instantiated at the neural level. To this end, the present study combined behavior, electrophysiology, and neuronal modeling to characterize how the human brain could treat a tool like an extended sensory “organ.” As with the body, participants localize touches on a hand-held tool with near-perfect accuracy [7]. This behavior reflects the ability of the somatosensory system to rapidly and efficiently use the tool as a tactile extension of the body. Using electroencephalography (EEG), we found that where a hand-held tool was touched was immediately coded in the neural dynamics of primary somatosensory and posterior parietal cortices of healthy participants. We found similar neural responses in a proprioceptively deafferented patient with spared touch perception, suggesting that location information is extracted from the rod’s vibrational patterns. Simulations of mechanoreceptor responses [8] suggested that the speed at which these patterns are processed is highly efficient. A second EEG experiment showed that touches on the tool and arm surfaces were localized by similar stages of cortical processing. Multivariate decoding algorithms and cortical source reconstruction provided further evidence that early limb-based processes were repurposed to map touch on a tool. We propose that an elementary strategy the human brain uses to sense with tools is to recruit primary somatosensory dynamics otherwise devoted to the body.

Data for the EEG experiments and skin-neuron modeling: https://osf.io/c4qmr

Results and Discussion

In somatosensory perception, there is evidence in many species that intermediaries are treated like non-neural sensory extensions of the body [9]. For example, some spiders actively use their web as an extended sensory “organ” to locate prey [10]. Analogously, humans can use tools to sense the properties of objects from a distance [11, 12, 13], such as when a blind person uses a cane to probe the surrounding terrain. This sensorimotor ability is so advanced that humans can almost perfectly localize touch on the surface of a tool [7], suggesting a strong parallel with tactile localization on the body. Characterizing the neural dynamics of tool-extended touch localization provides us with a compelling opportunity to investigate the boundaries of somatosensory processing and hence the question of sensory embodiment: to what extent does the human brain treat a tool like an extended sensory organ?

From a theoretical perspective [7], the sensory embodiment of a tool predicts that, as is the case with biological sense organs, the cerebral cortex (1) rapidly extracts location-based information from changes in a tool’s sensory-relevant mechanical state (e.g., vibrations) and (2) makes this information available to the somatosensory system in an efficient manner. The peripheral code for touch location is likely different for skin (e.g., a place code) and tools (e.g., a temporal code) [7]. Transforming a temporal code into a spatial one, a necessary step for tool-extended localization, is a non-trivial task for the brain. We predict that, to do so efficiently, (3) the brain repurposes low-level processing stages dedicated to localizing touch on the body to localize touch on a tool. Direct evidence for sensory embodiment requires understanding how the structural dynamics of tools couple to the neural dynamics of the cortex. Such evidence can be obtained using neuroimaging methods with high temporal resolution. To this end, we combined electroencephalography (EEG) and computational modeling to test the aforementioned predictions.
The Cerebral Cortex Rapidly Processes Where a Tool Was Touched

In an initial experiment (n = 16), participants localized touches applied to the surface of a 1-m wooden rod (Figure 1A) while we recorded their cortical dynamics using EEG. We designed a delayed match-to-sample task that forced participants to compare the locations of two touches (delivered via solenoids; Figure S1A) separated in time (Figure 1B). If the two touches were felt to be in different locations, participants made no overt response. If they were felt to be in the same location, participants used a pedal with their left foot (ipsilateral to the hand holding the rod) to report whether the touches were close to or far from their hand. Participants had never used the rod before the experiment and never received performance feedback. As a result, they had to rely on pre-existing internal models of tool dynamics [14]. Regardless, accuracy was near ceiling for all participants (mean: 96.4% ± 0.71%; range: 89.7%–99.5%), consistent with our prior finding [7].

When a stimulus feature is repeatedly presented to a sensory system, the responses of the neural populations representing that feature are suppressed [15]. Such repetition effects are a well-accepted method for timestamping when specific features of a complex input are extracted [16], and repetition paradigms have previously been used to characterize how sensory signals are mapped by sensorimotor cortices [17, 18, 19, 20]. Our experimental paradigm allowed us to leverage repetition suppression to characterize when the brain extracts where the rod was touched. Specifically, the amplitude of evoked brain responses reflecting the processing of impact location should be reduced when the rod is hit at the same location twice in a row, compared to when it is hit at two distinct locations (Figure 1C).
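
To make the logic of this contrast concrete, here is a minimal NumPy sketch of the suppression measure. The epoch dimensions, sampling rate, and data are placeholder assumptions, not the authors' pipeline; only the shape of the analysis is illustrated.

    # Minimal sketch of the location-repetition contrast, assuming baseline-
    # corrected EEG epochs sampled at 1 kHz; the data here are random
    # placeholders, not the recorded signals.
    import numpy as np

    t = np.arange(-100, 300)  # epoch time in ms relative to rod contact

    rng = np.random.default_rng(0)
    # Hypothetical epochs, shape (n_trials, n_channels, n_times):
    same_loc = rng.normal(size=(200, 64, t.size))  # 2nd touch repeats location
    diff_loc = rng.normal(size=(200, 64, t.size))  # 2nd touch at a new location

    # Evoked responses: trial averages per condition.
    erp_same = same_loc.mean(axis=0)
    erp_diff = diff_loc.mean(axis=0)

    # Repetition suppression predicts smaller responses to repeated locations,
    # so the difference (different minus same) indexes location coding.
    suppression = erp_diff - erp_same  # (n_channels, n_times)

    # Summarize the effect in the 48-108 ms window reported below.
    win = (t >= 48) & (t <= 108)
    print("mean suppression, 48-108 ms:", suppression[:, win].mean())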

We first characterized the cortical dynamics of tool-extended touch localization. Touching the surface of the rod led to widespread evoked responses over contralateral regions (Figures S1 and S3), starting ∼24 ms after contact (Figure S1B); this time course is consistent with the known conduction delays between upper limb nerves and primary somatosensory cortex [21]. A nonparametric cluster-based permutation analysis identified significant location-based repetition suppression in a cluster of sensorimotor electrodes between 48 and 108 ms after contact (p = 0.003; Figures 1D–1I, S1C, S1D, and S3; Table S1). This cluster spanned two well-characterized processing stages previously identified for touch on the body: (1) recurrent sensory processing within primary somatosensory (SI) and motor (MI) cortices between 40 and 60 ms after stimulation [22], which has been implicated in spatial processing [23, 24], and (2) feedforward and feedback processing between SI, MI, and posterior parietal regions between 60 and 100 ms after stimulation [25, 26], proposed to contribute to transforming a sensory map into a higher-level spatial representation [18, 27]. This suppression was too quick to reflect signals related to motor preparation/inhibition, which generally occur ∼140 ms after touch [28].
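
For readers unfamiliar with cluster-based permutation statistics, the sketch below illustrates the general approach on synthetic difference waves using MNE-Python's implementation; the paper's actual analysis, electrode selection, and parameters will differ.

    # Sketch of a nonparametric cluster-based permutation test on paired
    # difference waves, via MNE-Python; 16 subjects matches the study, but
    # the data, electrode, and parameters here are synthetic assumptions.
    import numpy as np
    from mne.stats import permutation_cluster_1samp_test

    rng = np.random.default_rng(1)
    n_subj, n_times = 16, 400  # -100..300 ms at 1 kHz (assumed)

    # Per-subject difference waves (different minus same location) at one
    # sensorimotor electrode, with an effect injected at 48-108 ms.
    diff_wave = rng.normal(0.0, 1.0, size=(n_subj, n_times))
    diff_wave[:, 148:208] += 0.8

    t_obs, clusters, cluster_pv, h0 = permutation_cluster_1samp_test(
        diff_wave, n_permutations=1000, tail=0, seed=0)

    for cluster, p in zip(clusters, cluster_pv):
        if p < 0.05:
            print("significant cluster:", cluster, "p =", p)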
Location-based Repetition Suppression Is Driven by Vibratory Signals

We previously suggested that, during tool-extended sensing, where a rod is touched is encoded pre-neuronally by patterns of vibration (i.e., vibratory motifs; Figures S1E and S1F). When a rod transiently contacts an object, specific resonant modes (100–1,000 Hz for long wooden rods) are selectively excited, giving rise to vibratory motifs that unequivocally encode touch location [7]. These rapid oscillations are superimposed on a slowly evolving rigid motion that places a load on the participant’s fingers and wrist. Given that the somatosensory system is sensitive both to slowly varying loads (via proprioception) and to rapid vibrations imposed on the hand (via touch), these two signals are difficult to disentangle experimentally.
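
The location-dependence of these motifs can be illustrated with a toy modal model. The sketch below assumes idealized pinned-pinned beam mode shapes, and its modal frequencies and decay rates are invented; a hand-held rod's true boundary conditions differ, but the point carries over: contact location sets the weighting of the excited resonant modes.

    # Toy modal model of location-dependent vibratory motifs, assuming
    # idealized pinned-pinned beam mode shapes; frequencies and decay rates
    # are invented for illustration.
    import numpy as np

    L = 1.0                                   # rod length in m
    freqs = np.array([120.0, 480.0, 1000.0])  # assumed modal frequencies (Hz)
    decay = np.array([30.0, 60.0, 120.0])     # assumed decay rates (1/s)
    t = np.arange(0.0, 0.05, 1e-5)            # 50 ms after impact

    def motif(x_contact):
        """Vibration felt at the hand after an impact at x_contact (0..L)."""
        modes = np.arange(1, freqs.size + 1)
        # Modal weights follow the mode shape at the contact point.
        w = np.sin(modes * np.pi * x_contact / L)
        return sum(wi * np.exp(-di * t) * np.sin(2 * np.pi * fi * t)
                   for wi, di, fi in zip(w, decay, freqs))

    near, far = motif(0.2), motif(0.8)  # touch near vs. far from the hand
    print("motifs differ across locations:", not np.allclose(near, far))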

To adjudicate between the contributions of these two signals, we repeated experiment 1 with a deafferented participant (DC) who lost proprioception in her right upper limb (33% accuracy in clinical testing) following the resection of a tumor near the right medulla oblongata [29]. Importantly, light touch was largely spared in her right limb (100% accuracy). DC completed the EEG experiment while holding the rod in her deafferented right hand and in her intact left hand (separate blocks). Her behavioral performance was good for both the intact (72%) and deafferented (77%) limbs. Crucially, her neural dynamics exhibited the same location-based repetition suppression for both limbs, with a magnitude comparable to that of the healthy participants (Figures 1F, 1I, and S2). Though this does not exclude possible contributions from slowly varying rigid motion (when available), it strongly suggests that the observed suppression was largely driven by information encoded in the vibrations.
Processing of Vibratory Motifs Is Temporally Efficient

We used a biologically plausible skin-neuron model [8] to quantify how efficiently the brain extracts touch location on a tool. According to the principles of efficient coding, sensory cortices attempt to rapidly and sparsely represent the spatiotemporal statistics of the natural environment with minimal information loss [30]. DC’s results suggest that the brain uses vibratory motifs to extract contact location on a rod. It has been hypothesized that the spiking patterns of Pacinian afferents encode object-to-object contact during tool use [31], a claim for which we previously found model-based evidence [7]. This temporal code must be decoded in somatosensory processing regions, perhaps as early as the cuneate nucleus [32].
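
As a rough illustration of how a vibratory motif could be turned into afferent spikes, the following toy neuron band-pass filters a single motif and emits a spike at each threshold crossing. The filter band, threshold, and input signal are all assumptions, standing in for, not reimplementing, the published skin-neuron model [8].

    # Toy Pacinian-like afferent: band-pass the motif to mimic the afferent's
    # frequency tuning, then spike at threshold crossings. Band, threshold,
    # and input are assumptions for illustration only.
    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 100_000                     # simulation rate in Hz
    t = np.arange(0.0, 0.05, 1 / fs)
    vib = np.exp(-60 * t) * np.sin(2 * np.pi * 480 * t)  # one vibratory motif

    # Second-order band-pass approximating Pacinian sensitivity (assumed band).
    sos = butter(2, [80, 1000], btype="bandpass", fs=fs, output="sos")
    drive = sosfilt(sos, vib)

    # Spike at each upward crossing of a fixed threshold on rectified drive.
    above = np.abs(drive) > 0.2
    spike_idx = np.flatnonzero(np.diff(above.astype(int)) == 1) + 1
    spikes_ms = t[spike_idx] * 1e3
    if spikes_ms.size:
        print(f"{spikes_ms.size} spikes; first at {spikes_ms[0]:.2f} ms")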

We derived an estimate of “maximal efficiency” by quantifying the time course of location encoding in a simulated population of Pacinian afferents in the hand (Figures 2A and 2B). Support-vector machine (SVM) classification revealed a temporal code that was unambiguous about contact location within 20 ms (Figure 2C). This code was sparse and efficient, requiring only 4.6 ± 1.7 spikes per afferent. Taking into account the known conduction delays between first-order afferents and SI [21], this finding, along with our prior study [7], suggests that location encoding within 35–40 ms would reflect an efficient representational scheme. The early suppression observed in experiment 1 (Figures 1D–1I) is consistent with this estimate, suggesting that somatosensory cortex treats these temporal spike patterns as meaningful tactile features, allowing humans to efficiently use a rod as an extended sensor.
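
The decoding logic can be sketched as follows: an SVM classifies contact location from population spike counts in expanding post-contact windows, with accuracy rising as the window grows. The synthetic firing patterns below are stand-ins for the simulated Pacinian population; only the shape of the analysis, not the numbers, mirrors the paper.

    # Sketch of the decoding analysis: classify contact location from
    # population spike counts in expanding post-contact windows; the rate
    # patterns are hypothetical.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    n_afferents, n_locations, n_trials = 50, 4, 40
    # One hypothetical rate pattern (spikes/ms) per contact location.
    patterns = rng.gamma(2.0, 0.1, size=(n_locations, n_afferents))

    def dataset(window_ms):
        """Poisson spike counts per afferent within window_ms of contact."""
        X, y = [], []
        for loc in range(n_locations):
            for _ in range(n_trials):
                X.append(rng.poisson(patterns[loc] * window_ms))
                y.append(loc)
        return np.array(X), np.array(y)

    # Decoding accuracy should rise with window length, mirroring the
    # finding that location becomes unambiguous within ~20 ms.
    for window_ms in (5, 10, 20):
        X, y = dataset(window_ms)
        acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
        print(f"{window_ms:>2} ms window: decoding accuracy = {acc:.2f}")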
