Friday, December 18, 2020

Early blindness does not critically impact the development of the “social brain”: social tasks performed on the basis of auditory or tactile information drive node activations consistent with those of sighted individuals

Social cognition in the blind brain: A coordinate‐based meta‐analysis. Maria Arioli, Emiliano Ricciardi, Zaira Cattaneo. Human Brain Mapping, December 15, 2020. https://doi.org/10.1002/hbm.25289

Abstract: Social cognition skills are typically acquired on the basis of visual information (e.g., the observation of gaze, facial expressions, gestures). In light of this, a critical issue is whether and how the lack of visual experience affects the neurocognitive mechanisms underlying social skills. This issue has been largely neglected in the literature on blindness, even though difficulties in social interactions may be particularly salient in the life of blind individuals (especially children). Here we provide a meta‐analysis of neuroimaging studies reporting brain activations associated with the representation of the self and of others in early blind individuals and in sighted controls. Our results indicate that early blindness does not critically impact the development of the “social brain,” with social tasks performed on the basis of auditory or tactile information driving consistent activations in nodes of the action observation network, which is typically active during actual observation of others in sighted individuals. Interestingly though, activations along this network appeared more left‐lateralized in blind than in sighted participants. These results may have important implications for the development of specific training programs to improve social skills in blind children and young adults.

4 DISCUSSION

The study of the neural bases of social cognition in the blind brain has been somewhat neglected, with only a few studies specifically investigating whether and how the lack of visual input affects the functional architecture of the “social brain.” Some studies showed similar patterns of brain activity in early blind and sighted individuals during tasks tapping social cognition abilities (Bedny et al., 2009; Ricciardi et al., 2009), while other studies suggested that social brain networks develop differently following early visual deprivation (Gougoux et al., 2009; Holig et al., 2014). The inconsistencies reported in the neuroimaging literature on social processing in blind individuals may also reflect confounds associated with individual studies, for example, the influence of experimental and analytic procedures as well as small sample sizes (Carp, 2012). Moreover, the effects reported by individual studies are harder to generalize to the entire target group (here, the early blind), regardless of the specific procedures used (Muller et al., 2018).

In light of this, we pursued a meta‐analytic approach to isolate the most consistent results in the available literature, controlling for possible confounding effects via stringent criteria for study selection. In particular, we aimed to investigate: (a) the neural coding of others' representation in early blind individuals, and (b) the specific brain regions recruited in early blind compared with sighted control individuals during social processing of others. Although we could count on only a small number of contrasts, preventing more detailed analyses such as a direct comparison of social‐brain nodes potentially engaged differently by auditory and haptic inputs, our sample is in line with current recommendations for ALE meta‐analyses (Eickhoff et al., 2016; Muller et al., 2018).
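For readers unfamiliar with how ALE (activation likelihood estimation) quantifies convergence across studies, the minimal Python sketch below illustrates the core computation on a toy one‐dimensional grid: each study's reported peak coordinates (foci) are smoothed with a Gaussian kernel into a modeled‐activation (MA) map, and the ALE score at each voxel is the probabilistic union of the MA values across studies. All parameters here (grid size, kernel width, focus positions) are illustrative assumptions, not the pipeline used by Arioli and colleagues.

import numpy as np

def modeled_activation(foci, grid, sigma):
    # Per-focus Gaussian "activation probability" at every grid position.
    foci = np.asarray(foci, dtype=float)
    probs = np.exp(-((grid[None, :] - foci[:, None]) ** 2) / (2.0 * sigma ** 2))
    # Probabilistic union across this study's foci (the original Turkeltaub
    # formulation; newer ALE variants take the voxelwise maximum instead).
    return 1.0 - np.prod(1.0 - probs, axis=0)

def ale_map(studies, grid, sigma=5.0):
    # ALE score per voxel: probabilistic union of the per-study MA maps,
    # so high scores mark locations where many studies report nearby foci.
    ma = np.array([modeled_activation(f, grid, sigma) for f in studies])
    return 1.0 - np.prod(1.0 - ma, axis=0)

# Toy example: a 1-D "brain" of 100 voxels and three hypothetical studies
# reporting peak coordinates (positions along the axis).
grid = np.arange(100, dtype=float)
studies = [[20.0, 55.0], [52.0], [58.0, 80.0]]
scores = ale_map(studies, grid)
print("voxel of maximal convergence:", int(np.argmax(scores)))  # near 55

In a real coordinate‐based meta‐analysis the grid is a 3‐D brain mask, the kernel width is derived from each study's sample size, and the resulting ALE map is thresholded against a null distribution obtained by randomly relocating foci, as implemented in tools such as GingerALE or NiMARE.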

Our findings demonstrate that the regions typically associated with the key nodes of the AON mediate social cognition abilities in early blind individuals on the basis of non‐visual inputs, as they do during actual observation of others in sighted individuals (Gardner et al., 2015). In particular, during others' processing we found consistent overlapping activations in early blind individuals in the middle temporal gyrus bilaterally, in the left fusiform gyrus, in the right superior temporal gyrus, and in the right inferior frontal gyrus. The bilateral activation of the middle temporal gyrus has been previously reported for the AON in a quantitative meta‐analysis of more than 100 studies in sighted individuals (Caspers et al., 2010). Moreover, similar results, including right frontal activation, are reported in a meta‐analysis investigating brain regions showing mirror properties across visual and auditory modalities in sighted individuals (Molenberghs, Cunnington, & Mattingley, 2012).

Of note, we found stronger consistent responses in the AON in the early blind as compared to sighted subjects. This is not surprising, since the studies we included in the meta‐analysis mostly employed auditory inputs (~90% of the studies, with only two studies using haptic stimuli), which are the typical stimuli on which blind individuals rely in social interactions, whereas sighted individuals are mainly guided by visual cues. This may also account for the different pattern of lateralization observed in the activation of the AON in blind and sighted individuals, with the former showing more consistent activations in the left hemisphere, particularly in the middle temporal gyrus and fusiform gyrus, while the reverse comparison highlighted activation of the right part of the middle and superior temporal cortex. The different pattern of hemispheric lateralization may depend on the high familiarity blind individuals have with recognizing human actions or emotions on the basis of auditory (and haptic) stimuli, with prior evidence suggesting that action familiarity is associated with increased activity in the left part of the AON (Gardner et al., 2015; Ricciardi et al., 2009).

An additional analysis carried out on the whole sample of participants (regardless of visual experience) confirmed the common activation in a right temporal cluster comprising the STG, the TPJ, and the MTG. This cluster appears to be consistently engaged in both groups during social processing, and activations in these regions likely play a more general role in the perception of socially relevant stimuli, one that is not bound to visual experience (Fairhall et al., 2017). These results fit with the involvement of the right STG and TPJ in a variety of social‐cognitive processes (Bahnemann, Dziobek, Prehn, Wolf, & Heekeren, 2010; Yang, Rosenblau, Keifer, & Pelphrey, 2015), such as biological motion perception (Beauchamp, Lee, Haxby, & Martin, 2003; Grossman et al., 2000; Peelen, Wiggett, & Downing, 2006), mentalizing (Schneider, Slaughter, Becker, & Dux, 2014; Wolf, Dziobek, & Heekeren, 2010), and emotion attribution to others on the basis of both visual and auditory/verbal information (e.g., Alba‐Ferrara, Ellison, & Mitchell, 2012; Ferrari, Schiavi, & Cattaneo, 2018; Gamond & Cattaneo, 2016; Lettieri et al., 2019; Redcay, Velnoskey, & Rowe, 2016; Sliwinska & Pitcher, 2018). These data suggest a two‐stage process in which the STS underpins an initial parsing of a stream of information, whether auditory or visual, into meaningful discrete elements, whose communicative meaning for decoding others' behavior and intentions involves more in‐depth analysis associated with increased activation in the TPJ node (Arioli & Canessa, 2019; Bahnemann et al., 2010; Redcay, 2008). Ethofer et al. (2006) showed that the pSTS serves as the entry point of the prosody‐processing system, feeding higher‐level social‐cognitive computations associated with activity in the action observation system (Gardner et al., 2015) as well as in the mentalizing system (Schurz, Radua, Aichhorn, Richlan, & Perner, 2014). Accordingly, using visual stimuli, Arioli et al. (2018) pointed to the pSTS as the input to the social interaction network, which includes key nodes of both the action observation and theory of mind networks. Thus, the STS/TPJ regions may represent a domain‐specific hub associated with the analysis of the meaning of others' actions, regardless of the stimulation modality, and highly interconnected with the action observation and mentalizing networks.

The lack of activation of the parietal cortex in the early blind, with the parietal cortex being another key node of the AON in sighted individuals, is possibly due to the low number of studies in our database specifically focused on hand representation (see Pellijeff, Bonilha, Morgan, McKenzie, & Jackson, 2006; 3/17, see Table 1). Indeed, Caspers et al. (2010) reported that only the observation of hand actions was consistently associated with activations within the parietal cortex, whereas the observation of non‐hand actions was not. Moreover, although Ricciardi et al. (2009) reported a parietal activation in blind participants during auditory presentation of hand‐executed actions, this activation was much less extensive (particularly in the superior parietal cortex) in the blind compared to sighted controls. Another possible explanation for the lack of parietal activation in the blind in our meta‐analysis (also possibly accounting for the results by Ricciardi et al., 2009) is that activation within the parietal part of the AON may be mainly driven by object‐related representations (Caspers et al., 2010), which are absent in several of the experiments included in the present analysis. Instead, our studies focused mainly on voice processing (6/17) and in part on face/body representation (4/17), which is probably responsible for the consistent activation we reported in the fusiform gyrus. Indeed, voice processing has been found to activate the fusiform face area in blind individuals (e.g., Fairhall et al., 2017).

The findings of this meta‐analysis suggest that the AON can develop despite the lack of any visual experience, with information acquired in other sensory modalities allowing an efficient representation of other individuals as agents with specific beliefs and intentions (vs. objects moved by physical forces). These results may explain why early blind individuals are able to interact efficiently in social contexts and to learn by imitating others (e.g., Gamond et al., 2017; Oleszkiewicz et al., 2017; Ricciardi et al., 2009). Our findings suggest that regions of the social brain may work on the basis of different sensory inputs, depending on which sensory modality is available. Moreover, our findings are consistent with the results of a recent meta‐analysis by Zhang et al. (2019) and with prior fMRI studies of blind individuals in the social domain (e.g., Bedny et al., 2009; Ricciardi et al., 2009), suggesting that brain regions consistently recruited for different functions in sighted individuals, such as the dorsal fronto‐parietal network for spatial function, the ventral occipito‐temporal network for object function, and, as shown here, the AON for social function, maintain their specialization despite the lack of normal visual experience. This observation on the “social blind brain” is in line with the current, more general perspective of the blind brain as one that undergoes functional reorganization due to the lack of visual experience, but whose large‐scale architecture appears to be significantly preserved (e.g., Ricciardi, Papale, Cecchetti, & Pietrini, 2020).

Interestingly, we did not find evidence for any consistent cross‐modal recruitment of the occipital cortex by social tasks in this meta‐analysis. Cross‐modal plasticity typically refers to activation of the occipital cortex of the early blind in response to input acquired in other sensory modalities, such as hearing and touch (for reviews, see Merabet & Pascual‐Leone, 2010; Singh, Phillips, Merabet, & Sinha, 2018; Voss, 2019), and may account (at least in part) for the superior perceptual abilities of blind subjects in the spared sensory modalities (e.g., Battal, Occelli, Bertonati, Falagiarda, & Collignon, 2020; Bauer et al., 2015). Alternatively, recruitment of the occipital cortex in the blind has been proposed to also subserve high‐level (cognitive) processing (e.g., Amedi, Raz, Pianka, Malach, & Zohary, 2003; Bedny, Pascual‐Leone, Dodell‐Feder, Fedorenko, & Saxe, 2011; Lane, Kanjlia, Omaki, & Bedny, 2015), suggesting that cortical circuits thought to have evolved for visual perception may come to participate in abstract and symbolic higher‐cognitive functions (see Bedny, 2017). Indeed, recent evidence has shown that during high‐level cognitive tasks (i.e., memory, language, and executive control tasks) there is increased connectivity between the occipital cortex and associative cortex in lateral prefrontal, superior parietal, and mid‐temporal areas (Abboud & Cohen, 2019), with these regions also possibly involved in social perception (Caspers et al., 2010). In line with this, we would have expected social tasks to drive activations in the occipital cortex. This was not the case. The only region that showed a sort of cross‐modal response was the fusiform face area, in the ventral stream, probably driven by the high number of studies in our meta‐analysis focusing on voice processing (cf. Holig et al., 2014; von Kriegstein, Kleinschmidt, Sterzer, & Giraud, 2005). In this regard, it is also worth noting that haptic perception by blind individuals of facial expressions and hand shapes (Kitada et al., 2013, 2014) as well as whole‐body shape recognition via a visual‐to‐auditory sensory substitution device (SSD; Striem‐Amit & Amedi, 2014) led to activations in face‐ and body‐dedicated circuits in the fusiform gyrus, showing that these dedicated circuits develop even in the absence of normal visual experience. Our meta‐analysis shows that processes related to the representation of others do not recruit the occipital cortex in the early blind, suggesting that, differently from other cognitive tasks, social tasks may be mediated by higher‐level regions without the need to recruit additional occipital resources. Even if this might be related to the experimental heterogeneity that we highlighted above, the lack of consistent recruitment of the occipital cortex for social tasks contributes to a better understanding of the functional role of “visual” areas in the blind brain.

In conclusion, our findings support the view that the brain of early blind individuals is functionally organized in the same way as the brain of sighted individuals, although relying on different types of input (auditory and haptic) (see Bedny et al., 2009; Ricciardi et al., 2009). In the social domain, this may have important implications for educational programs for blind children. Early blindness may indeed predispose individuals to features of social isolation, including autism (e.g., Hobson et al., 1999; Jure, Pogonza, & Rapin, 2016). It is therefore important to develop training administration guidelines specifically for persons with visual impairment (Hill‐Briggs, Dial, Morere, & Joyce, 2007). In this regard, an interesting approach would be to develop ad hoc auditory/haptic virtual reality social cognition training programs for children with blindness or severe visual impairment, as already employed with autistic children and young adults (e.g., Didehbani, Allen, Kandalaft, Krawczyk, & Chapman, 2016; Kandalaft, Didehbani, Krawczyk, Allen, & Chapman, 2013). Consistent evidence suggests that audio‐based virtual environments may be effective for the transfer of navigation skills in the blind (Connors, Chrastil, Sanchez, & Merabet, 2014; Sanchez & Lumbreras, 1999), and haptic virtual perception may be a valid and effective assistive technology for the education of blind children in domains like math learning (e.g., Espinosa‐Castaneda & Medellin‐Castillo, 2020). This approach, especially audio‐based virtual environments, may thus be extended to the social domain to allow the safe and non‐threatening practice of particular social skills in an educational setting. In this respect, and considering the importance for visually impaired children of studying in a mainstream school (e.g., Davis & Hopwood, 2007; Parvin, 2015), school‐based social‐cognitive interventions targeting the social participation of children with blindness or severe visual impairment would be particularly critical, with teachers and peers involved in responding to and reinforcing interactions initiated by blind children. A detailed description of the neuro‐cognitive processes underlying social cognition skills in blind individuals is thus critical to tailor training protocols aimed at targeting specific neuro‐cognitive functions. This may also have a translational clinical impact on the development of non‐invasive advanced SSDs able to translate social cues that are only visually available (such as facial expressions, gestures, and body language) into auditory or tactile feedback, in terms of more abstract conceptual signals, that can be processed by the intact social brain of visually deprived individuals (see Cecchetti, Kupers, Ptito, Pietrini, & Ricciardi, 2016; Striem‐Amit & Amedi, 2014). These devices may help blind individuals in their interactions in both the physical and the virtual (e.g., meetings via Skype, Meet, or other platforms) social world.
