Sunday, January 1, 2023

Most people uncritically swallowed the fake diagnosis of their true selves delivered by a supposedly transformative but bogus new brain-reading machine

Emulating future neurotechnology using magic. Jay A. Olson et al. Consciousness and Cognition, Volume 107, January 2023, 103450. https://doi.org/10.1016/j.concog.2022.103450

Abstract: Recent developments in neuroscience and artificial intelligence have allowed machines to decode mental processes with growing accuracy. Neuroethicists have speculated that perfecting these technologies may result in reactions ranging from an invasion of privacy to an increase in self-understanding. Yet, evaluating these predictions is difficult given that people are poor at forecasting their reactions. To address this, we developed a paradigm using elements of performance magic to emulate future neurotechnologies. We led 59 participants to believe that a (sham) neurotechnological machine could infer their preferences, detect their errors, and reveal their deep-seated attitudes. The machine gave participants randomly assigned positive or negative feedback about their brain’s supposed attitudes towards charity. Around 80% of participants in both groups provided rationalisations for this feedback, which shifted their attitudes in the manipulated direction but did not influence donation behaviour. Our paradigm reveals how people may respond to prospective neurotechnologies, which may inform neuroethical frameworks.


Introduction

Novelist Arthur C. Clarke (2013) famously asserted that “any sufficiently advanced technology is indistinguishable from magic”. But the reverse can also be true: magic tricks can be made indistinguishable from advanced technology. When paired with real scientific equipment, magic techniques can create compelling illusions that allow people to experience prospective technologies first-hand. Here, we demonstrate that a magic-based paradigm may be particularly useful to emulate neurotechnologies and potentially inform neuroethical frameworks.

Broadly defined, neurotechnology involves invasive or non-invasive methods to monitor or modulate brain activity (Goering et al., 2021). Recent developments in neural decoding and artificial intelligence have made it possible, in a limited fashion, to infer various aspects of human thought (Ritchie et al., 2019). The pairing of neural imaging with machine learning has allowed researchers to decode participants’ brain activity in order to infer what they are seeing, imagining, or even dreaming (Horikawa et al., 2013, Horikawa and Kamitani, 2017). For example, one study identified the neural correlates of viewing various face stimuli; EEG data from a single participant could be used to determine which of over one hundred faces was being presented (Nemrodov et al., 2018). Other studies have used fMRI brain activity patterns to infer basic personality traits after exposing people to threatening stimuli (Fernandes et al., 2017). Similar decoding methods have also been used to determine what verbal utterances participants were thinking about in real time (Moses et al., 2019).
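To make the decoding approach concrete, here is a minimal sketch of such a pipeline: a cross-validated classifier trained to recover stimulus identity from per-trial brain-activity features. Everything in it is an illustrative assumption; the data are synthetic random numbers standing in for preprocessed EEG features, and the dimensions (64 channels, 50 time points, four stimuli) are arbitrary, so accuracy will sit at chance rather than at the above-chance levels reported in studies like Nemrodov et al. (2018).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for preprocessed EEG: 400 trials, each flattened from
# a hypothetical 64 channels x 50 time points, with one of 4 stimulus labels.
n_trials, n_features, n_classes = 400, 64 * 50, 4
X = rng.standard_normal((n_trials, n_features))  # per-trial feature vectors
y = rng.integers(0, n_classes, size=n_trials)    # which stimulus was shown

# Standardise features, then fit a linear classifier; 5-fold cross-validation
# estimates how well stimulus identity can be decoded from the signal.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(decoder, X, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = {1 / n_classes:.2f})")

On real recordings, above-chance cross-validated accuracy of this kind is what licenses the claim that a mental content is "decodable" from brain activity.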

Other recent developments have enabled researchers to decode information that participants are not even aware of themselves. One fMRI study decoded the semantic category of words (e.g., animal or non-animal) presented below the level of awareness (Sheikh et al., 2019). Researchers have used the same method to infer which of two images participants would choose several seconds before the participants themselves were aware of making this decision (Koenig-Robert and Pearson, 2019).

Although these findings are impressive, brain reading remains in its infancy. The information decoded from brain activity is often relatively rudimentary and requires cooperation from participants. Brain reading is further limited by the cost and technical expertise required to design and operate the imaging machines. Nevertheless, given that brain reading has the potential to become a powerful and commonplace technology in the future (Yuste et al., 2017), it is important to avoid the delay fallacy wherein discussions of the implications of emerging technologies lag behind the technological frontier (Mecacci and Haselager, 2017, van de Poel and Royakkers, 2011).

Ethicists have accordingly started to speculate about the potential ramifications of various neurotechnologies. Future developments in neural decoding may carry implications across several domains including personal responsibility, autonomy, and identity (Goering et al., 2021, Ryberg, 2017). For example, brain reading could be used to predict the risk of recidivism (Ienca and Andorno, 2017) or to influence attributions of criminal responsibility by inferring one’s mental state at the time of the crime (Meynen, 2020). Regarding autonomy, employers could use future brain reading to screen employees for undesirable characteristics. Brain reading also has the potential to undermine personal identity by changing how we think about ourselves. Some people may see feedback from neurotechnology as a more objective and accurate representation of personality traits, biases, or beliefs than anything accessible through introspection (cf. Berent and Platt, 2021). In this way, technology may come to trump subjective experience in our understanding of who we are.

Although neurotechnology could potentially boost self-understanding, many people find the prospect of brain reading intrusive (Richmond, 2012); it violates the long-standing expectation that one’s thoughts are private (Moore, 2016). The implications of this potential loss of privacy, however, remain unclear. Thomas Nagel (1998, p. 4) argues that such privacy is fundamental to a properly functioning society: “the boundary between what we reveal and what we do not, and some control over that boundary, are among the most important attributes of our humanity.” Conversely, aside from nefarious uses such as government control, Lippert-Rasmussen (2016) argues that access to others’ thoughts could offer an additional source of information to foster intimacy and authenticity. In his view, “the gaze of others would become much less oppressive if everyone’s inner lives were transparent to everyone else” (p. 230). The speculated consequences of future neurotechnology thus show considerable range.

Importantly, these consequences may not remain merely speculative. Given the widespread and complex implications of future brain reading technologies, ethicists have proposed forward-thinking policies such as the adoption of “neurorights” to protect citizens (Baselga-Garriga et al., 2022, Yuste et al., 2017). These efforts to safeguard people from the uses and misuses of brain reading depend, in part, on our ability to anticipate people’s future reactions. More caution is needed, for example, if people experience brain reading as an invasion of privacy rather than as a novel way to promote authenticity.

However, simply asking people how they would react to future neurotechnologies may be insufficient. People often overestimate the intensity and duration of their reactions to future events (Dillard et al., 2020, Gilbert et al., 1998) and have difficulty explaining their attitudes reliably (Hall et al., 2012). One study found that when people read vignettes about neurotechnology predicting and influencing behaviour, they interpreted the situations based on their current metaphysical assumptions, even when those assumptions were contradicted by information in the vignettes (Rose et al., 2015). Reasoning hypothetically about a future machine may thus have limited validity compared to the concrete experience of having a machine control one’s mind. “Wizard of Oz” prototyping offers a potential solution (Kelley, 1984): an apparently working prototype of a future product is fabricated and then tested in real-world scenarios, eliciting more accurate responses from users.

We developed a Wizard of Oz-style paradigm to emulate prospective neurotechnologies based on elements of performance magic. Indeed, many of the abilities enabled by future neurotechnologies can be mimicked using magic tricks. Most relevant is the branch of performance magic known as mentalism, which involves mimicking abilities such as mind reading, thought insertion, and prediction. A brain scanner decoding a participant’s thoughts resembles a magician reading the mind of a spectator, and a device that inserts thoughts to affect behaviour resembles magicians influencing the audience’s decisions without their awareness (Olson et al., 2015). In this way, magic could create the compelling illusion of future neurotechnological developments before they are available.

We have previously demonstrated the believability of combining magic with neurotechnology by convincing university students that a brain scanner could both read and influence their thoughts (Olson et al., 2016). In a condition designed to simulate mind reading, participants chose an arbitrary two-digit number while inside a sham MRI scanner. The machine ostensibly decoded their brain activity while they focused on the number. A simple magic trick allowed the experimenter to demonstrate that the machine’s decoded number matched the one that the participant had previously chosen. The same magic trick was then used to simulate thought insertion. In this mind-influencing condition, participants were again instructed to think of a number. Instead of being told that the machine would decode their brain activity, they were told that the machine would manipulate their brain through “electromagnetic fluctuations”. The magic trick made it appear as if the machine had randomly chosen a number and then influenced participants to choose it. In this condition, participants felt less control over their decisions and reported a range of experiences, including hearing an ominous voice controlling their choices.

By combining neuroscientific-looking props with magic, we were thus able to convince educated participants to both believe in and directly experience a “future” machine that could accurately read and influence their decisions. However, given the relatively inconsequential target of the brain reading — arbitrary number choices — it is difficult to assess how participants would react to having the machine decode thoughts that are more meaningful or private, including those relevant to neuroethics.

Here, we extend our method to create a future context in which brain reading is powerful enough to decode information central to the self, such as political attitudes. We focused on attitudes towards charity because people often believe that such moral values characterise one’s “true self” (Strohminger et al., 2017). According to the lay understanding, this true self is a more private and accurate version of the self that is indicative of one’s core identity (Schlegel et al., 2011). We aimed to manipulate this core aspect of the self in order to assess reactions to more personal and ethically relevant domains. To do so, we emulated a neurotechnological machine that could identify people’s attitudes towards charity better than their own introspection. First, we aimed to explore how participants would react to a potential invasion of mental privacy by having a machine seemingly infer their consumer preferences and political attitudes. Second, we explored the crucial issue of people’s trust in neurotechnology by simulating a scenario in which the machine could give personal feedback that is inconsistent with what participants report. Finally, we investigated how people might adapt their own beliefs based on this discrepant feedback. How might people react to this dissonance between their own subjective feelings and the machine’s seemingly objective assessment? Could such brain reading supersede one’s own judgement? We present a novel method to begin answering these questions.

