Friday, May 1, 2020

Understanding Brain Death

Understanding Brain Death. Robert D. Truog, Erin Talati Paquette, Robert C. Tasker. JAMA, May 1, 2020. doi:10.1001/jama.2020.3593

The concept of brain death, or the determination of death by neurological criteria, was first proposed by a Harvard committee in the United States in 1968,1 and then adopted into the Uniform Determination of Death Act (UDDA) in 1981.2 Although the UDDA was widely accepted and endorsed by medical professional organizations, in recent years the concept has come under greater scrutiny and is increasingly the focus of legal challenges. The most urgent problem is that current diagnostic standards do not satisfy the wording of the law. The UDDA defines brain death as the “irreversible cessation of all functions of the entire brain.” Yet, it is now widely acknowledged that some patients who meet the current diagnostic standards may retain brain functions that are not included in the required tests, including hypothalamic functioning.3 Until the UDDA is revised to be more specific about which functions must be lost to satisfy the definition (such as, for example, consciousness and the capacity to breathe), current medical practice will not be in alignment with the legal standard.

Fixing this problem will require resolution of a longstanding debate about what brain death actually means. Beecher,4 the chair of the 1968 Harvard committee, clearly thought that brain death was a new and distinct definition of death, different from biological death. He wrote that “when consciousness is permanently lost… this is the ‘moment’ of death.”4 But in 1981, the authors of the UDDA completely rejected this view in proposing both a cardiorespiratory and a neurological standard for determining death, insisting that “the use of two standards in a statute should not be permitted to obscure the fact that death is a unitary phenomenon.”2(p7) To support this position, the UDDA authors pointed to evidence that the brain is the master integrator of the body’s functions, such that once the brain is severely damaged, bodily functions deteriorate, with cardiac arrest and biological death invariably following the injury within several days. This unified view has continued to be the position of most experts, with one asserting that “Globally, [physicians] now invariably equate brain death with death and do not distinguish it biologically from cardiac arrest.”5

In recent years, this view has been challenged by multiple reports of cases of prolonged biological survival in patients who meet criteria for brain death. One well-known case is that of Jahi McMath, a teenaged girl who survived biologically for almost 5 years after being diagnosed as brain dead following surgery at age 13 years. During most of this time, she was cared for at home, continuing to grow and develop, along with the onset of menarche. In another case, a boy diagnosed as brain dead from meningitis at age 4 years survived biologically for more than 20 years. At autopsy, his brain was completely calcified, with no identifiable neural tissue, either grossly or microscopically. Recently, a woman was found to be 9 weeks pregnant when she was diagnosed as brain dead at age 28 years; she was maintained for several months until she delivered a healthy baby, after which multiple organs were donated.

These cases are relatively rare because brain death is typically a self-fulfilling prophecy; biological death usually quickly follows the diagnosis, either through organ donation or withdrawal of the ventilator. But in cases for which organ support is continued, as when a brain-dead woman is pregnant or when a court order requires physicians to continue treatment, prolonged biological survival may occur. As counterintuitive as it may seem, when functions such as breathing and nutrition are medically supported, the brain is not essential for maintaining biological integration and functioning.

If brain death is neither the absence of all brain function nor the biological death of the person, then what is it? Current tests for determining brain death focus on establishing 3 criteria: unconsciousness, apnea, and irreversibility of these 2 states. First, unconsciousness is diagnosed by demonstration of the absence of response to painful stimuli and absence of brainstem reflexes. While individual brainstem reflexes are irrelevant to whether the patient is alive or dead (for example, people can live normal lives with nonresponsive pupils), demonstrating that the brainstem is nonfunctional is an indirect way of inferring that the reticular activating system is nonfunctional. This neural network in the brainstem is essential for maintaining wakefulness, and thereby is a necessary substrate for consciousness. Second, apnea is diagnosed by removing patients from the ventilator for several minutes and demonstrating that they make no effort to breathe despite a high level of carbon dioxide in the blood. Third, irreversibility is assumed if the cause of the injury is known, no reversible causes can be identified, and the patient’s condition does not change over several hours. Collectively, the testing for brain death is designed to show that the patient is in a state of “irreversible apneic unconsciousness.”

Irreversible apneic unconsciousness is not the same as biological death. But should patients in this condition be considered to be legally dead? This is a complex question that hinges on metaphysical and moral views about the necessary and sufficient characteristics of a living person. The British position on this point is interesting and relevant. While the United Kingdom does not have a law on brain death, the Code of Practice of the Academy of Medical Royal Colleges explicitly endorses the view that irreversible apneic unconsciousness should be recognized as death.6 The Code states, “Death entails the irreversible loss of those essential characteristics which are necessary to the existence of a living human person and, thus, the definition of death should be regarded as the irreversible loss of the capacity for consciousness, combined with irreversible loss of the capacity to breathe.”6 Contrary to the US position, the Code does not insist that brain death is the same as biological death. It states that while “the body may continue to show signs of biological activity … these have no moral relevance to the declaration of death.”6 Following Beecher,4 the British consider brain death to be a moral determination that is distinct from biological death, based on a particular view about what constitutes the essential characteristics of a human person.

One option for reconciling the discrepancy between the UDDA and the current diagnostic standards for brain death in the United States would be to revise the UDDA along the lines of the British model. This would align the legal definition of death with current diagnostic standards. It would, however, also raise questions about how to respond to individuals who reject the concept of brain death. Even though there is nothing irrational or unreasonable about preferring a biological definition of death over other moral, religious, or metaphysical alternatives, there are concerns about the potential effects of allowing citizens to opt out of being declared brain dead. The experience in New Jersey may be relevant to this question because for more than 25 years that state has had a law permitting citizens to opt out of the determination of death by neurological criteria, and this law has not had any documented influence on either organ donation or intensive care unit utilization.7

Another potential benefit of adopting the British approach would be to facilitate improvement and refinements in the tests that are used. It is remarkable that the core tests in use today to diagnose brain death are virtually the same as those first proposed in 1968, and the authors of guidelines have commented on the “severe limitations in the current evidence base” for the determination of brain death.8 In particular, concerns have been raised about the irreversibility of the diagnosis and the certainty of the determination of unconsciousness. The latter is particularly important because studies have suggested that the behavioral bedside tests used to diagnose unconsciousness in the vegetative state may be wrong as much as 40% of the time.9 In addition, the safety of the apnea test has been questioned,10 and alternatives that do not require acutely raising the level of carbon dioxide in the patient’s blood to potentially dangerous levels could be advantageous. Incorporating modern imaging techniques and new diagnostic technologies into the routine testing for brain death could give more confidence to the claim that the patient is unconscious, provide stronger evidence of irreversibility, and reduce concerns about the safety of the tests.

Until the UDDA or individual state laws are revised, lawsuits are likely to continue because current tests do not fulfill the language of the law. This challenge provides an opportunity to clarify the meaning of brain death, better educate the public about the diagnosis, and improve the tests to make them as safe and reliable as possible.


Full text, references, etc., at the DOI above

For China as a whole, the longest warm period during the last 2000 years occurred in the 10th–13th centuries, although there were multi-decadal cold intervals in the middle to late 12th century

Multi-scale temperature variations and their regional differences in China during the Medieval Climate Anomaly. Zhixin Hao, Maowei Wu, Yang Liu, Xuezhen Zhang & Jingyun Zheng. Journal of Geographical Sciences, volume 30, pages 119–130, Jan 6, 2020. https://link.springer.com/article/10.1007/s11442-020-1718-7

Abstract: The Medieval Climate Anomaly (MCA, AD 950–1250) is the most recent warm period lasting several hundred years and is regarded as a reference scenario when studying the impact of, and adaptation to, global and regional warming. In this study, we investigated the characteristics of temperature variations on decadal-to-centennial scales during the MCA for four regions of China (Northeast, Northwest, Central-east, and the Tibetan Plateau), based on high-resolution temperature reconstructions and related warm-cold records from historical documents. The ensemble empirical mode decomposition method was used to analyze the time series. The results showed that for China as a whole, the longest warm period during the last 2000 years occurred in the 10th–13th centuries, although there were multi-decadal cold intervals in the middle to late 12th century. However, the beginning and ending decades of the MCA, and its decadal-scale warm peaks and phases, were not consistent across regions. On the inter-decadal scale, regional temperature variations were similar from 950 to 1130; from 1130 to 1250, their amplitudes became smaller and their phases did not agree well. On the multi-decadal to centennial scale, all four regions began to warm in the early 10th century and experienced two cold intervals during the MCA; however, while Northwest and Central-east China were in step with each other, the warm periods in Northeast China and the Tibetan Plateau ended about 40–50 years earlier. On the multi-centennial scale, the mean temperature difference between the MCA and the Little Ice Age was significant in Northeast and Central-east China but not in Northwest China or the Tibetan Plateau. Compared with the mean temperature of the 20th century, comparable warmth during the MCA was found in Central-east China, slight cooling in Northeast China, and significantly lower temperatures in Northwest China and the Tibetan Plateau.

Sexual functioning was more strongly associated with self-esteem than were safe sex & sexual consent, and sexual permissiveness was unassociated with self-esteem

From 2019... Self-esteem and sexual health: a multilevel meta-analytic review. John K. Sakaluk, James Kim, Emily Campbell, Allegra Baxter & Emily A. Impett. Health Psychology Review, Volume 14, Issue 2 (2020), pages 269–293. Jun 17, 2019. https://doi.org/10.1080/17437199.2019.1625281

ABSTRACT: Sexual health reflects physical, emotional, mental, and social elements of sexual well-being. Researchers often position self-esteem (i.e., global or domain-specific evaluations of self) as a key correlate of sexual health. We present the first comprehensive meta-analysis of correlations between self-esteem and sexual health. Our synthesis includes 305 samples from 255 articles, containing 870 correlations from 191,161 unique participants. The overall correlation between self-esteem and sexual health was positive and small (r = .12, 95% CI: .09, .15), characterised by considerable heterogeneity and robust to different corrections. Sexual functioning (r = .27, 95% CI: .21, .34) was more strongly associated with self-esteem than were safe sex (r = .10, 95% CI: .07, .13) and sexual consent (r = .19, 95% CI: .13, .24), and sexual permissiveness was unassociated with self-esteem (r = −.02, 95% CI: −.05, .008). Most moderators were nonsignificant, although moderator data were inconsistently available, and samples were North American-centric. Evidence of publication bias was inconsistent, and study quality, theory usage, and background research were not reliably associated with study outcomes. Our synthesis suggests a need for more specific theories of self-esteem corresponding to unique domains of sexual health, highlighting a need for future theorising and research.

KEYWORDS: meta-analysis, self-esteem, safe sex, sexual consent, sexual health, sexual permissiveness, sexual functioning
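To get a rough sense of scale for the reported overall correlation, a naive 95% confidence interval via the Fisher r-to-z transformation can be computed in a few lines. This is an illustrative sketch only: it treats all 191,161 participants as one independent sample, whereas the authors used a multilevel meta-analysis; the function name is hypothetical.

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """Naive 95% CI for a single correlation via the Fisher r-to-z
    transformation, assuming n independent observations (hypothetical
    helper; the paper's multilevel model is more appropriate)."""
    z = math.atanh(r)                     # Fisher transform of r
    se = 1.0 / math.sqrt(n - 3)           # standard error on the z scale
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)   # back-transform to the r scale

# Treating all participants as one independent sample:
lo, hi = fisher_ci(0.12, 191161)          # roughly (0.116, 0.124)
```

The naive interval is far narrower than the reported (.09, .15), which illustrates why a multilevel model that accounts for between-study heterogeneity and the clustering of correlations within samples is the right analysis here.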


Rolf Degen summarizing: To put oneself in the past or future with the power of thought engages largely the same brain areas, but future mental time travel draws particularly on the right hippocampus

The radiation of autonoetic consciousness in cognitive neuroscience: A functional neuroanatomy perspective. Amnon Dafni-Merom, Shahar Arzy. Neuropsychologia, April 30 2020, 107477. https://doi.org/10.1016/j.neuropsychologia.2020.107477

Highlights
• The concepts of autonoesis and mental time travel inspired key memory-related theories.
• These include constructive episodic simulation, scene construction and self-projection.
• Meta-analysis revealed shared and specific activations for these theories.
• Mental travel in the social, temporal and spatial domains shares activations within the DMN.

Abstract: One of Endel Tulving's most important contributions to memory research is the coupling of self-knowing consciousness (or “autonoesis”) with episodic memory. According to Tulving, autonoetic episodic memory enables the uniquely human neurocognitive operation of “mental time travel”, which is the ability to deliberately “project” oneself to a specific time and place to remember personally experienced events that occurred in the past and simulate personal happenings that may occur in the future. These ideas ignited an explosion of research in the years to follow, leading to the development of several related concepts and theories regarding the role of the human self in memory and prospection. In this paper, we first explore the expansion of the concept of autonoetic consciousness in the cognitive neuroscience literature as well as the formulation of derivative concepts and theories. Subsequently, we review such concepts and theories including episodic memory, mental time travel, episodic simulation, scene construction and self-projection. In view of Tulving's emphasis on the temporal and spatial context of the experience, we also review the cognitive operations involved in “travel” (or “projection”) in these domains as well as in the social domain. We describe the underlying brain networks and processes involved, their overlapping activations and involvement in giving rise to the experience. Meta-analysis of studies investigating the underlying functional neuroanatomy of these theories revealed main overlapping activations in sub-regions of the medial prefrontal cortex, the precuneus, retrosplenial cortex, temporoparietal junction and medial temporal lobe. Dissection of these results makes it possible to infer and quantify the interrelations between the different theories, as well as their relation to Tulving's original ideas.

Keywords: Autonoesis; Episodic memory; Scene construction; Self-projection; Constructive episodic simulation; Mental time travel; Mental lines


Conventional swearing: 32% increase in pain threshold & 33% increase in pain tolerance; new “swear” words, “fouch” & “twizpipe,” were rated as more emotional & humorous but did not affect pain threshold or tolerance

Swearing as a Response to Pain: Assessing Hypoalgesic Effects of Novel “Swear” Words. Richard Stephens and Olly Robertson. Front. Psychol., April 30 2020. https://doi.org/10.3389/fpsyg.2020.00723

Abstract: Previous research showing that swearing alleviates pain is extended by addressing emotion arousal and distraction as possible mechanisms. We assessed the effects of a conventional swear word (“fuck”) and two new “swear” words identified as both emotion-arousing and distracting: “fouch” and “twizpipe.” A mixed sex group of participants (N = 92) completed a repeated measures experimental design augmented by mediation analysis. The independent variable was repeating one of four different words: “fuck” vs. “fouch” vs. “twizpipe” vs. a neutral word. The dependent variables were emotion rating, humor rating, distraction rating, cold pressor pain threshold, cold pressor pain tolerance, pain perception score, and change from resting heart rate. Mediation analyses were conducted for emotion, humor, and distraction ratings. For conventional swearing (“fuck”), confirmatory analyses found a 32% increase in pain threshold and a 33% increase in pain tolerance, accompanied by increased ratings for emotion, humor, and distraction, relative to the neutral word condition. The new “swear” words, “fouch” and “twizpipe,” were rated as more emotional and humorous than the neutral word but did not affect pain threshold or tolerance. Changes in heart rate and pain perception were absent. Our data replicate previous findings that repeating a swear word at a steady pace and volume benefits pain tolerance, extending this finding to pain threshold. Mediation analyses did not identify a pathway via which such effects manifest. Distraction appears to be of little importance but emotion arousal is worthy of future study.

Discussion

This study contributes to the psychology literature on swearing in the context of pain (Stephens et al., 2009; Stephens and Umland, 2011; Philipp and Lombardo, 2017; Robertson et al., 2017) as the first attempt to create new “swear” words and assess some of their psychological properties. Our experiment assessed the effects of repeating three different words – a conventional swear word (“fuck”) and two new “swear” words (“fouch” and “twizpipe”) – on pain perception and tolerance, compared with a neutral word control condition (a word to describe a table). We ran a well-powered experiment with a sample consisting of 92 native English speakers. We used an ice-cold water hand immersion task known as the cold pressor procedure. This provides a controlled stimulus that is painful but not harmful and yields scores for pain threshold (time at which pain is reported) and pain tolerance (time at which the hand is removed). We also recorded heart rate as well as ratings of pain perception, emotion, humor, and distraction. The order in which participants completed the conditions (“fuck,” “fouch,” “twizpipe,” and neutral word) was randomized to guard against order effects. Pain Catastrophizing and Fear of Pain scores were gathered to help understand sample characteristics. The scores were similar to our previous data (Stephens and Umland, 2011) in which the overall mean score for Pain Catastrophizing was 25.30 (SD = 9.64) and for Fear of Pain was 87.45 (SD = 16.43). This indicates that our sample may be considered typical for these variables and, as such, that these variables are unlikely to have unduly influenced the pain outcomes.
Hypotheses (i) to (iii) were put forward as manipulation checks to ensure that the made-up “swear” words had the desired properties in terms of the emotion, humor, and distraction ratings. Hypothesis (i) that emotion ratings would be greater for “fouch” vs. neutral word was supported, and hypothesis (ii) that humor and distraction ratings would be greater for “twizpipe” vs. neutral word was partially supported in that the humor rating was greater for “twizpipe.” Interestingly, both made-up “swear” words showed higher ratings for emotion and humor compared with the neutral word. Hypothesis (iii) that emotion, humor, and distraction ratings would be greater for “fuck” vs. neutral word was supported. Our tests of hypotheses (i) to (iii) demonstrate that our manipulation of creating new “swear” words was successful in that “fouch” and “twizpipe” were able to evoke some of the properties of swearing, in terms of emotion rating and humor. This was not the case for distraction, however, since only “fuck” was found to have a raised distraction rating compared with the neutral word. Given that both new “swear” words had demonstrated potential to influence pain perception via increased emotion ratings and/or distracting a person from the pain via increased humor ratings, it seemed appropriate to continue with the analyses and test whether the new “swear” words had any effect on the pain outcomes. We also note that “fuck” was rated as humorous in this context, consistent with the findings of Engelthaler and Hills (2018), who found the word “fuck” was rated in the top 1% of funniest words when 5000 English words were presented one at a time.
Hypotheses (iv) to (vii) were put forward as tests of whether the conventional swear word and the new “swear” words would show hypoalgesic effects and associated changes in heart rate, as found previously. Hypothesis (iv), that cold pressor pain onset latency (pain threshold) would be increased for “fuck,” “fouch,” and “twizpipe” vs. neutral word, was supported for “fuck” but not for “fouch” or “twizpipe.” Hypothesis (v), that cold pressor pain tolerance latency would be increased for “fuck,” “fouch,” and “twizpipe” vs. neutral word, was also supported for “fuck” but not for “fouch” or “twizpipe”. Together, these findings extend previous research on swearing and pain by replicating, in a pre-registered study, the beneficial effect of swearing on pain tolerance and showing that swearing has an additional beneficial effect on pain threshold (onset latency), a behavioral pain measure that has not previously been assessed.
Regarding the new “swear” words, our confirmatory analyses showed no beneficial effects for pain threshold and tolerance. On the suggestion of a peer reviewer, we ran exploratory equivalence tests assessing whether the effect sizes for these words were within a range considered to be negligible. These analyses confirmed the absence of a beneficial effect for pain threshold and tolerance beyond a smallest effect size of interest based on the conservatively small estimate of dz = 0.3 entered into the power calculation. That these new “swear” words had no effect on pain threshold and tolerance is not altogether surprising. While it is not properly understood how swear words gain their power, it has been suggested that swearing is learned during childhood and that aversive classical conditioning contributes to the emotionally arousing aspects of swear word use (Jay, 2009; Tomash and Reed, 2013). This suggests that how and when we learn conventional swear words is an important aspect of how they function. Clearly, the new “swear” words utilized in the present study were not learned during childhood and so there was no possibility that this aspect could have had an influence. On the other hand, “fouch” and “twizpipe” were chosen because they had potential to mirror some properties of conventional swearing. Like the swear word, these words were rated as more emotion-evoking and humorous than the neutral word control condition. Nevertheless, these properties did not facilitate pain alleviation effects, suggesting that surface properties of swear words (such as how they sound) do not explain the hypoalgesic effects of swearing. An overall absence of pain alleviation effects for the new “swear” words in the present study would be expected based on Jay’s (2009) childhood aversive classical conditioning theory.
There is little evidence for this theory other than a low powered experiment (N = 26) finding that participants reporting a higher frequency of punishment for swearing as children showed an increased skin conductance response when reading swear words, compared with participants reporting a lower frequency of punishment for swearing (Tomash and Reed, 2013). To investigate this theory further, future research should aim to verify the frequency with which such aversive classical conditioning events occur in childhood and assess the relationship between prior punishment for swearing and autonomic arousal in an adequately powered design.
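The equivalence tests mentioned above (two one-sided tests against a smallest effect size of interest of dz = 0.3) can be sketched as follows. This is a simplified illustration using a normal approximation and a rough standard error for dz, not the authors' exact procedure; the function name is hypothetical.

```python
import math
from statistics import NormalDist

def tost_paired(dz, n, sesoi=0.3, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence on a paired effect
    size dz, using a normal approximation (hypothetical helper; the
    authors' exact test may use t distributions and a refined SE)."""
    se = 1.0 / math.sqrt(n)                    # approximate SE of dz
    nd = NormalDist()
    p_lower = 1 - nd.cdf((dz + sesoi) / se)    # H0: effect <= -sesoi
    p_upper = nd.cdf((dz - sesoi) / se)        # H0: effect >= +sesoi
    # Equivalence is claimed only if BOTH one-sided nulls are rejected
    return max(p_lower, p_upper) < alpha

tost_paired(0.05, 92)   # True: a dz of 0.05 in n=92 lies within ±0.3
tost_paired(0.29, 92)   # False: an effect as large as ±0.3 cannot be ruled out
```

The logic mirrors the exploratory analysis described above: a tiny observed effect in 92 participants can be declared negligible relative to dz = 0.3, while an observed effect near the boundary cannot.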
Hypothesis (vi), that pain perception would be decreased for “fuck,” “fouch,” and “twizpipe” vs. neutral word, was not supported. We should not be surprised at the lack of differences for pain perception as this may indicate that participants base behavioral decisions of reporting pain onset and removing the hand on similar perceived pain levels, albeit levels that have been modified by repeating a swear word. On that basis we suggest that measuring subjective pain perception is of limited usefulness in future studies assessing hypoalgesic effects of swearing where behavioral measures such as the cold pressor procedure are employed.
Hypothesis (vii), that change from resting heart rate would be increased for “fuck” and “fouch” vs. neutral word, was not supported. The lack of heart rate differences across conditions is at odds with previous studies which have shown elevated heart rate for swearing versus a neutral word (Stephens et al., 2009; Stephens and Umland, 2011). This may be due to the design of the present study in which participants completed four consecutive word repetition/cold pressor immersion conditions rather than two, as previously. Repeated presentations of similar tasks, as well as repeated exposure to aversive stimuli, have been found to result in blunted cardiovascular stress reactivity (Hughes et al., 2018). Blunted cardiovascular stress reactivity refers to the reduction in cardiovascular response to acute physiological or psychological stress (Brindle et al., 2017). It seems reasonable to suggest that repeated exposure to cold pressor-mediated acute pain may have induced cardiovascular blunting.
In the absence of clear autonomic responses to swearing, we assessed the exploratory hypothesis (viii) that the effects of swearing on pain tolerance would be mediated by one or more psychological variables, in the form of the emotion, humor, or distraction rating scores. However, none of the ratings showed evidence of mediation, with 95% confidence intervals for humor and distraction being approximately symmetrically balanced across the origin. The latter effect is of interest because swearing in the context of pain is often characterized as a deliberate strategy for distraction, and distraction is recognized as being an effective psychological means of influencing descending pain inhibitory pathways (Edwards et al., 2009). While swearing was rated as distracting (more so than the other words) the level of distraction was not related to the pain alleviation effects. Thus, based on our evidence, distraction may not be important in explaining how swearing produces hypoalgesic effects. The analysis assessing whether emotion ratings mediate the effect of swearing on extending pain tolerance also showed no effect, although here the 95% confidence interval only narrowly crossed the origin. While offering no evidential support for a mediation effect, further study assessing mediation of hypoalgesic effects of swearing via emotional arousal, in the absence of changes in heart rate, might fruitfully demonstrate this as a viable mechanism. Such an effect would be in keeping with previous research finding pain relieving effects of emotional arousal (Stephens and Allsop, 2012).
However, there is a caveat to this. At the study outset we theorized that swearing may increase emotional arousal without specifying the valence of that arousal. During peer review we were directed to literature linking emotion elicitation and pain modulation, and in particular, research by Lefebvre and Jensen (2019) who report that inducing a state of negative affect by asking participants to recall a time when they experienced a high degree of worry led to increased ratings of pain from pressure applied to the finger, relative to baseline. In addition, the same study found that inducing a state of positive affect by asking participants to recall a happy memory led to decreased ratings of pain. It is apparent that emotional modulation of pain can be explained by the two-factor behavioral inhibition system-behavioral activation system (BIS-BAS) model of pain (Jensen et al., 2016). According to the BIS-BAS model, negative affect contributes toward pain-related avoidance behaviors and associated negative cognitions, thereby increasing the subjective experience of pain. Conversely, positive affect contributes toward approach behaviors and positive cognitions, thus decreasing the subjective experience of pain. One limitation of the present study is that the measure of emotion elicitation was not valenced. This may explain why emotion was not shown to be a mediating variable in the link between swearing and hypoalgesia. Future research should assess both positive and negative emotion arousal due to swearing.
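The mediation analyses described above rest on 95% confidence intervals for indirect effects that "cross the origin" when there is no evidence of mediation. A percentile-bootstrap version of such a test can be sketched as follows; this is an illustrative sketch only (function names are hypothetical, and the authors' exact mediation procedure may differ).

```python
import random
import statistics as st

def ols2(y, x1, x2):
    """Slopes of y ~ x1 + x2, solved from the centered 2x2 normal equations."""
    m1, m2, my = st.mean(x1), st.mean(x2), st.mean(y)
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((b - m2) ** 2 for b in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

def indirect_effect(x, m, y):
    """Indirect effect a*b: a = slope of m ~ x; b = coef of m in y ~ m + x."""
    mx, mm = st.mean(x), st.mean(m)
    a = (sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
         / sum((xi - mx) ** 2 for xi in x))
    b, _ = ols2(y, m, x)
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, seed=42):
    """Percentile bootstrap 95% CI for the indirect effect; an interval
    that crosses zero gives no evidence of mediation."""
    rng, n, boots = random.Random(seed), len(x), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        boots.append(indirect_effect([x[i] for i in idx],
                                     [m[i] for i in idx],
                                     [y[i] for i in idx]))
    boots.sort()
    return boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)]
```

In this framing, the study's null mediation results correspond to bootstrap intervals for the emotion, humor, and distraction pathways that include zero, with the emotion pathway's interval only narrowly doing so.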
A further limitation might have been that participants did not consider themselves to be swearing when repeating the novel “swear” words. This remains unknown, as we did not carry out a manipulation check asking participants whether they considered using these words to be swearing. On the other hand, the novel “swear” words were selected by a panel of experts and laypeople briefed to choose words that could be used in similar ways to swear words, and which shared properties of swear words including emotional resonance and humor potential. It is also worth noting that “fouch” begins with a fricative, defined as a sound created by forcing air through a narrow channel (here the lower teeth and upper lip), which some have associated with swearing, although others contest such a link (Stack Exchange, 2014).
Additionally, maintaining the ice water temperature in the range 3–5°C might be considered too wide a variation, such that the physical intensity of the pain stimulus was not consistent across participants. In mitigation, there was no systematic variation of the temperature across the four word conditions. As shown in Table 1, the starting temperatures for each immersion were fairly consistent, with means ranging from 3.91 to 3.98°C (SDs 0.50 to 0.53). This indicates that approximately 65% of immersions had starting temperatures within the 1°C range of 3.5–4.5°C. Therefore, variation in temperature is unlikely to have biased the results.

A final limitation is that participants may have guessed the aims of the study, and consequently demand characteristics may have influenced the results. In advertising the study as “psychological effects of vocal expressions, including swearing, while immersing the hand in ice water,” we aimed to hide our predictions. Nevertheless, due to widespread media exposure for findings of previous studies conducted in the Keele Swear Lab, we cannot rule out, nor quantify the extent to which, participant behavior was influenced by their expectations.