Tuesday, November 5, 2019

Moderate drinking: Enhanced cognition and lower dementia risk, and substantive reductions in risk for cardiovascular and diabetes events are reported (but robust conclusions remain elusive)

Clarifying the neurobehavioral sequelae of moderate drinking lifestyles and acute alcohol effects with aging. Sara Jo Nixon, Ben Lewis. International Review of Neurobiology, November 5 2019. https://doi.org/10.1016/bs.irn.2019.10.016

Abstract: Epidemiological estimates indicate not only an increase in the proportion of older adults, but also an increase in those who continue moderate alcohol consumption. Substantial literatures have attempted to characterize health benefits/risks of moderate drinking lifestyles. Not uncommonly, reports address outcomes in a single domain, such as cardiovascular function or cognitive decline, rather than providing a broader overview of systems. In this narrative review, retaining focus on neurobiological considerations, we summarize key findings regarding moderate drinking and three health domains: cardiovascular health, Type 2 diabetes (T2D), and cognition. Interestingly, few investigators have studied bouts of low/moderate doses of alcohol consumption, a pattern consistent with moderate drinking lifestyles. Here, we address both moderate drinking as a lifestyle and as an acute event.
Review of health-related correlates illustrates continuing inconsistencies. Although substantive reductions in risk for cardiovascular and T2D events are reported, robust conclusions remain elusive. Similarly, whereas moderate drinking is often associated with enhanced cognition and lower dementia risk, few benefits are noted in rates of decline or alterations in brain structure. The effect of sex/gender varies across health domains and by consumption levels. For example, women appear to differentially benefit from alcohol use in terms of T2D, but experience greater risk when considering aspects of cardiovascular function. Finally, we observe that socially relevant alcohol doses do not consistently impair performance in older adults. Rather, older drinkers demonstrate divergent, but not necessarily detrimental, patterns in neural activation and some behavioral measures relative to younger drinkers. Taken together, the epidemiological and laboratory studies reinforce the need for greater attention to key individual differences and for the conduct of systematic studies sensitive to age-related shifts in neurobiological systems.

Keywords: Alcohol, Moderate drinking, Health, Cognition, Behavior, Aging, Older adults, Acute administration, Neurophysiology

In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly

The Eighty Five Percent Rule for optimal learning. Robert C. Wilson, Amitai Shenhav, Mark Straccia & Jonathan D. Cohen. Nature Communications, volume 10, Article number: 4646 (2019). November 5 2019. https://www.nature.com/articles/s41467-019-12552-4

Abstract: Researchers and educators have long wrestled with the question of how best to teach their clients, be they humans, non-human animals or machines. Here, we examine the role of a single variable, the difficulty of training, on the rate of learning. In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly. We derive conditions for this sweet spot for a broad class of learning algorithms in the context of binary classification tasks. For all of these stochastic gradient-descent based learning algorithms, we find that the optimal error rate for training is around 15.87% or, conversely, that the optimal training accuracy is about 85%. We demonstrate the efficacy of this ‘Eighty Five Percent Rule’ for artificial neural networks used in AI and biologically plausible neural networks thought to describe animal learning.


Discussion

In this article we considered the effect of training accuracy on learning in the case of binary classification tasks and stochastic gradient-descent-based learning rules. We found that the rate of learning is maximized when the difficulty of training is adjusted to keep the training accuracy at around 85%. We showed that training at the optimal accuracy proceeds exponentially faster than training at a fixed difficulty. Finally we demonstrated the efficacy of the Eighty Five Percent Rule in the case of artificial and biologically plausible neural networks.
Our results have implications for a number of fields. Perhaps most directly, our findings move towards a theory for identifying the optimal environmental settings in order to maximize the rate of gradient-based learning. Thus the Eighty Five Percent Rule should hold for a wide range of machine learning algorithms including multilayered feedforward and recurrent neural networks (e.g. including ‘deep learning’ networks using backpropagation9, reservoir computing networks21,22, as well as Perceptrons). Of course, in these more complex situations, our assumptions may not always be met. For example, as shown in the Methods, relaxing the assumption that the noise is Gaussian leads to changes in the optimal training accuracy: from 85% for Gaussian, to 82% for Laplacian noise, to 75% for Cauchy noise (Eq. (31) in the “Methods”).
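The 15.87% figure for Gaussian noise can be checked numerically. In the paper's setup the training error rate is ER = Φ(−βΔ), where β is the learner's precision (skill) and Δ the difficulty of the example, so the per-trial improvement scales as −dER/dβ = Δφ(βΔ). The sketch below (an illustrative reconstruction, not the authors' code; the proportionality used is an assumption drawn from that setup) maximizes this quantity over Δ on a grid and recovers the ~85% optimal accuracy:

```python
from math import erf, exp, pi, sqrt

def phi(z):
    """Standard normal pdf."""
    return exp(-z * z / 2) / sqrt(2 * pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1 + erf(z / sqrt(2)))

beta = 1.0  # learner's current precision (any positive value gives the same optimum)

# Learning speed per trial is proportional to delta * phi(beta * delta)
# (the negative derivative of the error rate ER = Phi(-beta*delta) w.r.t. beta).
deltas = [i / 1000 for i in range(1, 5000)]
best_delta = max(deltas, key=lambda d: d * phi(beta * d))

optimal_accuracy = Phi(beta * best_delta)  # ≈ 0.8413, i.e. roughly 85%
optimal_error = 1 - optimal_accuracy       # ≈ 0.1587, the 15.87% of the abstract
print(best_delta, round(optimal_accuracy, 4), round(optimal_error, 4))
```

The maximum lands at βΔ = 1, so the optimal error rate is Φ(−1) ≈ 15.87% regardless of the learner's current skill, which is why a fixed target accuracy (rather than a fixed difficulty) keeps training at the sweet spot as skill grows.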
More generally, extensions to this work should consider how batch-based training changes the optimal accuracy, and how the Eighty Five Percent Rule changes when there are more than two categories. In batch learning, the optimal difficulty to select for the examples in each batch will likely depend on the rate of learning relative to the size of the batch. If learning is slow, then selecting examples in a batch that satisfy the 85% rule may work, but if learning is fast, then mixing in more difficult examples may be best. For multiple categories, it is likely possible to perform similar analyses, although the mapping between decision variable and categories will be more complex as will be the error rates which could be category specific (e.g., misclassifying category 1 as category 2 instead of category 3).
In Psychology and Cognitive Science, the Eighty Five Percent Rule accords with the informal intuition of many experimentalists that participant engagement is often maximized when tasks are neither too easy nor too hard. Indeed it is notable that staircasing procedures (that aim to titrate task difficulty so that error rate is fixed during learning) are commonly designed to produce about 80–85% accuracy17. Similarly, when given a free choice about the difficulty of task they can perform, participants will spontaneously choose tasks of intermediate difficulty levels as they learn23. Despite the prevalence of this intuition, to the best of our knowledge no formal theoretical work has addressed the effect of training accuracy on learning, a test of which is an important direction for future work.
More generally, our work closely relates to the Region of Proximal Learning and Desirable Difficulty frameworks in education24,25,26 and Curriculum Learning and Self-Paced Learning7,8 in computer science. These related, but distinct, frameworks propose that people and machines should learn best when training tasks involve just the right amount of difficulty. In the Desirable Difficulties framework, the difficulty in the task must be of a ‘desirable’ kind, such as spacing practice over time, that promotes learning as opposed to an undesirable kind that does not. In the Region of Proximal Learning framework, which builds on early work by Piaget27 and Vygotsky28, this optimal difficulty is in a region of difficulty just beyond the person’s current ability. Curriculum and Self-Paced Learning in computer science build on similar intuitions, that machines should learn best when training examples are presented in order from easy to hard. In practice, the optimal difficulty in all of these domains is determined empirically and is often dependent on many factors29. In this context, our work offers a way of deriving the desired difficulty and the region of proximal learning in the special case of binary classification tasks for which stochastic gradient-descent learning rules apply. As such our work represents the first step towards a more mathematical instantiation of these theories, although it remains to be generalized to a broader class of circumstances, such as multi-choice tasks and different learning algorithms.

[...] our work points to a mathematical theory of the state of ‘Flow’34. This state, ‘in which an individual is completely immersed in an activity without reflective self-consciousness but with a deep sense of control’ [ref. 35, p. 1], is thought to occur most often when the demands of the task are well matched to the skills of the participant. This idea of balance between skill and challenge was captured originally with a simple conceptual diagram (Fig. 5) with two other states: ‘anxiety’ when challenge exceeds skill and ‘boredom’ when skill exceeds challenge. These three qualitatively different regions (flow, anxiety, and boredom) arise naturally in our model. Identifying the precision, β, with the level of skill and the level of challenge with the inverse of the true decision variable, 1/Δ, we see that flow is associated with a high learning rate and accuracy when challenge equals skill, anxiety with low learning rate and accuracy, and boredom with high accuracy but low learning rate (Fig. 5b, c). Intriguingly, recent work by Vuorre and Metcalfe has found that subjective feelings of Flow peak on tasks that are subjectively rated as being of intermediate difficulty36. In addition, work on learning to control brain computer interfaces finds that subjective, self-reported measures of ‘optimal difficulty’ peak at a difficulty associated with maximal learning, and not at a difficulty associated with optimal decoding of neural activity37. Going forward, it will be interesting to test whether these subjective measures of engagement peak at the point of maximal learning gradient, which for binary classification tasks is 85%.

What We Know And Don't Know About Stressful Life Events and Disease Risk

Ten Surprising Facts About Stressful Life Events and Disease Risk. Sheldon Cohen, Michael L.M. Murphy, and Aric A. Prather. Annual Review of Psychology, Vol. 70:577-597. https://doi.org/10.1146/annurev-psych-010418-102857

Abstract: After over 70 years of research on the association between stressful life events and health, it is generally accepted that we have a good understanding of the role of stressors in disease risk. In this review, we highlight that knowledge but also emphasize misunderstandings and weaknesses in this literature with the hope of triggering further theoretical and empirical development. We organize this review in a somewhat provocative manner, with each section focusing on an important issue in the literature where we feel that there has been some misunderstanding of the evidence and its implications. Issues that we address include the definition of a stressful event, characteristics of diseases that are impacted by events, differences in the effects of chronic and acute events, the cumulative effects of events, differences in events across the life course, differences in events for men and women, resilience to events, and methodological challenges in the literature.

Keywords: stressors, life events, health, disease

---
REFLECTIONS AND CONCLUSIONS

What We Know About Stressful Life Events and Disease Risk

What we can be sure of is that stressful life events predict increases in severity and progression of multiple diseases, including depression, cardiovascular diseases, HIV/AIDS, asthma, and autoimmune diseases. Although there is also evidence for stressful events predicting disease onset, challenges in obtaining sensitive assessments of premorbid states at baseline (for example, in cancer and heart disease) make interpretation of much of these data as evidence for onset less compelling. In general, stressful life events are thought to influence disease risk through their effects on affect, behavior, and physiology. These effects include affective dysregulation such as increases in anxiety, fear, and depression. Additionally, behavioral changes occurring as adaptations or coping responses to stressors, such as increased smoking, decreased exercise and sleep, poorer diets, and poorer adherence to medical regimens, provide important pathways through which stressors can influence disease risk. Two endocrine response systems, the hypothalamic-pituitary-adrenocortical (HPA) axis and the sympathetic-adrenal-medullary (SAM) system, are particularly reactive to psychological stress and are also thought to play a major role in linking stressor exposure to disease. Prolonged or repeated activation of the HPA axis and SAM system can interfere with their control of other physiological systems (e.g., cardiovascular, metabolic, immune), resulting in increased risk for physical and psychiatric disorders (Cohen et al. 1995b, McEwen 1998).

Chronic stressor exposure is considered to be the most toxic form of stressor exposure because chronic events are the most likely to result in long-term or permanent changes in the emotional, physiological, and behavioral responses that influence susceptibility to and course of disease. These exposures include those to stressful events that persist over an extended duration (e.g., caring for a spouse with dementia) and to brief focal events that continue to be experienced as overwhelming long after they have ended (e.g., experiencing a sexual assault). Even so, acute stressors seem to play a special role in triggering disease events among those with underlying pathology (whether premorbid or morbid), such as asthma and heart attacks.

One of the most provocative aspects of the evidence linking stressful events to disease is the broad range of diseases that are presumed to be affected. As discussed above, the range of effects may be attributable to the fact that many behavioral and physiological responses to stressors are risk factors for a wide range of diseases. The more of these stressor responses that are associated with risk for a specific disease, the greater the chance that stressful events will increase the risk for the onset and progression of that disease. For example, risk factors for CVD include many of the behavioral effects of stressors (poor diet, smoking, inadequate physical activity). In addition, stressor effects on CVD (Kaplan et al. 1987, Skantze et al. 1998) and HIV (Capitanio et al. 1998, Cole et al. 2003) are mediated by physiological effects of stressors (e.g., sympathetic activation, glucocorticoid regulation, and inflammation).

It is unlikely that all diseases are modulated by stressful life event exposure. Rare conditions, such as those that are genetic and of high penetrance, leave little room for stressful life events to play a role in disease onset. For example, Tay-Sachs disease is an autosomal recessive disorder expressed in infancy that results in destruction of neurons in both the spinal cord and brain. This disease is fully penetrant, meaning that, if an individual carries two copies of the mutation in the HEXA gene, then they will be affected. Other inherited disorders, such as Huntington’s disease, show high penetrance but are not fully penetrant, leaving room for environmental exposures, behavioral processes, and interactions among these factors to influence disease onset. Note that, upon disease onset, it is unlikely that any disease is immune to the impact of stressor exposure if pathways elicited by the stressor are implicated in the pathogenesis or symptom course of the disease.


What We Do Not Know About Stressful Life Events and Disease Risk

There are still a number of key issues in understanding how stressful events might alter disease pathogenesis where the data are still insufficient to provide clear answers. These include the lack of a clear conceptual definition of what constitutes a stressful event. Alternative approaches (adaptation, threat, goal interruption, demand versus control) overlap in their predictions, providing little leverage for empirically establishing the unique nature of major stressful events. The lack of understanding of the primary nature of stressful events also obscures the reasons for certain events (e.g., interpersonal, economic) being more potent.

Two other important questions for which we lack consistent evidence are whether the stress load accumulates with each additional stressor and whether previous or ongoing chronic stressors moderate responses to current ones. The nature of the cumulative effects of stressors is key to obtaining sensitive assessments of the effects of stressful events on disease and for planning environmental (stressor-reduction) interventions to reduce the impact of events on our health. Evidence that single events may be sufficient to trigger risk for disease has raised two important questions. First, are some types of events more potent than others? We address this question above (in the section titled Fact 6: Certain Types of Stressful Events Are Particularly Potent) using the existing evidence, but it is important to emphasize the relative lack of studies comparing the impact of different stressors on the same outcomes (for some exceptions, see Cohen et al. 1998, Kendler et al. 2003, Murphy et al. 2015). Second, are specific types of events linked to specific diseases? This question derives from scattered evidence of stressors that are potent predictors of specific diseases [e.g., social loss for depression (Kendler et al. 2003), work stress for CHD (Kivimäki et al. 2006)] and of specific stress biomarkers [e.g., threats to social status leading to cortisol responses (Denson et al. 2009, Dickerson & Kemeny 2004)]. While it is provocative, there are no direct tests of the stressor-disease specificity hypothesis. A proper examination of this theory would require studies that not only conduct broad assessments of different types of stressful life events, but also measure multiple unique diseases to draw comparisons.
Such studies may not be feasible due to the high costs of properly assessing multiple disease outcomes and the need for large numbers of participants to obtain sufficient numbers of persons developing (incidence) or initially having each disease so as to measure progression. Comparisons of limited numbers of diseases proposed to have different predictors (e.g., cancer and heart disease) are more efficient and may be a good initial approach to this issue.

Another area of weakness is the lack of understanding of the types of stressful events that are most salient at different points in development. For example, although traumatic events are the type of events studied most often in children, the relative lack of focus on more normative events leaves us with an incomplete understanding of how different events influence the current and later health of young people. Overall, the relative lack of comparisons of the impact of the same events (or equivalents) across the life course further muddies our understanding of event salience as we age.

It is noteworthy that the newest generation of instruments designed to assess major stressful life events has the potential to provide some of the fine-grained information required to address many of the issues raised in this review (for a review, see Anderson et al. 2010; see also Epel et al. 2018). For example, the Life Events Assessment Profile (LEAP) (Anderson et al. 2010) is a computer-assisted, interviewer-administered measure designed to mimic the LEDS. Like the LEDS, the LEAP assesses events occurring within the past 6–12 months, uses probing questions to better define events, assesses exposure duration, and assigns objective levels of contextual threat based on LEDS dictionaries. Another instrument, the Stress and Adversity Inventory (STRAIN) (Slavich & Shields 2018), is a participant-completed computer assessment of lifetime cumulative exposure to stressors. The STRAIN assesses a range of event domains and timing of events (e.g., early life, distant, recent) and uses probing follow-up questions. Both the LEAP and the STRAIN are less expensive and time consuming than the LEDS and other interview techniques and are thus more amenable to use in large-scale studies.

The fundamental question of whether stressful events cause disease can only be rigorously evaluated by experimental studies. Ethical considerations prohibit conducting experimental studies in humans of the effects of enduring stressful events on the pathogenesis of serious disease. A major limitation of the correlational studies is insufficient evidence of (and control for) selection in who gets exposed to events, resulting in the possibility that selection factors such as environments, personalities, or genetics are the real causal agents. The concern is that the social and psychological characteristics that shape what types of stressful events people are exposed to may be directly responsible for modulating disease risk. Because it is not possible to randomly assign people to stressful life events, being able to infer that exposure to stressful events causally modulates disease will require the inclusion of covariates representing obvious individual and environmental confounders, as well as controls for stressor dependency—the extent to which individuals are responsible for generating the stressful events that they report.

Even with these methodological limitations, there is evidence from natural experiments that capitalize on real-life stressors occurring outside of a person’s control, such as natural disasters, economic downsizing, or bereavement (Cohen et al. 2007). There have also been attempts to reduce progression and recurrence of disease using experimental studies of psychosocial interventions. However, clinical trials in this area tend to be small, methodologically weak, and not specifically focused on determining whether stress reduction accounts for intervention-induced reduction in disease risk. Moreover, trials that do assess stress reduction as a mediator generally focus on the reduction of nonspecific perceptions of stress and negative affect instead of on the elimination or reduction of the stressful event itself. In contrast, evidence from prospective cohort studies and natural experiments is informative. These studies typically control for a set of accepted potentially confounding demographic and environmental factors such as age, sex, race or ethnicity, and SES. It is also informative that the results of these studies are consistent with those of laboratory experiments showing that stress modifies disease-relevant biological processes in humans and with those of animal studies that investigate stressors as causative factors in disease onset and progression (Cohen et al. 2007).

Despite many years of investigation, our understanding of resilience to stressful life events is incomplete and even seemingly contradictory (e.g., Brody et al. 2013). Resilience generally refers to the ability of an individual to maintain healthy psychological and physical functioning in the face of exposure to adverse experiences (Bonanno 2004). This definition suggests that when a healthy individual is exposed to a stressful event but does not get sick and continues to be able to function relatively normally, this person has shown resilience. What is less clear is whether there are certain types of stressful events for which people tend to show greater resilience than for others. It seems likely that factors that increase stressor severity, such as imminence of harm, uncontrollability, and unpredictability, also decrease an event’s potential to be met with resilience. Additionally, it may be possible that stressful events that are more commonly experienced are easier to adapt to due to shared cultural experiences that provide individuals with expectations for how to manage events. Conversely, less common events (e.g., combat exposure) or experiences that carry significant sociocultural stigma (e.g., rape) might be less likely to elicit resilience. As efforts to test interventions to promote resilience continue to be carried out, careful characterizations of stress exposures, including the complexities discussed in this review, will be critical to understanding the heterogeneity in physical and mental health outcomes associated with stressful life events.


Check also, from 2018: Salutogenic effects of adversity and the role of adversity for successful aging. Jan Höltge. Fall 2018, University of Zurich, Faculty of Arts, PhD Thesis. https://www.bipartisanalliance.com/2019/10/from-2018salutogenic-effects-of.html

And: Research has predominantly focused on the negative effects of adversity on health and well-being; but under certain circumstances, adversity may have the potential for positive outcomes, such as increased resilience and thriving (the steeling effect):
A Salutogenic Perspective on Adverse Experiences. The Curvilinear Relationship of Adversity and Well-Being. Jan Höltge et al. European Journal of Health Psychology (2018), 25, pp. 53-69. https://www.bipartisanalliance.com/2018/08/research-has-predominantly-focused-on.html