Saturday, October 10, 2020

Use caution when applying behavioural science to policy

Use caution when applying behavioural science to policy. Hans IJzerman, Neil A. Lewis Jr., Andrew K. Przybylski, Netta Weinstein, Lisa DeBruine, Stuart J. Ritchie, Simine Vazire, Patrick S. Forscher, Richard D. Morey, James D. Ivory & Farid Anvari. Nature Human Behaviour, Oct 9 2020. https://www.nature.com/articles/s41562-020-00990-w

Abstract: Social and behavioural scientists have attempted to speak to the COVID-19 crisis. But is behavioural research on COVID-19 suitable for making policy decisions? We offer a taxonomy that lets our science advance in ‘evidence readiness levels’ to be suitable for policy. We caution practitioners to take extreme care translating our findings to applications.


Researchers in the social and behavioural sciences periodically debate whether their research should be used to address pressing issues in society. To provide a few examples, in the 1940s psychologists discussed using research to address problems related to intergroup relations, problems brought to the fore by the Holocaust and other acts of rampant prejudice. In the 1990s, psychologists debated whether their research should inform legal decision-making. In the 2010s, psychologists argued for advising branches of government as economists often do. And now, in 2020, psychologists and other social and behavioural scientists are arguing that our research should inform the response to the new coronavirus disease (henceforth COVID-19)1,2.


We are a team mostly consisting of empirical psychologists who conduct research on basic, applied and meta-scientific processes. We believe that scientists should apply their creativity, efforts and talents to serve our society, especially during crises. However, the way that social and behavioural science research is often conducted makes it difficult to know whether our efforts will do more good than harm. We will provide some examples from the field of social-personality psychology, where most of us were trained, to illustrate our concerns. This focus is not meant to imply that our field alone suffers from the issues we will discuss. Instead, a growing meta-science literature suggests that many other social and behavioural disciplines have encountered dynamics similar to those faced by our field.


What are those dynamics? First, study participants, mainly students, are drawn from Western (mostly US), educated, industrialized, rich and democratic (WEIRD) societies3. Second, even within this narrow slice of the population, the effects in published papers are not estimated with precision, sometimes barely ruling out trivially small effects under ostensibly controlled conditions. Third, many studies use a narrow range of stimuli and do not test for stimulus generalisability4. Fourth, many studies examine effects on measures, such as self-report scales, that are infrequently validated or linked to behaviour, much less to policy-relevant outcomes5. Fifth, independently replicated findings, even under ideal circumstances, are rare. Finally, our studies often fail to account for deeper cultural, historical, political and structural factors that play important moderating roles in the translation from basic findings to application. Together, these issues produce empirical insights that are more heterogeneous than a scan of the published literature might suggest.


Confident applications of social and behavioural science findings, then, require first and foremost an assessment of evidence quality, together with a weighing of the heterogeneity, trade-offs and opportunity costs that follow. We must identify reliable findings that can be applied, that have been investigated in the nations for which the application is intended and that are derived from investigations using diverse stimuli. An assessment of how ‘ready’ an intervention is must also be included when persuading decision-makers to apply social and behavioural science evidence, particularly in crisis situations when lives are at stake and resources are limited. Not doing so can have disastrous consequences.


Here we propose one approach for assessing the quality of evidence before application and dissemination. Specifically, we draw inspiration from the US National Aeronautics and Space Administration (NASA)’s ‘technology readiness levels’ (TRLs)6, a benchmarking system for systematically evaluating the quality of scientific evidence, which the European Commission has used to judge how ready scientific applications beyond space flight are for operational environments. TRLs rank a technology’s readiness for application from 1 to 9 (see Fig. 1). At TRL1, basic principles have been reliably observed, reported and translated to a formal model. At TRL2, basic principles have been developed and tested in an application area. It is not until TRL4, when a prototype is developed, that tests are run in environments as representative of the eventual application area(s) as possible. Later, at TRL6, the system is tested in a ‘real’ environment (like ground-to-space). At the very highest level (TRL9), the system has been ‘flight-proven’ through successful mission operations. These TRLs provide a useful framework to jumpstart conversations about how to assess the readiness of social and behavioural science evidence for application and dissemination.


Fig 1


Introducing evidence readiness levels

The desire to “directly inform policy and individual and collective behaviour in response to the pandemic” (p. 461)1 overlooks existing evidence frameworks and the challenges we identify, illustrating that a simple taxonomy is necessary to have at hand during crises. To this end, as a very preliminary step, we propose a social and behavioural science variant of TRLs: evidence readiness levels (ERLs; Fig. 2).


Fig 2


There are several frameworks for assessing evidence quality across different scientific fields. The one that comes closest to what we envision is the Society for Prevention Research’s standards for prevention interventions7, as they incorporate standards for efficacy and dissemination, as well as feedback loops from crisis to theory. However, none of the existing frameworks capture the meta-scientific insights generated in our field in the last decade.


Our ERLs do not map perfectly onto NASA’s TRLs, and we should not expect them to; there are many differences between behavioural and rocket science. In the social and behavioural sciences, we think this process should start with defining the problem(s) in collaboration with the stakeholders most likely to implement the interventions (ERL1). These concepts can then be further developed in consultation with people in the target settings to gather preliminary information about how settings or context might alter the underlying processes (ERL2). From there, researchers can conduct systematic reviews and other meta-syntheses to select evidence that could potentially be applied (ERL3). These systematic reviews require a number of bias-detection techniques: it is well known that the behavioural sciences suffer from publication bias and other practices that compromise the integrity of research evidence. Some findings may be reliable, but the onus is on us to identify which findings are reliable, which are not, and which generalize. Still, these systematic reviews must be done with an awareness that currently available statistical techniques do not completely correct for bias and that the resultant findings are at most at ERL3.


Following this, one can gather information about stimulus and measurement validity and equivalence for application in the target setting (ERL4). Next, researchers—in consultation with local experts—should consider the potential benefits and harms associated with applying potential solutions (ERL5) and generate estimates of effects in a pilot sample (ERL6). With preliminary effects in hand, the team can then begin to test for heterogeneity in low-stakes (ERL7) and higher-stakes (ERL8) samples and settings, which would build the confidence necessary to apply the findings in the real target setting or crisis situation (ERL9).


Even at ERL9, evidence evaluation continues; applications of social and behavioural work, particularly in a crisis, should be iterative, so high-quality evidence is fed back to evaluate the effectiveness of the intervention and to develop critical and flexible improvements. Feedback should be grounded in collaboration between basic and applied researchers, as well as with stakeholders, to ensure that the resulting evidence is relevant and actionable. Failure to continually re-evaluate interventions in light of new data could lead to unnecessary harm, where even the best evidence was inadequate to predict the intervention’s real-world effects.


A benchmarking system such as the ERL requires us to think carefully about which of our research can be applied credibly, and it guides where research investments should be made. For example, we can better recognise that our goal of gathering reliable insights (ERL3) provides a necessary foundation for further collective efforts that scaffold towards scalable and generalizable interventions (ERL7). Engaging community experts, identifying relevant theories and collecting extensive observations are key to framing challenges and working with interdisciplinary teams to address them (ERL1). Behavioural scientists from different cultures can then discuss how interventions may need to differ across contexts and cultures. The multidisciplinary and multi-stakeholder nature of ERLs requires us to fundamentally rethink how we produce, and communicate confidence in, application-ready findings.


The current crisis provides a chance for social and behavioural scientists to question how we understand and communicate the value of our scientific models in terms of ERLs. It also requires us to communicate those ERLs to policy-makers so that they know whether we are making educated guesses (ERL3 or below) or can be confident about the application of our findings because we have tested and replicated them in representative environments (ERL7). When providing policy advice on the basis of scientific evidence, it is important to understand and be able to explain whether and how recommendations would impact affected individuals under a range of circumstances that are highly relevant to the crisis in question (ERL7).


Even if findings are at ERL3 after assessing the evidence quality of primary studies, we have little way of knowing what positive, or unintended negative, consequences an intervention might have when applied to a new situation. We are concerned to see social and behavioural scientists making confident claims about the utility of scientific findings for solving COVID-19 problems without regard for whether those findings are based on the kind of scientific methods that would move them up the ERL ladder1. The absence of recognised benchmarking systems makes this challenging. While it is tempting to instead qualify uncertainty by using non-committal language about the possible utility of existing findings (for example, ‘may’, ‘could’), this approach is fundamentally flawed because public conversations generally ignore these rhetorical caveats8. Scientists should actively communicate uncertainty, particularly when speaking to crises. Communicating that the evidence is only at ERL3 or ERL4 would empower policy-makers by providing a clear understanding of how to weight our advice in terms of their options. Reaching a higher ERL is extremely complicated and will require radical changes in the way we conduct research, not only in response to crises.


How social and behavioural scientists can advance their ERLs

The field of genetics started in a position similar to the one many behavioural sciences find themselves in now, with small, independently collected samples that produced unreliable findings. Attempts to identify candidate genes for many constructs of interest kept stalling at TRL1/ERL4. In one prominent example, 52 patients provided genetic material for an analysis of the relationship between the 5-HTT gene and major depression9, a finding that spurred enormous interest in the biological mechanisms underlying depression. Unfortunately, as with the current situation in psychology, these early results were contradicted by failed replication studies10.


Technological advances in genotyping unlocked different approaches for geneticists. Instead of working in isolated teams, geneticists pooled resources via consortium studies and thereby accelerated scientific progress and quality. Their recent studies (with samples that sometimes exceed 1,000,000) dwarf previous candidate gene studies in terms of sample size11. To accomplish this, geneticists devoted considerable time to developing research workflows, data harmonization systems and processes that increased the accuracy of their measurements. The new methodologies are not without flaws: for example, there is substantial scope for expanding the representativeness of study cohorts. But the progress that consortium research in genetics has made in a short time is impressive.


In recent years we have observed similar progress in the psychological sciences: from single, small-sample studies to large-scale replications12,13 and novel studies14, to the building of the prerequisite infrastructure to facilitate team science. One example is the Psychological Science Accelerator (PSA), a large standing network with experts facilitating study selection, data management, ethics and translation15. While the PSA is making important progress, problems surrounding measurement validity, sample generalizability and organizational diversity (40% of its leadership is from North America), which affect the network’s ability to accurately interpret findings, still present material challenges to the applicability of its projects. Therefore, the PSA will require substantial improvement and investment before it can generate practical ERL7-level evidence and further develop our proposed framework.


The COVID-19 crisis underscores the critical need to bring the social and behavioural sciences in line with other mature sciences. Diverse consortia of researchers with expertise in philosophy, ethics, statistics and data and code management are needed to produce the kind of research required to better understand people the world over. Realising this mature, inclusive and efficient model necessitates a shift in the knowledge production and evaluation models that guide the social and behavioural sciences.


Be cautious when applying social and behavioural science to policy

On balance, we hold the view that the social and behavioural sciences have the potential to help us better understand our world. However, we are less sanguine about whether many areas of social and behavioural sciences are mature enough to provide such understanding, particularly when considering life-and-death issues like a pandemic. We believe that, rather than appealing to policy-makers to recognise our value, we should focus on earning the credibility that legitimates a seat at the policy table. The ERL taxonomy is a sample roadmap for achieving this level of maturity as a science and for accurately and honestly communicating our current state of evidence. Collaborations among large and diverse teams with local knowledge and multidisciplinary expertise can help us move up the evidence ladder. Equally important, studies in the behavioural sciences must be designed to move up this ladder incrementally. Designing an ERL6 study that is built on a shaky ERL1 foundation will be of little use. Moving up requires investment, thought and, most important of all, epistemic humility. Without a systematic and iterative research framework, we believe that behavioural scientists should carefully consider whether well-intentioned advice may do more harm than good.


Do People Agree on How Positive Emotions Are Expressed? A Survey of Four Emotions and Five Modalities Across 11 Cultures

Manokara, Kunalan, Mirna Đurić, Agneta Fischer, and Disa Sauter. 2020. “Do People Agree on How Positive Emotions Are Expressed? A Survey of Four Emotions and Five Modalities Across 11 Cultures.” OSF Preprints. October 5. doi:10.31219/osf.io/ep9d5

Abstract: While much is known about how negative emotions are expressed in different modalities, our understanding of the nonverbal expressions of positive emotions remains limited. In the present research, we draw upon disparate lines of theoretical and empirical work on positive emotions, and systematically examine which channels are thought to be used for expressing four positive emotions: feeling moved, gratitude, interest, and triumph. Employing the intersubjective approach, an established tool in cross-cultural psychology, we first examined how the four positive emotions were reported to be expressed in a U.S.A. community sample (Study 1: n = 1015). We next confirmed the cross-cultural generalizability of our findings by surveying respondents from ten countries that diverged on cultural values (Study 2: n = 1834). Feeling moved was thought to be signaled with facial expressions, gratitude with the use of words, interest with words, face and voice, and triumph with body posture, vocal cues, facial expressions, and words. These findings provide cross-culturally consistent findings of differential expressions across positive emotions. Notably, positive emotions were mostly thought to be expressed via modalities that go beyond the face. In addition, we hope that the intersubjective approach will constitute a useful tool for researchers studying nonverbal expressions.


An evolutionary lens can help to make sense of reliable sex & individual differences that impact appearance enhancement, as well as the context-dependent nature of putative adaptations that function to increase physical attractiveness

An Evolutionary Perspective on Appearance Enhancement Behavior. Adam C. Davis & Steven Arnocky. Archives of Sexual Behavior, Oct 6 2020. https://link.springer.com/article/10.1007/s10508-020-01745-4

Abstract: Researchers have highlighted numerous sociocultural factors that have been shown to underpin human appearance enhancement practices, including the influence of peers, family, the media, and sexual objectification. Fewer scholars have approached appearance enhancement from an evolutionary perspective or considered how sociocultural factors interact with evolved psychology to produce appearance enhancement behavior. Following others, we argue that evidence from the field of evolutionary psychology can complement existing sociocultural models by yielding unique insight into the historical and cross-cultural ubiquity of competition over aspects of physical appearance to embody what is desired by potential mates. An evolutionary lens can help to make sense of reliable sex and individual differences that impact appearance enhancement, as well as the context-dependent nature of putative adaptations that function to increase physical attractiveness. In the current review, appearance enhancement is described as a self-promotion strategy used to enhance reproductive success by rendering oneself more attractive than rivals to mates, thereby increasing one’s mate value. The varied ways in which humans enhance their appearance are described, as well as the divergent tactics used by women and men to augment their appearance, which correspond to the preferences of opposite-sex mates in a heterosexual context. Evolutionarily relevant individual differences and contextual factors that vary predictably with appearance enhancement behavior are also discussed. The complementarity of sociocultural and evolutionary perspectives is emphasized and recommended avenues for future interdisciplinary research are provided for scholars interested in studying appearance enhancement behavior.


Friday, October 9, 2020

Ecological harshness only influenced men’s perceptions of women’s breasts for reproductive success, rating women with larger breasts as more attractive, fertile, healthier, reproductively successful, and likely to befriend

Does Ecological Harshness Influence Men’s Perceptions of Women’s Breast Size, Ptosis, and Intermammary Distance? Ray Garza, Farid Pazhoohi & Jennifer Byrd-Craven. Evolutionary Psychological Science, Oct 2 2020. https://link.springer.com/article/10.1007/s40806-020-00262-w

Abstract: Breasts are sexually dimorphic physical characteristics, and they are enlarged post-puberty suggesting that they have been driven by sexual selection to signal fertility and residual reproductive value. Although different hypotheses have attempted to explain why men are attracted to women’s breasts, the role that ecology plays in men’s perceptions of women’s breasts has been limited. The current study used an ecologically harsh prime to investigate if ecological harshness influenced men’s perceptions of women’s breast size, ptosis (i.e., sagginess), and intermammary distance. Men were primed with an ecological harsh prime (i.e., economy uncertainty) and asked to rate women whose breast size, ptosis, and intermammary distance (i.e., cleavage) had been manipulated. Ecological harshness only influenced men’s perceptions of women’s breasts for reproductive success. Overall, men rated women with larger breasts as more attractive, fertile, healthier, reproductively successful, and likely to befriend. The study contributes to the overall literature on men’s perceptions of women’s breasts and suggests that ecological harshness may influence men's perceptions of women's reproductive success.

Check also Effects of Breast Size, Intermammary Cleft Distance (Cleavage) and Ptosis on Perceived Attractiveness, Health, Fertility and Age: Do Life History, Self-Perceived Mate Value and Sexism Attitude Play a Role? Farid Pazhoohi, Ray Garza & Alan Kingstone. Adaptive Human Behavior and Physiology, February 28 2020. https://www.bipartisanalliance.com/2020/02/the-perception-of-attractiveness.html


Does social psychology persist over half a century? A direct replication of Cialdini et al.’s (1975) classic door-in-the-face technique

Genschow, O., Westfal, M., Crusius, J., Bartosch, L., Feikes, K. I., Pallasch, N., & Wozniak, M. (2020). Does social psychology persist over half a century? A direct replication of Cialdini et al.’s (1975) classic door-in-the-face technique. Journal of Personality and Social Psychology, Oct 2020. https://doi.org/10.1037/pspa0000261

Abstract: Many failed replications in social psychology have cast doubt on the validity of the field. Most of these replication attempts have focused on findings published from the 1990s on, ignoring a large body of older literature. As some scholars suggest that social psychological findings and theories are limited to a particular time, place, and population, we sought to test whether a classical social psychological finding that was published nearly half a century ago can be successfully replicated in another country on another continent. To this end, we directly replicated Cialdini et al.’s (1975) door-in-the-face (DITF) technique according to which people’s likelihood to comply with a target request increases after having turned down a larger request. Thereby, we put the reciprocal concessions theory—the original process explanation of the DITF technique—to a critical test. Overall, compliance rates in our replication were similarly high as those Cialdini et al. (1975) found 45 years ago. That is, participants were more likely to comply with a target request after turning down an extreme request than participants who were exposed to the target request only or to a similarly small request before being exposed to the target request. These findings support the idea that reciprocity norms play a crucial role in DITF strategies. Moreover, the results suggest that at least some social psychological findings can transcend a particular time, place, and population. Further theoretical implications are discussed. 


Total deaths due to falls have increased steadily since 1990, nearly doubling by 2017; age-standardised mortality rates have slightly decreased over the same period

The global burden of falls: global, regional and national estimates of morbidity and mortality from the Global Burden of Disease Study 2017. Spencer L James et al. Injury Prevention, Volume 26, Issue Supp 1. Oct 1 2020. http://dx.doi.org/10.1136/injuryprev-2019-043286

Abstract

Background Falls can lead to severe health loss including death. Past research has shown that falls are an important cause of death and disability worldwide. The Global Burden of Disease Study 2017 (GBD 2017) provides a comprehensive assessment of morbidity and mortality from falls.

Methods Estimates for mortality, years of life lost (YLLs), incidence, prevalence, years lived with disability (YLDs) and disability-adjusted life years (DALYs) were produced for 195 countries and territories from 1990 to 2017 for all ages using the GBD 2017 framework. Distributions of the bodily injury (eg, hip fracture) were estimated using hospital records.

Results Globally, the age-standardised incidence of falls was 2238 (1990–2532) per 100 000 in 2017, representing a decline of 3.7% (7.4 to 0.3) from 1990 to 2017. Age-standardised prevalence was 5186 (4622–5849) per 100 000 in 2017, representing a decline of 6.5% (7.6 to 5.4) from 1990 to 2017. Age-standardised mortality rate was 9.2 (8.5–9.8) per 100 000 which equated to 695 771 (644 927–741 720) deaths in 2017. Globally, falls resulted in 16 688 088 (15 101 897–17 636 830) YLLs, 19 252 699 (13 725 429–26 140 433) YLDs and 35 940 787 (30 185 695–42 903 289) DALYs across all ages. The most common injury sustained by fall victims is fracture of patella, tibia or fibula, or ankle. Globally, age-specific YLD rates increased with age.

Conclusions This study shows that the burden of falls is substantial. Investing in further research, fall prevention strategies and access to care is critical.


Discussion

This study represents the first time that GBD estimates for falls have been reported in this level of detail through recent years, and illustrates the substantial amount of mortality and health loss in every country, age group and sex. Globally, total deaths and DALYs due to falls have increased steadily since 1990, with death counts nearly doubling by 2017. Conversely, age-standardised mortality rates and DALY rates have slightly decreased over the same period. At the country level, age-standardised mortality due to falls was highest in the Solomon Islands, India and Vietnam. The patterns of MIRs described in the results of our study emphasise how mortality risk per fall varies substantially by country and reveal that certain areas of the world likely have inadequate capabilities of responding to injurious falls. Since mortality from falls is associated with age and since global populations are generally ageing, it is important for all countries to ensure that their older adult populations as well as their ageing populations have adequate access to caretaking and treatment resources now and in the future.10 More focused research in the countries with the highest MIRs should investigate the specific causes of injury deaths from falls, the associated risk factors, and the circumstances and context of falls in order to target prevention efforts and appropriately allocate treatment resources. We additionally describe how falls have improved in terms of incidence and cause-specific mortality in the highest SDI countries, but that these improvements have not necessarily been experienced in lower SDI countries. This pattern emphasises how it is critical for lower SDI countries to more thoroughly investigate patterns of falls and to invest in prevention and treatment programmes.

Among clinicians, falls are known to be an important risk in certain populations, as they can be an origin of injury that leads to more complex care, such as the otherwise healthy older adult who slips, falls, sustains a femur fracture, is admitted to the hospital for surgical repair and then develops a condition like healthcare-acquired pneumonia. Such vignettes emphasise how a fall can precipitate significant health loss and potentially death.29 However, a young person who falls can also suffer disability for the rest of his or her life, leading to income loss, dependence on caretakers and a need for adequate accessibility options. Among the countries with the highest incidence in 2017 were Slovenia, Czech Republic and Slovakia—countries with high percentages of rural populations.30 In Slovenia, nearly half of the population lives in a rural area, and there is evidence that falls are less fatal and more frequent in rural older people.31 32 Age-standardised DALY rates were particularly high in specific regions, including Central Europe, Eastern Europe and Australasia. Many of these regions are experiencing intensive ageing of the population.33 Poland, for example, is projected to increase its population aged 65 and over by 4.9 million in the years 2015–2050, requiring significant public healthcare expenditure on therapeutic rehabilitation.34

Research suggests that falls can cause not only physical harm but also psychological and financial harm. A 3-year longitudinal study conducted by Tinetti and Williams explored the short- and long-term effects of a fall on the well-being of those 65 and older. Among the participants, injurious falls resulted in a variety of conditions, including hip fractures, other fractures and soft tissue injuries; ultimately these injurious falls led to a decline in daily functional status.35 Other research has shown that falling often triggers a fear of falling again, likely impairing one’s sense of mobility and autonomy.9 This fear is a proven risk factor for future falls; thus, one fall can initiate a cascade of negative health outcomes.9 Ultimately, the initial morbidity of a fall can manifest as significant health loss over time, amounting to considerable treatment and care costs.36 Future GBD research may provide estimates on the probability of long-term disability for individuals who sustain injurious falls.

In general, research on the prevention of falls has shown that improving personal health as well as addressing unsafe external factors can be effective in preventing falls. For example, exercise programmes have been shown to reduce falls among community-dwelling individuals aged 65 and older.8 37 A person’s surrounding environment has also been identified as a leading cause of falls,9 10 meaning it is possible to prevent falls through the improvement of living conditions and public spaces, especially if older adults and universal design principles attending to safety are kept in mind when spaces are designed, altered and maintained.38 While some external hazards for falls are well known (eg, slippery surfaces or poor lighting), others are less visible or obvious. For example, in the inpatient setting, a study by Vassallo et al found that the hospital wards with more inpatient beds within the sightline of the nursing station had fewer falls than the ward with poor visibility between beds and the nursing station.39 Location-specific research in falls prevention has also shown that exercise, home modification, educational materials and vision correction are all important.40 41 It is also important to consider how morbidity or mortality resulting from falls might be mitigated. Clinical literature has supported frequent medication review with avoidance of polypharmacy,42 and dietary supplementation with cholecalciferol (vitamin D3) for select patients as methods to both prevent fall incidents and to help minimise fracture risk, though more recent assessments and recommendations by the US Preventive Services Task Force have revealed mixed results in terms of the benefits of vitamin D supplementation.43–46

Our study has several limitations. The first limitation is a function of our case definition in non-fatal models, where we estimate the incidence of falls that require medical care. While not every fall leads to injury, it is possible that care-seeking behaviour for similar injuries could vary by location. Similarly, it is possible that in survey data or routine outpatient care visits, a patient may not report falls in the past year even if they led to minor injuries. Since our case definition includes only falls that lead to injury, our MIR estimates are likely lower than if we included all falls regardless of whether they led to injury requiring medical care. However, since the purpose of estimating those ratios is to illustrate patterns in severity and access to treatment, this limitation does not impact the key themes highlighted in our study. In addition, a general limitation in GBD analysis is that some areas of the world that may have a high burden of various diseases and injuries do not have reliable incidence and cause-of-death data, and therefore our estimation process relies more heavily on covariates and regional trends in those areas. Similarly, the nature-of-injury distributions and injury duration parameters rely more heavily on data from higher income locations and Dutch injury data, and would therefore benefit from additional data sources from lower income locations so that these parameters can be refined with greater location heterogeneity in future studies. Accordingly, an emphasis of GBD estimation going forward is to continue seeking additional data sources to be used in our modelling process.

Experiences of ending engagements and canceling weddings: Rituals of wedding planning (e.g., trying on a dress and selecting a venue) appear to serve as a catalyst for this process

Beyond cold feet: Experiences of ending engagements and canceling weddings. J. Kale Monk et al. Journal of Social and Personal Relationships, July 30, 2020. https://doi.org/10.1177/0265407520942590

Abstract: The engagement period is a critical window to understand stay–leave decisions because it marks a stage when individuals are moving toward lifelong commitment, but do not have the obligations of legal marriage that make dissolution more difficult. According to Inertia Theory, felt momentum can propel couples through relationship transitions without sufficient consideration of their dedication, which could constrain partners in poor quality relationships. Drawing from this perspective, we examined how individuals reduce relationship momentum and end a marital engagement. We conducted interviews with individuals who made the decision to end their engagements and cancel their weddings (N = 30). Experiences were analyzed using grounded theory techniques. The core concept we identified, visualizing, consisted of imagining a relational future (or alternative present) that became heightened during the engagement period. Rituals of wedding planning (e.g., trying on a dress and selecting a venue) appear to serve as a catalyst for this process. This cognitive shift prompted individuals to slow relational momentum (e.g., through trial separations and the returning of rings) and reconsider “red flags” and constraints to leaving the relationship. Once participants decided to leave, they described the process of breaking off the engagement and uncoupling from their partners. Family members and friends who assisted in managing the emotional fallout and logistics of ending the engagement (e.g., canceling with vendors and informing guests) were reported as particularly helpful supports. Visualizing married life beyond the wedding may be leveraged to help individuals navigate premarital doubts.


Keywords: Courtship, decision-making, dissolution, family rituals, qualitative research

Fear of missing out: Across age cohorts, low self-esteem and loneliness were each associated with high levels of FoMO, particularly for individuals who were also engaged in relatively greater social media activity

Fear of missing out (FoMO): A generational phenomenon or an individual difference. Christopher T. Barry, Megan Y. Wong. Journal of Social and Personal Relationships, August 7, 2020. https://doi.org/10.1177/0265407520945394

Abstract: Fear of missing out (FoMO) regarding activities within one’s social circle is a potential downside of the advent of social media and more rapid forms of communication. To examine potential generational or individual implications of FoMO, this study considered age cohort differences and self-perception correlates of FoMO. Participants were 419 individuals from throughout the U.S. who were members of 14- to 17-year-old, 24- to 27-year-old, 34- to 37-year-old, or 44- to 47-year-old cohorts. There were no cohort differences in overall FoMO, FoMO regarding close friends, or FoMO regarding family members. Across age cohorts, low self-esteem and loneliness were each associated with high levels of FoMO, particularly for individuals who were also engaged in relatively greater social media activity. Thus, the present findings indicate that FoMO concerning others’ activities may be particularly problematic for some individuals who are highly engaged with social media.

Keywords: Fear of missing out, self-compassion, social media engagement

Check also Fear of missing out: prevalence, dynamics, and consequences of experiencing FOMO. Marina Milyavskaya et al. Motivation and Emotion, Mar 2018. https://www.bipartisanalliance.com/2018/03/fear-of-missing-out-prevalence-dynamics.html



More worried about the prospects for boys than for girls, & for their own sons more than their own daughters; conservatives & men are most concerned about boys in general, but liberals are most worried about their own sons

Americans are more worried about their sons than their daughters. Richard V. Reeves and Ember Smith. Brookings, Wednesday, October 7, 2020. https://www.brookings.edu/blog/up-front/2020/10/07/americans-are-more-worried-about-their-sons-than-their-daughters/

Gender equality has been very much on the agenda in recent years. The challenges facing girls and women on many fronts are clear, including access to reproductive health care, protection from harassment in the workplace, labor force participation and rewards, and representation at the highest levels of politics and business.

But Americans are in general more worried about the prospects for boys than for girls, and for their own sons more than their own daughters, according to new data from the American Family Survey. Conservatives and men are most concerned about boys in general – but liberals are most worried about their own sons. These views may be influencing political trends, and in particular the growing partisanship gap between men and women.  

[...]

Americans are more worried about boys in general. Forty-one percent agree or strongly agree with the statement “I am worried about boys in the United States becoming successful adults,” compared to 33% saying the same for girls. But there is a big partisan divide here. Half (48%) of conservatives are worried about boys, and only 28% are worried about girls. Liberals, by contrast, are if anything slightly more worried about girls (44% compared to 41%). There is also a gender gap: 45% of men are worried about boys, while only 31% are worried about girls. Overall, women are also more worried about boys than about girls, but by a much smaller margin (38% compared to 35%).



14.6% of the general population reported insufficient left-right identification, and 42.9% of individuals use a hand-related strategy

Distinguishing left from right: A large scale investigation of left right confusion in healthy individuals. Ineke van der Ham, H. Chris Dijkerman, Haike van Stralen. Quarterly Journal of Experimental Psychology, October 8, 2020. https://doi.org/10.1177/1747021820968519

Rolf Degen's take: https://twitter.com/DegenRolf/status/1314442081755172864

Abstract: The ability to distinguish left from right has been shown to vary substantially within healthy individuals, yet its characteristics and mechanisms are poorly understood. In three experiments, we focused on a detailed description of the ability to distinguish left from right, the role of individual differences, and further explored the potential underlying mechanisms. In Experiment 1, a questionnaire concerning self-reported Left-Right Identification (LRI) and strategy use was administered. Objective assessment was used in Experiment 2 by means of vocal responses to line drawings of a figure, with the participants’ hands in a spatially neutral position. In Experiment 3, the arm positions and visibility of the hands were manipulated to assess whether bodily posture influences left right decisions. Results indicate that 14.6% of the general population reported insufficient LRI and that 42.9% of individuals use a hand-related strategy. Furthermore, we found that spatial alignment of the participants’ arms with the stimuli increased performance, in particular for participants using a hand-related strategy and for females. Performance was affected only by the layout of the stimuli, not by the position of the participant during the experiment. Taken together, confusion about left and right occurs within the healthy population to a limited extent, and a hand-related strategy affects LRI. Moreover, the process involved appears to make use of a stored body representation and not bottom-up sensory input. Therefore, we suggest a top-down body representation is the key mechanism in determining left and right, even when this is not explicitly part of the task.

Keywords: left-right identification, body representation, individual differences


People shy away from renting products that have a sentimental meaning for the owner, for fear of the increased responsibility

Why We Don't Rent What Others Love: The Role of Product Attachment in Consumer‐to‐Consumer Transactions. Antje R. H. Graul  Aaron R. Brough. Journal of Consumer Psychology, September 13 2020. https://doi.org/10.1002/jcpy.1193

Rolf Degen's take: 

Abstract: When listing a possession for rent on a consumer‐to‐consumer platform, owners typically write a brief product description. Such descriptions often include attachment cues—indications that the owner is emotionally attached to the product. How does knowing that an owner is sharing a possession that has sentimental value impact rental likelihood? Evidence from secondary data and four experiments suggests that although some owners mistakenly expect attachment cues to enhance a product's appeal, attachment cues instead tend to deter prospective renters. We attribute this effect to renters' desire to avoid the responsibility of protecting (e.g., from damage, loss, or theft) an item to which the owner is emotionally attached. Whereas prior research has examined how product attachment influences owners' decisions, we show how an owners' expression of attachment affects others involved in a transaction. By refuting the lay theories of some owners about how to attract renters, our findings provide practical implications for owners and the platforms that connect them to users in the multi‐billion‐dollar consumer‐to‐consumer rental market.


Thursday, October 8, 2020

The former passersby prepare themselves for a possible encounter with a police officer, in which case they could lie and claim that their mask unnoticeably slipped down from its proper position

Dishonesty and mandatory mask wearing in the COVID-19 pandemic. Yossef Tobol, Erez Siniver, Gideon Yaniv. Economics Letters, October 8 2020, 109617. https://doi.org/10.1016/j.econlet.2020.109617

Rolf Degen's take: https://twitter.com/DegenRolf/status/1314231853839192067

Abstract: In an attempt to slow down the spread of the coronavirus, an increasing number of countries, including Israel, have made wearing masks mandatory for their citizens not just in closed public places but also while walking in the streets. Failing to comply with this regulation entails a fine enforced by the police. Still, while many passersby do wear a mask that covers both their mouth and nose, others wear a mask improperly around their chin or neck or walk the streets wearing no mask at all. We speculate that the former passersby prepare themselves for a possible encounter with a police officer, in which case they could lie and claim that their mask unnoticeably slipped down from its proper position. The present paper reports the results of a field experiment designed to examine the hypothesis that, given the opportunity, passersby who wear their mask around their chin or neck are more likely to lie than those who wear no mask at all, although intuition may suggest otherwise. Incentivizing passersby’s dishonesty with the Die-Under-the-Cup (DUCT) task, the experiment’s results support our hypothesis.

Keywords: COVID-19 pandemic, Dishonesty, Lying, Die-Under-the-Cup task


People were relatively modest and self-critical about their funniness; extraversion & openness to experience predicted rating one’s responses as funnier; women rated their responses as less funny

Silvia, Paul, Gil Greengross, Katherine N. Cotter, Alexander P. Christensen, and Jeffrey M. Gredlein. 2020. “If You’re Funny and You Know It: Personality, Gender, and People’s Ratings of Their Attempts at Humor.” PsyArXiv. October 8. doi:10.31234/osf.io/3fgrj

Rolf Degen's take: https://twitter.com/DegenRolf/status/1314221698871439362

Abstract: In seven studies (n = 1,133), adults tried to create funny ideas and then rated the funniness of their responses, which were also independently rated by judges. People were relatively modest and self-critical about their ideas. Extraversion (r = .12 [.07, .18], k = 7) and openness to experience (r = .09 [.03, .15], k = 7) predicted rating one’s responses as funnier; women rated their responses as less funny (d = -.28 [-.37, -.19], k = 7). The within-person correlation between self and judge ratings was small but significant (r = .13 [.07, .19], k = 7), so people had some insight into their ideas’ funniness.



Labor markets characterized by anonymity, relatively homogeneous work, and flexibility: Gender pay gaps can arise despite the absence of overt discrimination, labor segregation, and inflexible work arrangements

Litman L, Robinson J, Rosen Z, Rosenzweig C, Waxman J, Bates LM (2020) The persistence of pay inequality: The gender pay gap in an anonymous online labor market. PLoS ONE 15(2): e0229383. https://doi.org/10.1371/journal.pone.0229383

Abstract: Studies of the gender pay gap are seldom able to simultaneously account for the range of alternative putative mechanisms underlying it. Using CloudResearch, an online microtask platform connecting employers to workers who perform research-related tasks, we examine whether gender pay discrepancies are still evident in a labor market characterized by anonymity, relatively homogeneous work, and flexibility. For 22,271 Mechanical Turk workers who participated in nearly 5 million tasks, we analyze hourly earnings by gender, controlling for key covariates which have been shown previously to lead to differential pay for men and women. On average, women’s hourly earnings were 10.5% lower than men’s. Several factors contributed to the gender pay gap, including the tendency for women to select tasks that have a lower advertised hourly pay. This study provides evidence that gender pay gaps can arise despite the absence of overt discrimination, labor segregation, and inflexible work arrangements, even after experience, education, and other human capital factors are controlled for. Findings highlight the need to examine other possible causes of the gender pay gap. Potential strategies for reducing the pay gap on online labor markets are also discussed.
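One mechanism named in the abstract, the tendency for women to select tasks with lower advertised pay, can be illustrated with a toy regression on synthetic data. This is a sketch only: the effect sizes, noise model, and variable names are invented for illustration and are not the CloudResearch data or the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
female = rng.integers(0, 2, n)                       # 1 = woman (synthetic)
# Illustrative assumption: women select tasks with slightly lower
# advertised pay on average; within a task, realized pay is gender-blind.
task_pay = 8.0 - 0.8 * female + rng.normal(0, 1, n)  # advertised hourly rate
earnings = task_pay + rng.normal(0, 1, n)            # realized hourly earnings

def gender_coef(y, covariates):
    """OLS coefficient on 'female' (first covariate), intercept included."""
    X = np.column_stack([np.ones(len(y))] + covariates)
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

raw_gap = gender_coef(earnings, [female])            # unadjusted gender gap
adj_gap = gender_coef(earnings, [female, task_pay])  # controlling task selection
print(round(raw_gap, 2), round(adj_gap, 2))
```

In this synthetic setup the raw gap is driven entirely by task selection, so it shrinks toward zero once advertised task pay is controlled for; in the actual study a gap persisted even after such controls, which is the paper's central point.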




Skew in reproductive success (RS) is common across many animal species; study compares Afrocolombians (serially monogamous) and Emberá (monogamous Amerindians in Colombia)

The multinomial index: a robust measure of reproductive skew. Cody T. Ross, Adrian V. Jaeggi, Monique Borgerhoff Mulder, Jennifer E. Smith, Eric Alden Smith, Sergey Gavrilets and Paul L. Hooper. Proceedings of the Royal Society B: Biological Sciences, October 7 2020. https://doi.org/10.1098/rspb.2020.2025

Abstract: Inequality or skew in reproductive success (RS) is common across many animal species and is of long-standing interest to the study of social evolution. However, the measurement of inequality in RS in natural populations has been challenging because existing quantitative measures are highly sensitive to variation in group/sample size, mean RS, and age-structure. This makes comparisons across multiple groups and/or species vulnerable to statistical artefacts and hinders empirical and theoretical progress. Here, we present a new measure of reproductive skew, the multinomial index, M, that is unaffected by many of the structural biases affecting existing indices. M is analytically related to Nonacs’ binomial index, B, and comparably accounts for heterogeneity in age across individuals; in addition, M allows for the possibility of diminishing or even highly nonlinear RS returns to age. Unlike B, however, M is not biased by differences in sample/group size. To demonstrate the value of our index for cross-population comparisons, we conduct a reanalysis of male reproductive skew in 31 primate species. We show that a previously reported negative effect of group size on mating skew was an artefact of structural biases in existing skew measures, which inevitably decline with group size; this bias disappears when using M. Applying phylogenetically controlled, mixed-effects models to the same dataset, we identify key similarities and differences in the inferred within- and between-species predictors of reproductive skew across metrics. Finally, we provide an R package, SkewCalc, to estimate M from empirical data.


2. Skew in a comparative context

Biological populations can differ greatly in the level of inequality characterizing the distribution of reproduction across same-sexed individuals [8]. In humans, reproductive inequality often varies substantially among cultural groups [9], especially as a function of marriage system and material wealth inequality. This topic has been of keen interest to evolutionary-minded economists and anthropologists [28,29,49,50], who argue that the coevolutionary rise of monogamy, reproductive levelling, and highly unequal agrarian-state social structures constitutes one of the most striking counter-examples to otherwise well-accepted fitness/utility-based models of reproductive decision-making, like the polygyny threshold model [51]. Resolution of this paradoxical empirical pattern may be explained by norms for reproductive levelling [52–55] that enhance food security, group functionality, and/or success in intergroup competition [56–58], norms for monogamous partnering [29,50,59–61], or the level of complementarity in returns to biparental investment in humans [61,62]. Tests of such predictions, however, require comparative datasets and unbiased skew measures.

Beyond humans, Johnstone [2] and Kutsukake & Nunn [8] argue that a large body of theory on reproductive skew predicts clear relationships between inequality in reproduction and various social, ecological, and genetic factors—including relatedness, ecological constraints on reproduction, and opportunities to suppress or control the reproductive activities of other individuals. Differences in reproductive skew are thus predicted to have wide-reaching consequences for the evolution of biological characteristics (e.g. ornamentation [63], and testes size [64]), as well as social and behavioural ones (e.g. stable group size [65], effective population size [48], male tenure length [1], sociality [66], and the patterning of violence [67] and aggression [68]). To effectively test such theory, however, cross-species or cross-genera comparisons are often needed, but they have also been relatively sparse (but see [1,8]).

In one of the widest-scale comparative studies of reproductive skew to date, Kutsukake & Nunn [8] investigate the cross-species patterning of reproductive skew in male primates as a function of a suite of covariates. The data here are strong: sex-specific reproductive behaviour has been well-studied across primate species, and primates possess the requisite variation in social systems, mating systems, and ecological setting needed to compare competing predictions [69]. However, even within a small clade like primates, estimating differences in reproductive skew across species introduces some unique challenges: differences in age-structure, group size, and mean reproductive rate can preclude statistical comparisons based on existing skew metrics. In §6, we show how biased skew metrics can confound inference in this comparative study and others like it. To remedy these issues, we introduce a new metric of reproductive skew—the multinomial index, M—that will facilitate wider-scale comparative research.
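The group-size bias described above can be demonstrated with a short simulation. This is not the multinomial index M itself; it is a sketch showing, under an assumed 'no true skew' null, how a naive skew metric (here, the top individual's share of offspring) mechanically declines as group size grows, which is the kind of artefact the authors report in existing measures.

```python
import random

def top_share(counts):
    """Naive skew metric: the share of all offspring sired by the
    single most successful individual."""
    total = sum(counts)
    return max(counts) / total if total else 0.0

def simulate(group_size, offspring=100, reps=2000, seed=1):
    """Mean naive skew under a 'no true skew' null: every individual
    has an identical chance of siring each offspring."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(reps):
        counts = [0] * group_size
        for _ in range(offspring):
            counts[rng.randrange(group_size)] += 1
        acc += top_share(counts)
    return acc / reps

# The metric declines with group size even though the underlying process
# is identical -- the structural bias an unbiased index must avoid.
for n in (5, 20, 50):
    print(n, round(simulate(n), 3))
```

Because the naive metric falls with group size under an identical generating process, a negative group-size effect on "skew" can appear where none exists, which is the confound the reanalysis in §6 addresses.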

Participants showed greater concern for pain in close others than for their own pain, though this hyperaltruism was steeply discounted with increasing social distance

Social discounting of pain. Giles W. Story  Zeb Kurth‐Nelson  Molly Crockett  Ivo Vlaev  Ara Darzi  Raymond J. Dolan. Journal of the Experimental Analysis of Behavior, October 7 2020. https://doi.org/10.1002/jeab.631

Abstract: Impatience can be formalized as a delay discount rate, describing how the subjective value of reward decreases as it is delayed. By analogy, selfishness can be formalized as a social discount rate, representing how the subjective value of rewarding another person decreases with increasing social distance. Delay and social discount rates for reward are correlated across individuals. However no previous work has examined whether this relationship also holds for aversive outcomes. Neither has previous work described a functional form for social discounting of pain in humans. This is a pertinent question, since preferences over aversive outcomes formally diverge from those for reward. We addressed this issue in an experiment in which healthy adult participants (N = 67) chose the timing and intensity of hypothetical pain for themselves and others. In keeping with previous studies, participants showed a strong preference for immediate over delayed pain. Participants showed greater concern for pain in close others than for their own pain, though this hyperaltruism was steeply discounted with increasing social distance. Impatience for pain and social discounting of pain were weakly correlated across individuals. Our results extend a link between impatience and selfishness to the aversive domain.


Discussion

Here we examined for the first time the relationship between the evaluation of one's own future pain and a sensitivity to pain in others, and whether altruistic responses to another's pain depend on the social distance of the other person. We report two novel findings. Firstly, people show greater concern for pain in close others than for their own pain, though this hyperaltruism is steeply discounted (diminishes) with increasing social distance. Secondly, we find a correlation between dread and social discounting, such that people who more strongly prefer immediate pain show steeper social discounting of pain, and thereby tend to be less altruistic overall. In keeping with previous findings, participants chose to speed up the delivery of pain both for themselves and for others, even if this entailed an increased intensity of the pain, consistent with an effect of dread (Badia et al., 1966; Berns et al., 2006; Cook & Barnes, 1964; Hare, 1966a; Loewenstein, 1987; Story et al., 2013).

Social Discounting of Pain versus Money

Social discounting is consistent with evolutionary notions of kin altruism, which proposes that altruism towards related others carries an evolutionary advantage (Curry et al., 2013; Madsen et al., 2007; Schaub, 1996). Our finding of social discounting for pain extends previous findings of hyperaltruism towards close others for money (Rachlin & Jones, 2008), whereby some people prefer to assign a hypothetical monetary reward ($75) to their closest friend or relative (Person #1) rather than to receive a larger sum themselves (e.g. $80). Rachlin and Jones (2008) note that hyperaltruistic behavior is irrational in the monetary context, since participants could take the $80 for themselves and give it to Person #1. The same authors speculated that, in addition to wishing to signal their closeness to Person #1, people may have chosen the hyper‐generous option due to an implicit cost of having to transfer money, or as a self‐control device to prevent them from keeping the money for themselves. That we find hyperaltruism for close others for painful outcomes, which are nontransferrable, supports a more intrinsic charitable motive, in keeping with kin altruism.

We show support for a model of social discounting in which the net degree of altruistic behavior depends on both the degree of discounting over social distance (Ksoc) and an additional ‘altruism factor’ (θ) that is independent of social distance. Those with a high altruism factor and low social discounting (high θ, low Ksoc) would be expected to show charitable or caring behavior even towards distant others, for instance victims of war or famine in other countries. By contrast, those with a high altruism factor but steep social discounting (high θ, high Ksoc) would be expected to be protective of close kin, but to engage in little altruistic behavior directed outside of their social circle. These categories appear to have high face validity. A future line of investigation might be to compare these parameters for pain with those for money. Existing studies directly comparing generosity for pain and money demonstrate more charitable behavior with painful outcomes (Davis et al., 2011; Story et al., 2015), however to our knowledge no studies have examined this across social distance to test whether the effects are attributable to higher θ or lower Ksoc.
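The two-parameter account above (a social discount rate Ksoc and an altruism factor θ) can be sketched numerically. The hyperbolic functional form is borrowed from the social-discounting literature for money, and the parameter values are invented for illustration; they are not estimates from this study.

```python
def altruism_weight(distance, theta, k_soc):
    """Weight placed on another person's outcome relative to one's own,
    as a function of social distance.
    theta: distance-independent altruism factor
    k_soc: social discount rate (steepness of decline with distance)
    Hyperbolic form and parameter values are illustrative assumptions."""
    return theta / (1.0 + k_soc * distance)

distances = (1, 10, 50, 100)
# High theta with steep discounting: protective of close kin only.
parochial = [altruism_weight(d, theta=1.5, k_soc=0.1) for d in distances]
# Same theta with shallow discounting: charitable even towards distant others.
universal = [altruism_weight(d, theta=1.5, k_soc=0.005) for d in distances]

print([round(w, 2) for w in parochial])
print([round(w, 2) for w in universal])
```

Under these assumed parameters, both profiles weight a close other's pain above their own (weight > 1 at distance 1, i.e. hyperaltruism), but the 'parochial' profile gives distant others almost no weight while the 'universalist' profile declines only gently with distance, matching the two behavioral types described above.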

Applied Social Discounting of Pain

Further research is also required to establish how social discounting of pain relates to real‐world behavior, either charitable or antisocial. Existing work has linked social discounting of money to a range of real‐world behavior. A recent study has demonstrated lower social discounting of reward in extraordinarily altruistic people who have donated a kidney to a stranger (Vekaria et al., 2017), while steeper social discounting has been demonstrated among boys with externalizing (antisocial) behavioral problems (Sharp et al., 2011). Further applied work in this vein might also examine aversive, as well as monetary, outcomes. The current study illustrates that such preferences can be readily elicited using hypothetical painful scenarios.

Other authors have examined the effect of state‐based changes on the social discount curve for reward. Some such models have also examined the effects on the numerator term in the social discount model, namely, θ. For example, Wu et al. (2019) showed that testosterone administration in males increased social discounting for distant others, but had no effect on generosity towards close others. Strikingly, Margittai et al. (2015) showed that experimentally induced psychosocial stress appeared to have the reverse effect. Stress increased the numerator term, but had no effect on the social discount factor, manifest as greater generosity towards close, but not distant, others; a follow-on study (Margittai et al., 2018) demonstrated that oral administration of hydrocortisone had the same effect. Further work is needed to investigate influences on the numerator term, in particular to disentangle effects of the instantaneous utility term from the effect of θ, since these enter multiplicatively into the numerator. Painful stimuli, which allow the form of instantaneous utility to be elicited directly using willingness to pay, offer a route to achieving this.

Positive Correlation between Dread and Social Discounting of Pain

Previous work suggests that the ability to wait for future rewards and the ability to understand the mental states of others are linked. For instance, temporal discounting for reward and altruistic behavior have been shown to be correlated across individuals (Curry et al., 2008; Rachlin & Jones, 2008), and both are impaired in Borderline Personality Disorder (Bateman & Fonagy, 2004). Along these lines a tendency to expedite pain so as to mitigate dread might be conceptualized as a future‐oriented behavior, akin to showing altruism towards one's future self. Indeed, both dread and altruism for pain have been shown to relate to the strength of physiological response to imagined pain: People who show greater anticipatory brain responses to pain are more likely to expedite pain rather than delay it (Berns et al., 2006), and people who show greater skin conductance responses to pain in others are more likely to choose to relieve another's pain (Hein et al., 2011). In keeping with this idea, people with higher trait psychopathy have been shown to be less likely to choose to expedite their own impending pain (Hare, 1966b) and show diminished physiological responses to the anticipation of pain in others (Caes et al., 2012). By this reasoning dread might be associated with lower social discounting of pain. Strikingly however, and contrary to our prediction, we found evidence that ‘higher dreaders’ showed steeper social discounting for pain.

Our data do not permit firm conclusions regarding the reasons for this correlation. However, a possible interpretation is that choice of sooner pain represents a more generic form of impatience than previously thought. We found that preference for sooner pain was best accounted for in terms of a waiting cost that scaled with delay, but not with pain intensity. This finding is difficult to reconcile with previous models of dread, which focus on the aversive anticipation of pain, a quantity that would be expected to scale with pain intensity. Imagine for instance that you are contemplating either a trivially painful routine dental check‐up or a considerably more painful dental procedure. Our results suggest that the overall disvalue of the very painful procedure would still be greater than that of the routine check‐up but that the effect of delay on the disvalue of each would be identical.

The superior fit of a nonscaled model suggests that choices to expedite pain might not solely result from a desire to minimize the anticipation of pain, so much as a desire to reduce a generic cost associated with waiting. Notably, reframing dread as impatience does not require any change to the form of our model, since the model does not specify the processes underlying a tendency to expedite pain. It is possible that a similar impatience term also contributes to discounting of reward (see for example Gonçalves & Silva, 2015). Such a reframing would make the observed correlation between impatience for pain and social discounting congruent with the correlation between delay discounting and social discounting seen for reward. There follows a strong prediction that impatience for pain and for reward ought to be correlated, indicating an important direction for further research.
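The dental example can be made concrete with a toy comparison of the two candidate models. The functional forms and parameters below are invented for illustration; the point is only the qualitative contrast described above: under a nonscaled waiting cost, delay adds the same disvalue regardless of pain intensity, whereas a scaled ('dread') model makes delay costlier for more painful events.

```python
def disvalue_nonscaled(pain, delay, c=0.5):
    """Waiting cost scales with delay but not with pain intensity
    (the pattern implied by the better-fitting model). c is illustrative."""
    return pain + c * delay

def disvalue_scaled(pain, delay, k=0.1):
    """Alternative 'dread' model: anticipation scales with intensity
    as well as delay. k is illustrative."""
    return pain * (1.0 + k * delay)

# Dental example: mild check-up (pain=1) vs painful procedure (pain=10),
# each delayed by 7 days versus delivered immediately.
for pain in (1.0, 10.0):
    delay_cost_ns = disvalue_nonscaled(pain, 7) - disvalue_nonscaled(pain, 0)
    delay_cost_s = disvalue_scaled(pain, 7) - disvalue_scaled(pain, 0)
    print(pain, delay_cost_ns, delay_cost_s)
```

The nonscaled model assigns the painful procedure a higher overall disvalue, yet the added cost of the 7-day delay is identical for both appointments; the scaled model instead makes the delay ten times costlier for the painful procedure.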

Interactions between Dread and Social Discounting of Pain

A further interesting direction for future work concerns how delay and social discounting of pain interact. A pertinent question, for example, is whether effects of dread and social discounting are multiplicative or whether dread is revealed differently when choosing for others. Here we found that a preference for sooner pain was equivalent whether participants chose regarding their own pain, or that of another person at social distance #50. Notably however, in social discounting choices the mean participant showed neither marked social discounting nor hyperaltruism for a person at social distance #50, therefore further exploration is required to establish whether dread interacts with social discounting effects across a range of social distances. We have examined this in an additional study, submitted to this journal, in which we also elicit choices across both domains, for example pain for oneself now, versus for another person in the future, and vice versa (Story et al., 2020).

Factors in the Valuation of Future Pain

A challenge for the model described here is disentangling the effects of discounting and dread within a given individual. We are grateful to a reviewer for the suggestion that measuring temporal preferences for past as well as future pain might offer a means to parse the two effects. Prior research has shown temporal discounting of past events to be lawful and hyperbolic in form (e.g., Yi et al., 2006). Since dread presumably cannot attach to events in the past, measuring discounting of past painful events could help to isolate the contribution of dread.
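The hyperbolic form referred to above is the standard one in the discounting literature, V = A / (1 + kD). A minimal sketch follows; the discount rate k here is an arbitrary illustrative value, not a parameter estimated from these data:

```python
def hyperbolic_discount(value, delay, k=0.05):
    # Standard hyperbolic discounting: subjective value of an event
    # (past or future) falls as 1 / (1 + k * delay).
    return value / (1 + k * delay)

# Discounting is steep at short delays and shallower at long ones
for delay in (0, 10, 100):
    print(delay, hyperbolic_discount(100.0, delay))
```

Because this curve applies to past as well as future events, comparing fitted curves for past and future pain would leave dread as the residual component specific to future pain.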

Finally, there are plausible reasons why choices to expedite pain might depend on factors other than dread of pain. First, in many real‐world situations people choose to endure pain or discomfort in order to obtain an associated reward, for example having an immunization to prevent the possibility of illness, exercising to improve overall wellbeing, or working to earn a wage. If the rewards accrue at approximately the same time as the pain and outweigh its disvalue, then discounting of the net benefit could motivate speeding up the pain–reward combination. Second, painful experiences often worsen over time, making it rational to face them sooner; for instance, a dental appointment might be brought forward to relieve worsening dental pain. Although our scenarios attempted to control for these factors, such prior assumptions may nevertheless have influenced people's experimental choices. Further experimental work is required to disentangle these possibilities.

We argue that neural representations of memories are best thought of as spatially transformed versions of perceptual representations

Transforming the Concept of Memory Reactivation. Serra E. Favila, Hongmi Lee, Brice A. Kuhl. Trends in Neurosciences, October 8 2020. https://doi.org/10.1016/j.tins.2020.09.006

Highlights

.  A foundational finding in the field of memory is that content-sensitive patterns of neural activity expressed during perceptual experiences are re-expressed when experiences are remembered, a phenomenon termed reactivation. However, reactivation obscures key differences in how perceptual events and memories are represented in the brain.

.  Recent findings suggest systematic, spatial transformations of content-sensitive neural activity patterns from perception to memory retrieval. These transformations occur within sensory cortex and from sensory cortex to frontoparietal cortex.

.  We consider why spatial transformations occur and identify critical questions to be addressed in future research. Understanding the ways in which memory representations differ from perceptual representations will critically inform theoretical accounts of memory and will help clarify how the brain recreates the past.


Abstract: Reactivation refers to the phenomenon wherein patterns of neural activity expressed during perceptual experience are re-expressed at a later time, a putative neural marker of memory. Reactivation of perceptual content has been observed across many cortical areas and correlates with objective and subjective expressions of memory in humans. However, because reactivation emphasizes similarities between perceptual and memory-based representations, it obscures differences in how perceptual events and memories are represented. Here, we highlight recent evidence of systematic differences in how (and where) perceptual events and memories are represented in the brain. We argue that neural representations of memories are best thought of as spatially transformed versions of perceptual representations. We consider why spatial transformations occur and identify critical questions for future research.

Keywords: episodic memory; reactivation; reinstatement; memory transformation; sensory cortex; frontoparietal cortex


Outstanding Questions

To what extent do changes in information content account for spatial transformation from perception to retrieval? Are certain stimulus features that are present during perception systematically lost or distorted in memory? Do memory representations gain information that is absent or weakly present during perception through integration with other memories or existing knowledge structures (schemas)?

What determines the relative degree of neural reactivation versus transformation observed across brain regions during memory retrieval? For example, is greater transformation observed when memory tasks promote conceptual processing at retrieval? Conversely, is relatively greater reactivation in sensory areas observed when memory tasks promote perceptual processing? Does the relative degree of reactivation versus transformation depend on whether memory tasks involve recall versus recognition judgments? Do reactivation and transformation trade off, or are they independent?

Does the degree of transformation across brain regions depend on the temporal lag between perception and memory retrieval? Transformation potentially occurs in working memory paradigms with delays on the order of seconds, yet there is also considerable work documenting consolidation-related transformations at timescales of hours to years. What are the similarities and differences between transformations that occur across these vastly different timescales?

What is the relationship between transformation within sensory areas and transformation from sensory to frontoparietal regions? These two forms of transformation have been studied separately to date and it is thus unclear whether they are related and, if so, how. Notably, the frontoparietal and sensory regions that exhibit biases toward memory-based representations are functionally connected with the hippocampus. To what extent can connectivity with the hippocampus explain both sets of findings?