Monday, March 4, 2019

If emerging technologies are so impressive, why are interest rates so low, wage growth so slow, investment rates so flat, & total factor productivity growth so lukewarm? Lack of genius.

Digital Abundance and Scarce Genius: Implications for Wages, Interest Rates, and Growth. Seth G. Benzell, Erik Brynjolfsson. NBER Working Paper No. 25585, February 2019, https://www.nber.org/papers/w25585

Digital versions of labor and capital can be reproduced much more cheaply than their traditional forms. This increases the supply and reduces the marginal cost of both labor and capital. What then, if anything, is becoming scarcer? We posit a third factor, ‘genius’, that cannot be duplicated by digital technologies. Our approach resolves several macroeconomic puzzles. Over the last several decades, both real median wages and the real interest rate have been stagnant or falling in the United States and the World. Furthermore, shares of income paid to labor and capital (properly measured) have also decreased. And despite dramatic advances in digital technologies, the growth rate of measured output has not increased. No competitive neoclassical two-factor model can reconcile these trends. We show that when increasingly digitized capital and labor are sufficiently complementary to inelastically supplied genius, innovation augmenting either of the first two factors can decrease wages and interest rates in the short and long run. Growth is increasingly constrained by the scarce input, not labor or capital. We discuss microfoundations for genius, with a focus on the increasing importance of superstar labor. We also consider consequences for government policy and scale sustainability.

---
Why then, if emerging technologies are so impressive, are interest rates so low, wage growth so slow and investment rates so flat? And why is total factor productivity growth so lukewarm? To resolve this paradox, we propose a model of aggregate production with three inputs. The third factor corresponds to a bottleneck which prevents firms from making full use of digital abundance. Bottlenecks are ubiquitous in economics. This paper is typed on a computer that is over 1000 times faster than those of the past, but our typing is still limited by our interface with the keyboard.
An assembly line that doubles the output, speed, or precision of 1, 2, or 99 of its 100 processes will still be limited by its weakest link. In other words, no matter how much we increase the other inputs, if an inelastically supplied complement remains scarce, it will be the gating factor for growth.

Our model can explain why ordinary labor and ordinary capital haven’t captured the gains from digitization, while a few superstars have earned immense fortunes. Their contributions, whether due to genius or luck, are both indispensable and impossible to digitize. This puts them in a position to capture the gains from digitization.

In our digital economy, technology advances rapidly, but humans and their institutions change slowly. Institutional, managerial, technological, and political constraints become bottlenecks (Brynjolfsson et al., 2017). Before a firm can make use of AI decision making, its leaders need to make costly and time-consuming investments in quantifying its business processes; before it can scale rapidly using web services it needs to figure out how to codify its systems in software. Therefore, digital advances benefit neither unexceptional labor nor standard capital, at least insofar as they can be replicated digitally (Brynjolfsson et al., 2014). The invisible hand instead favors those who are a scarce complement to these factors.

The inputs in our model are traditional capital and labor and a relatively inelastic complement we dub ‘genius’ or G. When G is relatively abundant, the economy approximates a two-factor one. But as G becomes relatively scarce, it becomes a bottleneck for output and captures an increasing share of national income. We show that when traditional inputs are sufficiently complementary to G, innovations in automation technology can reduce both labor’s share of income and the interest rate.
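A minimal numerical sketch of this bottleneck logic (an illustration with assumed parameter values, not the paper's calibration): when an inelastically supplied G is a strong complement to the digitizable labor/capital composite X in a CES aggregate, multiplying X barely raises output, G's competitive income share climbs toward one, and the marginal return to X collapses.

```python
# Illustrative sketch only: Y = [a*X^rho + (1-a)*G^rho]^(1/rho), where X is the
# digitizable labor/capital composite and G is inelastically supplied "genius".
# rho < 0 makes the two complements; parameter values below are assumptions.

def ces_output(X, G, a=0.5, rho=-2.0):
    return (a * X**rho + (1 - a) * G**rho) ** (1 / rho)

def genius_share(X, G, a=0.5, rho=-2.0):
    # Competitive income share of G: (dY/dG) * G / Y
    g_term = (1 - a) * G**rho
    return g_term / (a * X**rho + g_term)

def x_return(X, G, a=0.5, rho=-2.0):
    # Marginal product of the digitizable composite, dY/dX
    Y = ces_output(X, G, a, rho)
    return a * X ** (rho - 1) * Y ** (1 - rho)

G = 1.0  # fixed, inelastically supplied
for X in (1, 2, 10, 100):  # digitization multiplies the X composite
    print(f"X={X:>3}: Y={ces_output(X, G):.3f}, "
          f"G share={genius_share(X, G):.3f}, dY/dX={x_return(X, G):.4f}")
```

In the sketch, output converges to the G-constrained ceiling while dY/dX falls rapidly; that falling return to the abundant composite is the analogue of stagnant wages and interest rates in the model.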

This theory fits what we know about the limitations of digital technologies, including cutting-edge AI. While general artificial intelligence might someday lead to an economic singularity, contemporary AI technologies have clear limitations, making humans indispensable for many essential tasks. Agrawal et al. (2018a) and Agrawal et al. (2018c) observe that AI is good at prediction tasks, but struggles with judgment – often a close complement. Brynjolfsson et al. (2018) create a rubric for assessing which tasks are suitable for machine learning and use it to evaluate the content of over 18,000 tasks described in O*NET. They find that while the new technology delivers super-human performance for some tasks, it is ineffective for many others. In particular, despite their many strengths, existing computer systems are weak or ineffective at tasks that involve significant creativity or large-scale problem solving. Even tasks amenable to automation may require large organizational investments before business processes can be automated.

The only essential feature of G in our model is that it is inelastically supplied, because, in part, it is not subject to digitization. For concreteness, our primary interpretation for G is superstar individuals. They may be exceptionally gifted with the ability to come up with an exciting new idea, sort through bad ideas for a diamond in the rough, or effectively manage a business. If these good ideas are owned by and accumulate within firms, they correspond to a kind of alienable genius.

...

Many have the sense that intangible assets and superstar workers are more abundant than ever. Perhaps the most surprising thing then about our result is that these factors are increasingly scarce. We contend that this is due to confusion between the value and importance of these inputs, which are increasing, and their relative abundance, which is decreasing.

Laterality, or left–right discrimination (LRD), is assumed to be innate or acquired early, but in one study, a majority of students scored less than 77% on an objective LRD test

Challenging assumptions of innateness – leave nothing unturned. Jason J Han & Neha Vapiwala. Medical Education, Mar 3 2019, https://doi.org/10.1111/medu.13824

It was once common in various academic fields to assume that individuals possess certain fundamental abilities or intuitions (e.g. the assumption of rationality in the fields of economics and social sciences).1 However, the past half-century has overseen a transition towards a different model of human cognition, one which acknowledges the human brain as complex machinery that is vulnerable to systematic errors.

The pioneers of this paradigm shift, Daniel Kahneman and Amos Tversky, attributed this to the co-existence of two processing mechanisms.2 They described the first, aptly named System 1, as the fast, automatic, intuitive, unconscious approach and the second (System 2) as the slower, more deliberate, analytical and conscious mode. The purpose of this categorisation was not to assign a hierarchy, but rather to acknowledge that both systems have their respective pros and cons depending on the task. System 1 is efficient but more error-prone. System 2 is more thorough but requires greater resources and quickly drains our working memory and attention, thereby making it, too, susceptible to error. In this issue of Medical Education, Gormley et al. juxtapose these two systems in the context of one of the most commonly performed mental tasks – our ability to discern laterality or left–right discrimination (LRD). This ability is particularly critical in medicine, as errors in LRD can lead to wrong diagnoses and interventions, and ultimately patient harm. The authors note that although LRD is often assumed to be innate or acquired during early stages of human development, in reality LRD is a complex neuropsychological process with which 17% of women and 9% of men have reported difficulty.3 Medical students are not exempt from this challenge. In one study, a majority of students scored less than 77% on an objective LRD test.4 In the interviews conducted by Gormley et al., students who had difficulty with LRD disclosed feelings of inadequacy, which led to greater efforts to conceal this difficulty and even influenced their career trajectories by steering them away from certain specialties. Undoubtedly, these findings have important implications for the medical education community, suggesting the need to overthrow assumptions that LRD is an innate human skill and to raise the importance of laterality training in the curriculum.5

This study inspires the realisation that no tacit assumption of innateness or intuitiveness should go unchecked. What else are we assuming is easy, innate or intuitive? The distinction between what is presumably innate and what merits attention and practice is somewhat arbitrary. Observing that we teach correct anatomic spatial orientation, such as anterior from posterior, superior from inferior, Gormley et al. asked, why not also left from right? Extrapolating further, we could apply the same line of questioning to other competencies in medical education, such as our ability to recognise personal cognitive biases or develop ‘soft’ skills such as empathy and clarity of communication. There are undoubtedly circumstances in which we assume we effectively and expertly broke bad news, disclosed error or obtained informed consent, but in the eyes of the patient our performance was lacking. As such, we can all stand to gain important insights into our own abilities with a more conscious and thoughtful approach.6,7

Short-run impacts of the 2018 trade war on the U.S. economy: Annual losses from higher costs, $68.8 bn (0.37% of GDP); after tariff revenue & gains to producers, welfare loss is $6.4 bn (0.03% of GDP)

The Return to Protectionism. Pablo D. Fajgelbaum, Pinelopi K. Goldberg, Patrick J. Kennedy, and Amit K. Khandelwal. Working Paper, Mar 2019, http://www.econ.ucla.edu/pfajgelbaum/RTP1.pdf

Abstract: We analyze the short-run impacts of the 2018 trade war on the U.S. economy. We estimate import demand and export supply elasticities using changes in U.S. and retaliatory war tariffs over time. Imports from targeted countries decline 31.5% within products, while targeted U.S. exports fall 9.5%. We find complete pass-through of U.S. tariffs to variety-level import prices, and compute the aggregate and regional impacts of the war in a general equilibrium framework that matches these elasticities. Annual losses from higher costs of imports are $68.8 billion (0.37% of GDP). After accounting for higher tariff revenue and gains to domestic producers from higher prices, the aggregate welfare loss is $6.4 billion (0.03% of GDP). U.S. tariffs favored sectors located in politically competitive counties, suggesting an ex ante rationale for the tariffs, but retaliatory tariffs offset the benefits to these counties. Tradeable-sector workers in heavily Republican counties are the most negatively affected by the trade war.
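The welfare arithmetic behind the headline numbers can be checked in a few lines; only the two figures reported in the abstract are used, and the split between tariff revenue and producer gains (which the abstract does not report) is left as a single derived sum.

```python
# Back-of-envelope accounting from the abstract's reported figures only.
consumer_cost = 68.8        # $bn per year: higher import costs to U.S. buyers
net_welfare_loss = 6.4      # $bn per year: aggregate loss after transfers

recaptured = consumer_cost - net_welfare_loss   # tariff revenue + producer gains (sum only)
implied_gdp = consumer_cost / 0.0037            # GDP base implied by "0.37% of GDP"

print(f"Recaptured as tariff revenue + producer gains: ${recaptured:.1f}bn")
print(f"Implied GDP base: ${implied_gdp / 1000:.1f}tn")
print(f"Net loss as share of GDP: {net_welfare_loss / implied_gdp:.2%}")   # ~0.03%
```

Most of the $68.8 billion is thus a transfer within the U.S. economy rather than a deadweight loss, which is why the aggregate figure is so much smaller.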


Check also Krugman on Sep 2018: Trump’s tariffs really are a big, bad deal. Their direct economic impact ***will be modest*** (?!), although hardly trivial https://www.bipartisanalliance.com/2018/09/krugman-trumps-tariffs-really-are-big.html

Sunday, March 3, 2019

Orgasms with a partner were associated with the perception of favorable sleep outcomes; orgasms achieved through masturbation were associated with the perception of better sleep quality & latency

Sex and Sleep: Perceptions of Sex as a Sleep Promoting Behavior in the General Adult Population. Michele Lastella et al. Front. Public Health, March 04 2019. https://doi.org/10.3389/fpubh.2019.00033

Objective: The main aim of this study was to explore the perceived relationship between sexual activities, sleep quality, and sleep latency in the general adult population and identify whether any gender differences exist.

Participants/methods: We used a cross-sectional survey to examine the perceived relationship between sexual activity and subsequent sleep in the general adult population. Seven-hundred and seventy-eight participants (442 females, 336 males; mean age 34.5 ± 11.4 years) volunteered to complete an online anonymous survey at their convenience.

Statistical Analyses: Chi square analyses were conducted to examine if there were any gender differences between sexual activities [i.e., masturbation (self-stimulation), sex with a partner without orgasm, and sex with a partner with orgasm] and self-reported sleep.

Results: There were no gender differences in sleep (quality and onset) between males and females when reporting that sex with a partner [χ2(2) = 2.20, p = 0.332; χ2(2) = 5.73, p = 0.057] or masturbation (self-stimulation) [χ2(2) = 1.34, p = 0.513; χ2(2) = 0.89, p = 0.640] involved an orgasm.

Conclusions: Orgasms with a partner were associated with the perception of favorable sleep outcomes; however, orgasms achieved through masturbation (self-stimulation) were associated with the perception of better sleep quality and latency. These findings indicate that the public perceive that sexual activity with orgasm precedes improved sleep outcomes. Promoting safe sexual activity before bed may offer a novel behavioral strategy for promoting sleep.

The informal economy share rises after reaching a high GDP... Norway has a bigger shadow economy than the US

Nonlinearity Between the Shadow Economy and Level of Development. Dong Frank Wu, Friedrich Schneider. IMF Working Paper No. 19/48, Mar 2019. https://www.imf.org/en/Publications/WP/Issues/2019/03/01/Nonlinearity-Between-the-Shadow-Economy-and-Level-of-Development-46618

Summary: This paper is the first attempt to directly explore the long-run nonlinear relationship between the shadow economy and level of development. Using a dataset of 158 countries over the period from 1996 to 2015, our results reveal a robust U-shaped relationship between the shadow economy size and GDP per capita. Our results imply that the shadow economy tends to increase, or at least does not disappear, when economic development surpasses a given threshold. Our findings suggest that special attention should be given to the country’s level of development when designing policies to tackle issues related to the shadow economy.

---
The paper also seeks to identify the potential factors which boost GDP per capita. Consistent with the growth literature, we find that educational attainment plays a vital role in improving GDP per capita, especially a college degree or above. This result helps shed some light on a possible mechanism of a U-shaped pattern at the micro level. From the individual perspective, people work to make themselves better off. When the level of development is low, education helps build up labor productivity, and skilled workers with college education or above choose to stay in the formal sector to enjoy the benefits of a high-productivity position and the social security net. When the economy advances to a new level at which the income of skilled workers becomes high enough and one household member can easily cover the whole family’s daily expenses, demand for informal work is likely to increase due to work flexibility or other desirable perks. Hence the size of the shadow economy reverses its downtrend.
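For intuition about how such a U-shape and its threshold are typically characterized, here is a sketch using a quadratic-in-logs specification. The coefficients are hypothetical, chosen only to produce a plausible-looking U; they are not the paper's estimates.

```python
import math

# Hypothetical coefficients for: shadow share (%) = b0 + b1*ln(y) + b2*ln(y)^2
# With b2 > 0 the share falls with income, bottoms out, then rises again.
b0, b1, b2 = 300.0, -55.0, 2.6

def shadow_share(gdp_per_capita):
    x = math.log(gdp_per_capita)
    return b0 + b1 * x + b2 * x ** 2

turning_point = math.exp(-b1 / (2 * b2))   # income level where the share bottoms out
for y in (1_000, 10_000, 40_000, 80_000):
    print(f"GDP per capita ${y:>6,}: shadow share ~ {shadow_share(y):.1f}%")
print(f"Implied turning point: ~${turning_point:,.0f} per capita")
```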


Check also the paper that found that taxpayers’ attitudes toward evasion are not predictive of behavior & that tax compliance is not related to trust in government or one’s fellow citizens; Danes are more likely to evade tax than Italians; at the same time, Danes are less tolerant of tax evasion by others:
Willing to Evade: An Experimental Study of Italy and Denmark. Alice Guerra and Brooke Harrington. Copenhagen Business School, Department of Business and Politics. https://www.bipartisanalliance.com/2018/09/found-that-taxpayers-attitudes-toward.html

The organic label leads to an underestimation of caloric value and more consumption; this effect is not moderated by implicit evaluations, but may instead reflect a semantic association between the concepts “organic” & “non-caloric”

The calories underestimation of “organic” food: Exploring the impact of implicit evaluations. Theo Besson et al. Appetite, https://doi.org/10.1016/j.appet.2019.02.019

Abstract: Specific attributes of a food product can cause it to be spontaneously but wrongly perceived as healthier than it really is (i.e., the health halo effect). Notably, there is preliminary evidence that individuals evaluate organic food as less caloric than regular, non-organic food. However, explanations regarding the cognitive mechanisms underlying the health halo effect remain scarce. Drawing from the implicit cognition literature, we hypothesize that this effect could be due to (a) the reactivation in memory of implicit positive evaluations and/or (b) the reactivation of a semantic association between the concepts “organic” and “non-caloric”. We first conducted a 2 (Product label: organic versus non-organic) × continuous (Valence-IAT score) × continuous (Calorie-IAT score) study (N = 151) to test these hypotheses, and conducted a conceptual replication in a second study (N = 269). We computed Bayesian analyses alongside frequentist analyses in order to test for potential null hypotheses, as well as frequencies and Bayesian meta-regression including both datasets. Both methods provided consistent results. First, Bayesian analyses yielded extremely strong evidence in favor of the hypothesis that the organic label leads to an underestimation of caloric value. Second, they provided strong evidence that this effect is not moderated by implicit evaluations. Hence, we replicated the organic halo effect but showed that, surprisingly, it does not arise from implicit associations. We discuss these findings and propose directions for future research regarding the mechanisms underlying calories (under)estimation.

Religions that lose strength... Tyler Cowen's comments on Jana Riess's The Next Mormons: How Millennials are Changing the LDS Church

Jana Riess, The Next Mormons: How Millennials are Changing the LDS Church. 2019. Comments by Tyler Cowen, Mar 02 2019, https://marginalrevolution.com/marginalrevolution/2019/03/the-mormon-asymptote.html:
...compared to some other religions, Mormonism is not doing too badly.  Mormonism's US growth rate of .75 percent in 2017 -- kept in positive territory by still-higher-than-average fertility among Mormons -- is actually somewhat enviable when compared to, for example, the once-thriving Southern Baptists, who have bled out more than a million members in the last ten years.  Mormonism is not yet declining in membership, but it has entered a period of decelerated growth.  In terms of congregational expansion, the LDS Church in the United States added only sixty-five new congregations in 2016, for an increase of half a percentage point.  In 2017, the church created 184 new wards and branches in the United States, but 184 units also closed, resulting in no net gain at all.

By some estimates (p.7), only about 30 percent of young single Mormons in the United States go to church regularly.  The idea of the Mormon mission, however, is rising in import:
More than half of Mormon Millennials have served a full-time mission (55 percent), which is clearly the highest proportion of any generation; among GenXers, 40 percent served, and in the Boomer/Silent generation, it was 28 percent.

In contrast, "returning to the temple on behalf of the deceased" is falling (p.54).

Mormons are about a third more likely to be married than the general U.S. population, 66 to 48 percent.  But note that 23 percent of Mormon Millennials admit to having a tattoo, against a recommended rate of zero (p.162).

And ex-Mormon snowflakes seem to be proliferating.  For GenX, the single biggest reason given for leaving the church was "Stopped believing there was one church".  For Millennials, it is (sadly) "Felt judged or misunderstood."

Check also Crawfurd, Lee. 2019. “Does Temporary Migration from Rich to Poor Countries Cause Commitment to Development? Evidence from Quasi-random Mormon Mission Assignments.” SocArXiv. January 10. https://www.bipartisanalliance.com/2019/01/assignment-to-region-in-global-south.html

Now we know why we are smart & prone to addiction too: Ants defend plants they feed on more aggressively against herbivore competitors (termites) when the plant's artificial nectaries offer more nutritious food

Nectar quality affects ant aggressiveness and biotic defense provided to plants. Fábio T. Pacelhe et al. Biotropica, Feb 27 2019, https://doi.org/10.1111/btp.12625

Abstract: Ant–plant mutualisms are useful models for investigating how plant traits mediate interspecific interactions. As plant‐derived resources are essential components of ant diets, plants that offer more nutritious food to ants should be better defended in return, as a result of more aggressive behavior toward natural enemies. We tested this hypothesis in a field experiment by adding artificial nectaries to individuals of the species Vochysia elliptica (Vochysiaceae). Ants were offered one of four liquid foods of different nutritional quality: amino acids, sugar, sugar + amino acids, and water (control). We used live termites (Nasutitermes coxipoensis) as herbivore competitors and observed ant behavior toward them. In 88 hr of observations, we recorded 1,009 interactions with artificial nectaries involving 1,923 individual ants of 26 species. We recorded 381 encounters between ants and termites, of which 38% led to attack. Sixty‐one percent of these attacks led to termite exclusion from the plants. Recruitment and patrolling were highest when ants fed upon nectaries providing sugar + amino acids, the most nutritious food. This increase in recruitment and patrolling led to higher encounter rates between ants and termites, more frequent attacks, and faster and more complete termite removal. Our results are consistent with the hypothesis that plant biotic defense is mediated by resource quality. We highlight the importance of qualitative differences in nectar composition for the outcome of ant–plant interactions.

Tversky & Kahneman, 1973, on systematic biases... Availability: A heuristic for judging frequency and probability

From 1973... Availability: A heuristic for judging frequency and probability. Amos Tversky, Daniel Kahneman. Cognitive Psychology, Volume 5, Issue 2, September 1973, Pages 207-232. https://doi.org/10.1016/0010-0285(73)90033-9

Abstract: This paper explores a judgmental heuristic in which a person evaluates the frequency of classes or the probability of events by availability, i.e., by the ease with which relevant instances come to mind. In general, availability is correlated with ecological frequency, but it is also affected by other factors. Consequently, the reliance on the availability heuristic leads to systematic biases. Such biases are demonstrated in the judged frequency of classes of words, of combinatorial outcomes, and of repeated events. The phenomenon of illusory correlation is explained as an availability bias. The effects of the availability of incidents and scenarios on subjective probability are discussed.

---
Daniel Kahneman – Prize Lecture. NobelPrize.org. Nobel Media AB 2019. Mar 03 2019. https://www.nobelprize.org/prizes/economic-sciences/2002/kahneman/lecture

"Remarkably, the intuitive judgments of these experts did not conform to statistical principles with which they were thoroughly familiar. In particular, their intuitive statistical inferences and their estimates of statistical power showed a striking lack of sensitivity to the effects of sample size. We were impressed by the persistence of discrepancies between statistical intuition and statistical knowledge, which we observed both in ourselves and in our colleagues. We were also impressed by the fact that significant research decisions, such as the choice of sample size for an experiment, are routinely guided by the flawed intuitions of people who know better."

---
Check also, from William Niskanen's obituary at the Washington Post: William A. Niskanen Jr., economist and former Cato Institute chairman, dies. By T. Rees Shapiro. Washington Post, November 1, 2011. http://www.washingtonpost.com/local/obituaries/william-a-niskanen-jr-economist-and-cato-institute-chairman-dies/2011/10/31/gIQAuM1RaM_story.html

At Ford, Dr. Niskanen found, conformity was key. But it was a lesson Dr. Niskanen did not learn until 1980, when he was fired for breaking ranks with the executives.

During the 1970s, the nation's car industry was battered by rising gas prices. For Japanese manufacturers, touting smaller cars with fuel-sipping engines, American sales took off.

In late 1979, Ford begged for a government intervention, asking the International Trade Commission to impose quotas on Japanese cars.

[...]

Dr. Niskanen told Ford executives that the government could not cure the company's ills. Japan was not the problem, Dr. Niskanen told his bosses; they were.

[...]

Ford's real issue, Dr. Niskanen said, was bad product decisions.

Upon hearing his advice, Ford executives dismissed Dr. Niskanen.

"I was told, Bill, in general, people who do well in this company wait until they hear their superiors express their view and then contribute something in support of that view,・ Dr. Niskanen said in an 1980 interview with the Wall Street Journal. "That wasn", and isn't, my style."
---

Excellence is not enough: Most UK scientists who publish extremely highly-cited papers do not secure funding from major public and charity funders

Most UK scientists who publish extremely highly-cited papers do not secure funding from major public and charity funders: A descriptive analysis. Charitini Stavropoulou, Melek Somai, John P. A. Ioannidis. PLOS ONE, February 27, 2019. https://doi.org/10.1371/journal.pone.0211460

Abstract: The UK is one of the largest funders of health research in the world, but little is known about how health funding is spent. Our study explores whether major UK public and charitable health research funders support the research of UK-based scientists producing the most highly-cited research. To address this question, we searched for UK-based authors of peer-reviewed papers that were published between January 2006 and February 2018 and received over 1000 citations in Scopus. We explored whether these authors have held a grant from the National Institute for Health Research (NIHR), the Medical Research Council (MRC) and the Wellcome Trust and compared the results with UK-based researchers who serve currently on the boards of these bodies. From the 1,370 papers relevant to medical, biomedical, life and health sciences with more than 1000 citations in the period examined, we identified 223 individuals from a UK institution at the time of publication who were either first/last or single authors. Of those, 164 are still in UK academic institutions, while 59 are not currently in UK academia (have left the country, are retired, or work in other sectors). Of the 164 individuals, only 59 (36%; 95% CI: 29–43%) currently hold an active grant from one of the three funders. Only 79 (48%; 95% CI: 41–56%) have held an active grant from any of the three funders between 2006–2017. Conversely, 457 of the 664 board members of MRC, Wellcome Trust, and NIHR (69%; 95% CI: 65–72%) have held an active grant in the same period by any of these funders. Only 7 out of 655 board members (1.1%) were first, last or single authors of an extremely highly-cited paper. There are many reasons why the majority of the most influential UK authors do not hold a grant from the country’s major public and charitable funding bodies. Nevertheless, the results are worrisome and subscribe to similar patterns shown in the US. We discuss possible implications and suggest ways forward.
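A quick check of the headline proportion, using a normal-approximation interval (the authors' exact interval method may differ slightly):

```python
import math

# Wald 95% CI for the share of top-cited UK authors currently holding a grant.
funded, total = 59, 164
p = funded / total
se = math.sqrt(p * (1 - p) / total)
low, high = p - 1.96 * se, p + 1.96 * se
print(f"{p:.0%} (95% CI: {low:.0%}-{high:.0%})")   # ~36% (29%-43%), as reported
```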

Saturday, March 2, 2019

Political partisans disagreed about the importance of conditional probabilities; highly numerate partisans were more polarized than less numerate partisans

It depends: Partisan evaluation of conditional probability importance. Leaf Van Boven et al. Cognition, Mar 2 2019, https://doi.org/10.1016/j.cognition.2019.01.020

Highlights
•    Political partisans disagreed about the importance of conditional probabilities.
•    Supporters of restricting immigration and banning assault weapons favored uninformative “hit rates”.
•    Policy opponents favored normatively informative base rates and inverse conditionals.
•    Highly numerate partisans were more polarized than less numerate partisans.
•    Adopting an expert’s perspective reduced partisan differences.

Abstract: Policies to suppress rare events such as terrorism often restrict co-occurring categories such as Muslim immigration. Evaluating restrictive policies requires clear thinking about conditional probabilities. For example, terrorism is extremely rare. So even if most terrorist immigrants are Muslim—a high “hit rate”—the inverse conditional probability of Muslim immigrants being terrorists is extremely low. Yet the inverse conditional probability is more relevant to evaluating restrictive policies such as the threat of terrorism if Muslim immigration were restricted. We suggest that people engage in partisan evaluation of conditional probabilities, judging hit rates as more important when they support politically prescribed restrictive policies. In two studies, supporters of expelling asylum seekers from Tel Aviv, Israel, of banning Muslim immigration and travel to the United States, and of banning assault weapons judged “hit rate” probabilities (e.g., that terrorists are Muslims) as more important than did policy opponents, who judged the inverse conditional probabilities (e.g., that Muslims are terrorists) as more important. These partisan differences spanned restrictive policies favored by Rightists and Republicans (expelling asylum seekers and banning Muslim travel) and by Democrats (banning assault weapons). Inviting partisans to adopt an unbiased expert’s perspective partially reduced these partisan differences. In Study 2 (but not Study 1), partisan differences were larger among more numerate partisans, suggesting that numeracy supported motivated reasoning. These findings have implications for polarization, political judgment, and policy evaluation. Even when partisans agree about what the statistical facts are, they markedly disagree about the relevance of those statistical facts.
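The asymmetry between the "hit rate" and the inverse conditional is just Bayes' rule. A toy worked example, with all inputs hypothetical and chosen only for illustration:

```python
# Bayes'-rule illustration of how a high "hit rate" P(Muslim | terrorist) can
# coexist with a vanishingly small inverse probability P(terrorist | Muslim).
# All three inputs are hypothetical; the point is the arithmetic, not the values.
base_rate = 1e-6    # P(terrorist): terrorism is extremely rare
hit_rate = 0.9      # P(Muslim | terrorist): the salient "hit rate"
p_group = 0.1       # P(Muslim): share of the immigrant population

# P(terrorist | Muslim) = P(Muslim | terrorist) * P(terrorist) / P(Muslim)
inverse = hit_rate * base_rate / p_group
print(f"Hit rate: {hit_rate:.0%}; inverse conditional: {inverse:.7f} "
      f"(about {inverse * 1e6:.0f} in a million)")
```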

Check also: Biased Policy Professionals. Sheheryar Banuri, Stefan Dercon, and Varun Gauri. World Bank Policy Research Working Paper 8113. https://www.bipartisanalliance.com/2017/08/biased-policy-professionals-world-bank.html

And: Dispelling the Myth: Training in Education or Neuroscience Decreases but Does Not Eliminate Beliefs in Neuromyths. Kelly Macdonald et al. Frontiers in Psychology, Aug 10 2017. https://www.bipartisanalliance.com/2017/08/training-in-education-or-neuroscience.html

And: Wisdom and how to cultivate it: Review of emerging evidence for a constructivist model of wise thinking. Igor Grossmann. European Psychologist, in press. Pre-print: https://www.bipartisanalliance.com/2017/08/wisdom-and-how-to-cultivate-it-review.html

And: Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Caitlin Drummond and Baruch Fischhoff. Proceedings of the National Academy of Sciences, vol. 114 no. 36, pp 9587–9592, https://www.bipartisanalliance.com/2017/09/individuals-with-greater-science.html

And: Expert ability can actually impair the accuracy of expert perception when judging others' performance: Adaptation and fallibility in experts' judgments of novice performers. By Larson, J. S., & Billeter, D. M. (2017). Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(2), 271–288. https://www.bipartisanalliance.com/2017/06/expert-ability-can-actually-impair.html

One conclusion that can be drawn from cognitive psychology is that human beings generally perform poorly when thinking in probabilistic terms; acknowledging these human frailties, how can we compensate?

Collective Intelligence for Clinical Diagnosis—Are 2 (or 3) Heads Better Than 1? Stephan D. Fihn. JAMA Network Open. 2019;2(3):e191071, doi:10.1001/jamanetworkopen.2019.1071

Once upon a time, medical students were taught that the correct approach to diagnosis was to collect a standard, complete set of data and then, based on those data elements, create an exhaustive list of potential diagnoses. The final and most difficult step was then to take this list and engage in a systemic process of deductive reasoning to rule out possibilities until the 1 final diagnosis was established. Master clinicians modeled this process of differential diagnosis in the classic clinicopathologic conferences (CPCs) that were regularly held in most teaching hospitals and published regularly in medical journals. During the past several decades, the popularity of the CPC has faded under criticism that cases discussed were often atypical and the setting was artificial because bits of data were doled out to discussants in a sequential fashion that did not mirror actual clinical practice. Moreover, they came to be seen more as theatrical events than meaningful teaching exercises.

The major reason for the demise of the CPC, however, was that it became apparent that master clinicians did not actually think in this manner at all. Medical educators who carefully observed astute clinicians found that the clinicians began generating hypotheses during the first few moments of an encounter and iteratively updated them while limiting the number of possibilities being entertained to no more than 5 to 7.1 They also found that even the notion of a master clinician is often illusory because diagnostic accuracy is largely a function of knowledge and experience within a specific domain (or set of domains) as opposed to general brilliance as a diagnostician.

This shift in understanding how physicians think developed in parallel with the growth of cognitive psychology, which focuses on how we process and respond to information. As we confront similar situations over time, the brain develops shortcuts known as heuristics that simplify problems and facilitate prompt and efficient responses. Without these heuristics, we would be forced to adopt a CPC approach to the myriad decisions we all face in everyday life, which would be exhausting and paralyzing. Because they are simplifications, these heuristics are subject to error. Research during the past several decades has revealed that although we maintain a Cartesian vision of ourselves as logical creatures, we are all, in fact, subject to a host of biases that distort our perceptions and lead us to make irrational decisions. Many of these have been cataloged by Amos Tversky, PhD, and Daniel Kahneman, PhD, such as recency bias (overweighting recent events compared with distant ones), framing effects (drawing different conclusions from the same information, depending on how it is presented), primacy bias (being influenced more by information presented earlier than later), anchoring (focusing on a piece of information and discounting the rest), and confirmation bias (placing undue emphasis on information consistent with a preconception).2 These perceptual misrepresentations lead to predictable mistakes such as overestimating the frequency of rare events when they are highly visible; underestimating the frequency of common, mundane events; and seeing patterns where none exist. Understanding these quirks underpins the emerging field of behavioral economics, which helps to explain how markets behave but also enables commercial and political entities to manipulate our opinions, sometimes in perverse ways.

One conclusion that can be drawn from cognitive psychology is that human beings generally perform poorly when thinking in probabilistic terms. Naturally, this has grave implications for our ability to function as good diagnosticians. A growing literature suggests that diagnostic error is common and can lead, not unexpectedly, to harm.3

Acknowledging these human frailties, how can we compensate? One potential solution is to harness the power of computers. [...]


Gay men seem more satisfied with their job than other men; lesbians appear less satisfied with their job than other women; reason could be discrimination, which may lead gay men to have low expectations

(I can’t get no) job satisfaction? Differences by sexual orientation in Sweden. Lina Alden et al. Linnaeus University, 2018. http://www.diva-portal.org/smash/get/diva2:1291798/FULLTEXT01.pdf

Abstract: We present results from a unique nationwide survey conducted in Sweden on sexual orientation and job satisfaction. Our results show that gay men, on average, seem more satisfied with their job than heterosexual men; lesbians appear less satisfied with their job than heterosexual women. However, the issue of sexual orientation and job satisfaction is complex since gay men, despite their high degree of job satisfaction, like lesbians find their job more mentally straining than heterosexuals. We conclude that gay men and lesbians are facing other stressors at work than heterosexuals do. We also conclude that discrimination and prejudice may lead gay men to have low expectations about their job; these low expectations may translate into high job satisfaction. In contrast, prejudice and discrimination may hinder lesbians from realizing their career plans, resulting in low job satisfaction.

Keywords: Job satisfaction, sexual orientation

People believe they value their minds more than other people value theirs, and that they value their bodies less

Jordan, M. R., Gebert, T., & Looser, C. E. (2019). Perspective taking failures in the valuation of mind and body. Journal of Experimental Psychology: General, 148(3), 407-420. http://dx.doi.org/10.1037/xge0000571

Abstract: Accurately inferring the values and preferences of others is crucial for successful social interactions. Nevertheless, without direct access to others’ minds, perspective taking errors are common. Across 5 studies, we demonstrate a systematic perspective taking failure: People believe they value their minds more than others do and often believe they value their bodies less than others do. The bias manifests across a variety of domains and measures, from judgments about the severity of injuries to preferences for new abilities to assessments of how much one is defined by their mind and body. This perspective taking failure was diminished—but still present—when participants thought of a close other. Finally, we assess and find evidence for the notion that this perspective taking failure is a function of the fact that others’ minds are less salient than others’ bodies. It appears to be the case that people believe the most salient cue from a target is also the best indicator of their values and preferences. This bias has implications for the ways in which we create social policy, judge others’ actions, make choices on behalf of others, and allocate resources to the physically and mentally ill.

Asymmetry in individuals’ willingness to venture into cross-cutting spaces, with conservatives more likely to follow media and political accounts classified as left-leaning than the reverse

How Many People Live in Political Bubbles on Social Media? Evidence From Linked Survey and Twitter Data. Gregory Eady et al. SAGE Open, February 28, 2019. https://doi.org/10.1177/2158244019832705

Abstract: A major point of debate in the study of the Internet and politics is the extent to which social media platforms encourage citizens to inhabit online “bubbles” or “echo chambers,” exposed primarily to ideologically congenial political information. To investigate this question, we link a representative survey of Americans with data from respondents’ public Twitter accounts (N = 1,496). We then quantify the ideological distributions of users’ online political and media environments by merging validated estimates of user ideology with the full set of accounts followed by our survey respondents (N = 642,345) and the available tweets posted by those accounts (N ~ 1.2 billion). We study the extent to which liberals and conservatives encounter counter-attitudinal messages in two distinct ways: (a) by the accounts they follow and (b) by the tweets they receive from those accounts, either directly or indirectly (via retweets). More than a third of respondents do not follow any media sources, but among those who do, we find a substantial amount of overlap (51%) in the ideological distributions of accounts followed by users on opposite ends of the political spectrum. At the same time, however, we find asymmetries in individuals’ willingness to venture into cross-cutting spaces, with conservatives more likely to follow media and political accounts classified as left-leaning than the reverse. Finally, we argue that such choices are likely tempered by online news watching behavior.

Keywords: media consumption, media & society, mass communication, communication, social sciences, political communication, new media, communication technologies, political behavior, political science

Friday, March 1, 2019

Partisans overestimate the negative affect that results from exposure to opposing views; the affective forecasting error derives from underestimation of agreement

Selective exposure partly relies on faulty affective forecasts. Charles A. Dorison, Julia A. Minson, Todd Rogers. Cognition, https://doi.org/10.1016/j.cognition.2019.02.010

Highlights
• Partisans overestimate the negative affect that results from exposure to opposing views.
• The affective forecasting error derives from underestimation of agreement.
• Faulty affective forecasts partially underpin selective exposure.

Abstract: People preferentially consume information that aligns with their prior beliefs, contributing to polarization and undermining democracy. Five studies (collective N = 2455) demonstrate that such “selective exposure” partly stems from faulty affective forecasts. Specifically, political partisans systematically overestimate the strength of negative affect that results from exposure to opposing views. In turn, these incorrect forecasts drive information consumption choices. Clinton voters overestimated the negative affect they would experience from watching President Trump’s Inaugural Address (Study 1) and from reading statements written by Trump voters (Study 2). Democrats and Republicans overestimated the negative affect they would experience from listening to speeches by opposing-party senators (Study 3). People’s tendency to underestimate the extent to which they agree with opponents’ views drove the affective forecasting error. Finally, correcting biased affective forecasts reduced selective exposure by 24–34% (Studies 4 and 5).

Keywords: Selective exposure, Affective forecasting, False polarization, Emotion

---
However, despite the benefits of holding accurate beliefs, the phenomenon of selective exposure to agreeing information has been well documented in social psychology (Frey, 1986), political science (Iyengar & Hahn, 2009; Sears & Freedman, 1967), and communications (Stroud, 2008). For example, one of the earliest studies on selective exposure demonstrated that mothers were more likely to listen to arguments that supported their beliefs regarding hereditary and environmental factors in childrearing than arguments that contradicted their beliefs (Adams, 1961). More recently, in the domain of political communication, conservatives in an experiment preferred to read articles from the conservative site Fox News, whereas liberals preferred to read articles from more liberal sources such as CNN and NPR (Iyengar & Hahn, 2009). These effects persist even with financial incentives on the line (Frimer, Skitka, & Motyl, 2017). Recent research has also examined how presentation order and structure moderate this phenomenon (Fischer et al., 2011; Jonas, Schulz-Hardt, Frey, & Thelen, 2001).

Implicit Association Test showed a self-other asymmetry: people perceived a desirable IAT result to be more valid when it applied to themselves than to others, & the opposite held for undesirable IAT results

Mendonça, C., Mata, A., & Vohs, K. D. (2019). Self-other asymmetries in the perceived validity of the implicit association test. Journal of Experimental Psychology: Applied, http://dx.doi.org/10.1037/xap0000214

Abstract: The Implicit Association Test (IAT) is the most popular instrument in implicit social cognition, with some scholars and practitioners calling for its use in applied settings. Yet, little is known about how people perceive the test’s validity as a measure of their true attitudes toward members of other groups. Four experiments manipulated the desirability of the IAT’s result and whether that result referred to one’s own attitudes or other people’s. Results showed a self-other asymmetry, such that people perceived a desirable IAT result to be more valid when it applied to themselves than to others, whereas the opposite held for undesirable IAT results. A fifth experiment demonstrated that these self-other differences influence how people react to the idea of using the IAT as a personnel selection tool. Experiment 6 tested whether the self-other effect was driven by motivation or expectations, finding evidence for motivated reasoning. All told, the current findings suggest potential barriers to implementing the IAT in applied settings.


Since most drivers believe they are better than average drivers, the benchmark of achieving automation that is safer than an average human driver is not acceptably safe performance for most

Safer than the average human driver (who is less safe than me)? Examining a popular safety benchmark for self-driving cars. Michael A. Nees. Journal of Safety Research, https://doi.org/10.1016/j.jsr.2019.02.002

Highlights
•    The criterion of being safer than a human driver has become pervasive in the discourse on vehicle automation.
•    Most drivers perceive themselves to be safer than the average driver (the better-than-average effect).
•    This study replicated the better than average effect and showed that most drivers stated a desire for self-driving cars that are safer than their own perceived ability to drive safely.
•    Since most drivers believe they are better than average drivers, the benchmark of achieving automation that is safer than a human driver (on average) may not represent acceptably safe performance of self-driving cars for most drivers.

Abstract: Although the level of safety required before drivers will accept self-driving cars is not clear, the criterion of being safer than a human driver has become pervasive in the discourse on vehicle automation. This criterion actually means “safer than the average human driver,” because it is necessarily defined with respect to population-level data. At the level of individual risk assessment, a body of research has shown that most drivers perceive themselves to be safer than the average driver (the better-than-average effect). Using an online sample of U.S. drivers, this study replicated the better than average effect and showed that most drivers stated a desire for self-driving cars that are safer than their own perceived ability to drive safely before they would: (1) feel reasonably safe riding in a self-driving vehicle; (2) buy a self-driving vehicle, all other things (cost, etc.) being equal; and (3) allow self-driving vehicles on public roads. Since most drivers believe they are better than average drivers, the benchmark of achieving automation that is safer than a human driver (on average) may not represent acceptably safe performance of self-driving cars for most drivers.

Thursday, February 28, 2019

Effect of olfactory disgust: Disgust might hamper behavioral actions motivated by sexual arousal (e.g., poor judgment, coercive sexual behavior)

The influence of olfactory disgust on (Genital) sexual arousal in men. Charmaine Borg, Tamara A. Oosterwijk, Dominika Lisy, Sanne Boesveldt, Peter J. de Jong. PLOS ONE, February 28, 2019. https://doi.org/10.1371/journal.pone.0213059

Abstract
Background: The generation or persistence of sexual arousal may be compromised when inhibitory processes, such as negative emotions, outweigh sexual excitation. Disgust, in particular, has been proposed as one of the emotions that may counteract sexual arousal. In support of this view, previous research has shown that disgust priming can reduce subsequent sexual arousal. As a crucial next step, this experimental study tested whether disgust (by means of odor) can also diminish sexual arousal in individuals who are already in a state of heightened sexual excitation.

Methodology: In this study, participants were all men (N = 78). To elicit sexual arousal, participants watched a pornographic video. Following 4.30 minutes from the start of the video clip, they were exposed to either a highly aversive/disgusting odor (n = 42), or an odorless diluent/solvent (n = 36), that was delivered via an olfactometer, while the pornographic video continued. In both conditions the presentation of the odor lasted 1 second and was repeated 11 times with intervals of 26 seconds. Sexual arousal was indexed by both self-reports and penile circumference.

Principal findings: The disgusting odor (released when the participants were already sexually aroused) resulted in a significant decrease of both subjective and genital sexual arousal compared to the control (odorless) condition.

Significance: The finding that the inhibitory effect of disgust was not only expressed in self-report but also expressed on the penile response further strengthens the idea that disgust might hamper behavioral actions motivated by sexual arousal (e.g., poor judgment, coercive sexual behavior). Thus, the current findings indicate that exposure to an aversive odor is sufficiently potent to reduce already present (subjective and) genital sexual arousal. This finding may also have practical relevance for disgust to be used as a tool for self-defence (e.g., Invi Bracelet).

Nations that scored higher on democracy indices, especially emerging ones, experienced increased mortality due to violence; women possessed higher rates of homicide & suicide in democracies

Government political structure and gender differences in violent death: A longitudinal analysis of forty-three countries, 1960–2008. Morkeh Blay-Tofey et al. Aggression and Violent Behavior, Feb 28 2019. https://doi.org/10.1016/j.avb.2019.02.011

Highlights
• The purpose of this study is to examine the effect of democracy on violent death rates (homicide, suicide, and combined) by gender (men and women).
• Multi-level regression analyses examined associations between regime-type characteristics and logged rates of violent deaths using homicide and suicide. Models were adjusted for unemployment and economic inequality
• Violent deaths appear to be more prevalent even in stable democracies, and women are more affected than men.
• Although the analysis provided depicts a strong picture anchored in regime type changes and violent death rates, violence is inherently complex and more research is needed to determine what aspects within democracies may lead to increased violent death rates.

Abstract
Objectives: Little global and longitudinal scholarship exists on the relationship between regime type and mortality on a global level. The purpose of this study is to examine the effect of democracy on violent death rates (homicide, suicide, and combined) by gender (men and women).

Methods: Three measures of democracy were used to quantify regime type. Homicide and suicide rates were obtained from the World Health Organization. Multi-level regression analyses examined associations between regime characteristics and logged rates of homicide, suicide, and violent deaths. Models were adjusted for unemployment and economic inequality.

Results: Nations that scored higher on democracy indices, especially emerging democracies, experienced increased mortality due to violence. Women possessed higher rates of homicide and suicide in democracies compared to men.

Conclusions: Violent deaths appear to be more prevalent even in stable democracies, and women are more affected than men. This overturns the common assumption that democracies bring greater equality, and therefore lower death rates over long-term. Future analyses might examine the aspects of democracies that lead to higher rates of violent death so as to help mitigate them.

Keywords: Homicide, suicide, violence, democracy, autocracy, regime, gender

Males from Drosophila m. populations with higher competitive mating success produce sons with lower fitness; male investment in enhanced mating success comes at the cost of reduced offspring quality

Males from populations with higher competitive mating success produce sons with lower fitness. Trinh T. X. Nguyen, Amanda J. Moehring. Journal of Evolutionary Biology, Feb 27 2019, https://doi.org/10.1111/jeb.13433

Abstract: Female mate choice can result in direct benefits to the female or indirect benefits through her offspring. Females can increase their fitness by mating with males whose genes encode increased survivorship and reproductive output. Alternatively, male investment in enhanced mating success may come at the cost of reduced investment in offspring fitness. Here, we measure male mating success in a mating arena that allows for male‐male, male‐female, and female‐female interactions in Drosophila melanogaster. We then use isofemale line population measurements to correlate male mating success with sperm competitive ability, the number of offspring produced, and the indirect benefits of the number of offspring produced by daughters and sons. We find that males from populations that gain more copulations do not increase female fitness through increased offspring production, nor do these males fare better in sperm competition. Instead, we find that these populations have a reduced reproductive output of sons, indicating a potential reproductive trade‐off between male mating success and offspring quality.

The wrong belief in the exceptionalism of human cortex has caused researchers to prematurely assign functions distributed widely in the brain to the cortex, & to fail to explore subcortical sources of brain evolution, inter alia

Human exceptionalism, our ordinary cortex and our research futures. Barbara L. Finlay. Developmental Psychobiology, February 27 2019, https://doi.org/10.1002/dev.21838

Abstract: The widely held belief that the human cortex is exceptionally large for our brain size is wrong, resulting from basic errors in how best to compare evolving brains. This misapprehension arises from the comparison of only a few laboratory species, failure to appreciate differences in brain scaling in rodents versus primates, but most important, the false assumption that linear extrapolation can be used to predict changes from small to large brains. Belief in the exceptionalism of human cortex has propagated itself into genomic analysis of the cortex, where cortex has been studied as if it were an example of innovation rather than predictable scaling. Further, this belief has caused both neuroscientists and psychologists to prematurely assign functions distributed widely in the brain to the cortex, to fail to explore subcortical sources of brain evolution, and to neglect genuinely novel features of human infancy and childhood.


“Dysrationalia” Among University Students: Intelligence & rational thinking, although related, represent two fundamentally different constructs; “dysrationalia” is the inability to think rationally despite adequate intelligence

“Dysrationalia” Among University Students: The Role of Cognitive Abilities, Different Aspects of Rational Thought and Self-Control in Explaining Epistemically Suspect Beliefs
Nikola Erceg, Zvonimir Galić, Andreja Bubić. Europe's Journal of Psychology, Vol 15, No 1 (2019), https://ejop.psychopen.eu/article/view/1696

Abstract: The aim of the study was to investigate the role that cognitive abilities, rational thinking abilities, cognitive styles and self-control play in explaining the endorsement of epistemically suspect beliefs among university students. A total of 159 students participated in the study. We found that different aspects of rational thought (i.e. rational thinking abilities and cognitive styles) and self-control, but not intelligence, significantly predicted the endorsement of epistemically suspect beliefs. Based on these findings, it may be suggested that intelligence and rational thinking, although related, represent two fundamentally different constructs. Thus, deviations from rational thinking could be well described by the term “dysrationalia”, meaning the inability to think rationally despite having adequate intelligence. We discuss the implications of the results, as well as some drawbacks of the study.

Keywords: dysrationalia; epistemically suspect beliefs; cognitive abilities; rational thinking; self-control

Low replicability damages public trust in psychology; neither information about increased transparency nor explanations for low replicability, nor recovered replicability repaired public trust

Wingen, Tobias, Jana Berkessel, and Birte Englich. 2019. “No Replication, No Trust? How Low Replicability Influences Trust in Psychology.” OSF Preprints. February 22. doi:10.31219/osf.io/4ukq5

Abstract: In the current psychological debate, low replicability of psychological findings is the central topic. While this discussion about the replication crisis has a huge impact on psychological research, we know less about how it impacts lay people’s trust in psychology. In the current paper, we examine whether low replicability damages public trust in psychology and whether this damaged trust can be repaired. Study 1 and 2 provide correlational and experimental evidence that low replicability reduces public trust in psychological science. Additionally, Studies 3, 4, and 5 evaluate whether and how damaged trust in psychological science could be repaired. Critically, neither information about increased transparency (Study 3), nor explanations for low replicability (either QRPs or hidden moderators; Study 4), nor recovered replicability (Study 5) repaired public trust. Overall, our studies highlight the crucial importance of replicability for public trust, as well as the importance of balanced communication of low replicability.

It is unlikely that we will find strong relationships between what individuals are reporting about themselves and how they objectively behave

The Challenges and Opportunities of Small Effects: The New Normal in Academic Psychiatry. Martin P. Paulus, Wesley K. Thompson. JAMA Psychiatry, February 27, 2019. doi:10.1001/jamapsychiatry.2018.4540

Full text in the link above.

Explanations and accurate predictions are the fundamental deliverables for a mechanistic or pragmatic approach that academic psychiatric research can provide to stakeholders. Starting with this issue, we are publishing a series of Viewpoints describing the research boundaries and challenges to progress in our field. In this issue, Simon1 raises the need for better explanatory models using data from electronic health records. This Viewpoint acknowledges an important issue: variables or constructs that are used to help explain the current state of individuals or to generate predictions need to account for a substantial proportion of the variance of the dependent variable or outcome measure to be clinically useful. However, similar to findings from the genetics literature, systems neuroscience approaches using brain imaging are beginning to show that variability in structural and functional brain imaging only accounts for a small percentage of the explained variance when considering a variety of clinical phenotypes, especially in large population-representative samples.2 For example, in a 2016 analysis of UK Biobank data,3 the functional activation related to a face processing task, which activated the fusiform gyrus and amygdala, accounted for a maximum of 1.8% of the variance of 1100 nonimaging variables. These findings are in line with emerging results from the Adolescent Brain Cognitive Development study4 focused on the association between screen media behavior and structural MRI characteristics. Importantly, these large-scale studies have used robust and reliable estimators to reduce false-positive discoveries. Thus, similar to the genetics literature, it appears that individual processing differences as measured by neuroimaging account for little symptomatic or behavioral variance.
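
To put the 1.8% figure in perspective: variance explained is the square of the correlation coefficient, so even the strongest of those brain–behavior associations corresponds to a modest correlation of roughly

r = \sqrt{R^2} = \sqrt{0.018} \approx 0.13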

There is evidence that the association between individual variation on self-assessed symptoms and behavioral performance on neurocognitive tasks is weak.5,6 Moreover, many behavioral tasks show limited test-retest reliability and little agreement between task conceptualization and the latent variables that actually emerge from these tasks. Therefore, it is unlikely that we will find strong relationships between what individuals are reporting about themselves and how they objectively behave. It seems that the individual experience of a person with a mental health condition, which has been proposed to be an important end point for explanatory approaches,7 is not well approximated by the behavioral probes that are currently available.

These and other findings have profound implications for our theoretical understanding of psychiatric diseases. Specifically, small effect sizes make it unlikely that psychiatric disorders can be explained by unicausal or oligocausal theories. In other words, there is not going to be a unifying glutamatergic or inflammatory disease model of mood disorders. What’s more, even if there is a relationship between markers for these disease processes and the state of a psychiatric disorder, as currently conceived, it may not be sufficiently strong to be used by itself to make useful person-level predictions. This is not to say that these processes are not contributing to the etiology or pathophysiology of the disorder but rather that their impact is likely to be small so as to not be individually useful in helping patients and other stakeholders explain their current disease state. As a consequence, there is a low probability of a generic disease process for a group of psychiatric disorders or a final common pathway for a disease.

One possible reason for the lack of a strong relationship between units of analyses, ie, between brain circuits and behavior or behavior and symptoms, is many-to-one and one-to-many mapping. In other words, the brain has many ways of producing the same symptoms, and very similar brain dysfunctions can produce a number of different clinical symptoms. An example of one-to-many mapping is the phenotypic heterogeneity of Huntington disease, which, as an autosomal dominant disorder, has a simple genetic basis but enormous clinical variability via the modulation of multiple biochemical pathways.8 In comparison, the clinical homogeneity of motor neuron disease is betrayed by a significant genetic variability, leading to similar symptoms.9 Therefore, it is quite possible that phenotypically similar groups result from different processes and phenotypically heterogeneous individuals actually share broadly similar underlying pathophysiology.

These many-to-one and one-to-many mappings put a profound strain on case-control studies, ie, comparing individuals diagnosed with a particular psychiatric disease with controls that are matched on a limited number of variables. Case-control designs have very limited explanatory depth and are fundamentally uninformative of the disease process because they are correlational, provide little specificity and questionable sensitivity, and have questionable generalizability to populations.10 Single-case designs together with hierarchical inferential procedures might provide a reasonable alternative.11 Single-case designs use individuals as their own control, can use controlled interventions to examine causality, and are well suited to uncover individual differences across phenotypically similar participants. However, care must be taken not to subdivide studies so finely that defects of small sample sizes, including elevated rates of type I and II errors, become problematic even for large epidemiologically informed samples.

Latent variable approaches, such as principal components or factor analyses, can be useful unsupervised statistical methods to uncover relationships between variables within and across units of analyses. However, the underlying assumption is that these latent variables reflect common relationships among all individuals. Instead, it is more likely that relationships differ across individuals and may even differ across states within an individual. Recent approaches to this problem use both latent variable and mixture approaches to differentiate different subgroups of individuals with depression.12 Others have used deviation from normative regression models to identify heterogeneity in schizophrenia and bipolar disorder.13 Both sets of approaches support the hypothesis that there are no generic depressive, bipolar, or schizophrenia diseases. At the other extreme, considering that psychiatric diseases emerge from causal factors that vary across units of analyses ranging from molecular to social,7 one might hypothesize that each individual patient with a mental health condition is an exemplar of a rare disease model. In this case, no generalizable model might be possible, and useful individual-level predictions would be elusive.
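
A minimal sketch of the kind of unsupervised subgrouping the authors point to, using simulated symptom scores rather than any of the cited data sets; the choice of a Gaussian mixture with BIC-based model selection is mine, not that of the cited papers:

# Illustrative only: simulated symptom scores, not data from the cited studies.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two simulated "subgroups" with different symptom profiles, plus noise.
group_a = rng.normal([2.0, 0.5, 1.0], 0.7, size=(150, 3))
group_b = rng.normal([0.5, 2.0, 1.5], 0.7, size=(150, 3))
X = np.vstack([group_a, group_b])

# Pick the number of latent subgroups by BIC, then inspect the assignments.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)}
best_k = min(models, key=lambda k: models[k].bic(X))
labels = models[best_k].predict(X)
print(best_k, np.bincount(labels))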

Thus, we are facing the classical problem of variance-bias trade-off,14 which has been examined in great detail in the statistical literature. Specifically, how do we arbitrate between generating a few generic models with useful explanatory or predictive values vs multiple models that may tend to overexplain and overfit individual patient’s disease etiology, pathophysiology, and clinical course? This decision cannot be arbitrated solely on statistical grounds but will need to judiciously incorporate expert knowledge about the disease and candidate processes on different units of analyses because the permutational complexity of the variables to be considered is so large that even data sets with thousands of individuals may not provide a sufficient sample size to approach this using exploratory techniques resistant to overfitting.
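
The variance-bias trade-off the authors invoke can be illustrated with a toy simulation (unrelated to any psychiatric data set): increasingly flexible models fit the sample they were trained on ever more closely while generalizing worse to new observations.

# Toy illustration of the variance-bias trade-off on simulated data.
import numpy as np

rng = np.random.default_rng(2)
x_train = rng.uniform(-3, 3, 40)
x_test = rng.uniform(-3, 3, 200)
truth = np.sin                                # "true" underlying relationship
y_train = truth(x_train) + rng.normal(0, 0.3, x_train.size)
y_test = truth(x_test) + rng.normal(0, 0.3, x_test.size)

for degree in (1, 3, 15):                     # increasingly flexible polynomial models
    coefs = np.polyfit(x_train, y_train, degree)
    mse_train = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    mse_test = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {mse_train:.3f}, test MSE {mse_test:.3f}")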

At this time, we are standing at a precipice: our explanatory disease models are woefully insufficient, and our predictive approaches have not yielded robust individual-level predictions that can be used by clinicians. Yet there is room for hope. Larger data sets will be widely available, multilevel data sets that span assessments from genes to social factors are being released, new statistical tools are being developed, within-subject statistical designs are being rediscovered, and attempts to include expert knowledge into latent variable approaches might help arbitrating the variance-bias trade-off. Fundamentally, academic psychiatry cannot continue to move forward with small n case-control studies to provide tangible results to stakeholders.

References
1. Simon GE. Big data from health records in mental health care: hardly clairvoyant but already useful [published online February 27, 2019]. JAMA Psychiatry. doi:10.1001/jamapsychiatry.2018.4510
2. Boyle EA, Li YI, Pritchard JK. An expanded view of complex traits: from polygenic to omnigenic. Cell. 2017;169(7):1177-1186. doi:10.1016/j.cell.2017.05.038
3. Miller KL, Alfaro-Almagro F, Bangerter NK, et al. Multimodal population brain imaging in the UK Biobank prospective epidemiological study. Nat Neurosci. 2016;19(11):1523-1536. doi:10.1038/nn.4393
4. Paulus MP, Squeglia LM, Bagot K, et al. Screen media activity and brain structure in youth: evidence for diverse structural correlation networks from the ABCD study. Neuroimage. 2019;185:140-153. doi:10.1016/j.neuroimage.2018.10.040
5. Eisenberg IW, Bissett PG, Enkavi AZ, et al. Uncovering mental structure through data-driven ontology discovery [published online December 12, 2018]. PsyArXiv. doi:10.31234/osf.io/fvqej
6. Thompson WK, Barch DM, Bjork JM, et al. The structure of cognition in 9 and 10 year-old children and associations with problem behaviors: findings from the ABCD study’s baseline neurocognitive battery [published online December 13, 2018]. Dev Cogn Neurosci. doi:10.1016/j.dcn.2018.12.004
7. Kendler KS. Levels of explanation in psychiatric and substance use disorders: implications for the development of an etiologically based nosology. Mol Psychiatry. 2012;17(1):11-21. doi:10.1038/mp.2011.70
8. Ross CA, Aylward EH, Wild EJ, et al. Huntington disease: natural history, biomarkers and prospects for therapeutics. Nat Rev Neurol. 2014;10(4):204-216. doi:10.1038/nrneurol.2014.24
9. Dion PA, Daoud H, Rouleau GA. Genetics of motor neuron disorders: new insights into pathogenic mechanisms. Nat Rev Genet. 2009;10(11):769-782. doi:10.1038/nrg2680
10. Sedgwick P. Case-control studies: advantages and disadvantages. BMJ. 2014;348:f7707. doi:10.1136/bmj.f7707
11. Smith JD. Single-case experimental designs: a systematic review of published research and current standards. Psychol Methods. 2012;17(4):510-550. doi:10.1037/a0029312
12. Drysdale AT, Grosenick L, Downar J, et al. Resting-state connectivity biomarkers define neurophysiological subtypes of depression. Nat Med. 2017;23(1):28-38. doi:10.1038/nm.4246
13. Wolfers T, Doan NT, Kaufmann T, et al. Mapping the heterogeneous phenotype of schizophrenia and bipolar disorder using normative models. JAMA Psychiatry. 2018;75(11):1146-1155. doi:10.1001/jamapsychiatry.2018.2467
14. James G, Witten D, Hastie T, Tibshirani R. An Introduction to Statistical Learning With Applications in R. New York, NY: Springer-Verlag New York; 2013.

Wednesday, February 27, 2019

When an NHL team has an opportunity to win a playoff series, there appears to be an advantage for visiting teams—not home teams—in winning an overtime game

A home advantage? Examining 100 years of team success in National Hockey League playoff overtime games. Desmond McEwan. Psychology of Sport and Exercise, https://doi.org/10.1016/j.psychsport.2019.02.010

Highlights
• Examination of team success in professional hockey (NHL) playoff overtime games.
• There was an away team advantage when they had a chance to win a playoff series.
• No home team advantage was found when they had a chance to win a series.
• Home and away teams were equally likely to win final games that went to overtime.

Abstract
Objectives: To examine a potential home (dis)advantage in various types of playoff overtime games in the National Hockey League (NHL).

Design: Archival.

Method: Success rates for home and away teams in win-imminent overtime games (i.e., wherein a team has an opportunity to win the playoff series) were compared to their respective success in non-imminent overtime games (i.e., the outcome of the game does not determine the outcome of the series).

Results: When away teams had an opportunity to win a series, they were significantly more likely to win an overtime game compared to home teams. No such advantage was evident for home teams when they had an opportunity to win a series.

Conclusions: When an NHL team has an opportunity to win a playoff series, there appears to be an advantage for visiting teams—not home teams—in winning an overtime game.

Keywords: Championship; Choke; Clutch; Home advantage; Pressure; Self-attention
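
A minimal sketch of the comparison the Method describes, using made-up win counts (the paper's actual counts are not reproduced here): away-team wins in win-imminent versus non-imminent overtime games, compared with a two-proportion z-test.

# Hypothetical counts for illustration only; not the study's data.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

away_wins = np.array([60, 150])   # win-imminent games, non-imminent games
games = np.array([100, 300])      # total overtime games of each type

stat, p_value = proportions_ztest(count=away_wins, nobs=games)
print(f"away win rate: {away_wins[0]/games[0]:.2f} vs {away_wins[1]/games[1]:.2f}, p = {p_value:.3f}")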

Do Equal Employment Opportunity Statements Backfire?: Evidence from a Natural Field Experiment on Job-Entry Decisions

Do Equal Employment Opportunity Statements Backfire?: Evidence from a Natural Field Experiment on Job-Entry Decisions. Andreas Leibbrandt and John A. List. Cato Institute, February 27, 2019. https://www.cato.org/publications/research-briefs-economic-policy/do-equal-employment-opportunity-statements-backfire

Sweeping changes in the 1960s potentially altered employment and lifetime opportunities in the United States in ways that were unprecedented and that transformed every aspect of the employer-employee relationship. In the past half century, for example, Equal Employment Opportunity (EEO) statements were added as a requirement in the Code of Federal Regulations, and nearly every U.S. employer has grappled with how to provide equal opportunities. Even with such policies and affirmative action programs in place, racial inequalities remain ubiquitous in labor markets. Relative to whites, blacks in the United States are twice as likely to be unemployed and earn roughly 20 percent less. A critic of EEO regulations might interpret such data patterns as stark evidence of a policy gone awry, whereas a supporter of EEO regulations might view such data under an optimistic lens, noting that such comparisons would be even more highly skewed absent the sweeping EEO policies enacted in the 20th century.

Rather than turning back the clock and examining how EEO regulations in totality have influenced labor-market patterns over the past several decades, we present initial insights into how an important element of EEO regulations affects labor markets today. In this sense, we aim to provide initial empirical evidence on how EEO statements currently affect racial minorities and their labor-market choices. Such an exercise is important for several reasons. First, several states and the U.S. federal government require EEO statements in job advertisements. Second, aside from these cases, employers have to decide whether they want to include an EEO statement in their job advertisement. Third, many public and private employers in the United States and elsewhere still use EEO statements in job advertisements. Fourth, there are broad recommendations and regulations surrounding their inclusion. Finally, because racial minorities remain disadvantaged in many labor markets, it is of utmost importance to evaluate common practices and policies that aim to reduce labor-market inequalities. To our best knowledge, causal estimates of actual EEO statements do not exist despite their pervasiveness and arguments that they could discourage minorities.

We use a large-scale natural field experiment aimed at exploring the causal impact of EEO statements in job advertisements to provide a first step into understanding the effects of EEO policy. To investigate how EEO statements affect the job-applicant pool, we advertise real jobs and investigate more than 2,300 job-entry decisions across various labor-market settings. Our working hypothesis is that EEO statements encourage minorities to apply for a job. Our experiment renders it possible to investigate interesting heterogeneities because we post the job advertisements in 10 large U.S. cities with substantially different racial compositions.

We find that EEO statements do affect job-entry decisions. However, the statement that all job applicants receive equal consideration irrespective of race leads to unexpected outcomes. In particular, we find that EEO statements discourage racial minorities from applying for jobs in important ways. Educated nonwhites are less likely to apply if the job description includes an EEO statement, and the discouragement effect is particularly pronounced in cities with white-majority populations. The impact of EEO statements on job applications from minorities is economically significant because their application likelihood drops by up to 30 percent.
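
To make the size of that discouragement effect concrete, here is a schematic of how an effect like it might be estimated from application data; the data are simulated and the variable names hypothetical, not the authors' specification:

# Schematic only: simulated application decisions, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2300
df = pd.DataFrame({
    "eeo_statement": rng.integers(0, 2, n),   # 1 if the ad carried an EEO statement
    "minority": rng.integers(0, 2, n),        # 1 if the job seeker is a racial minority
})
# Simulate a discouragement effect for minorities who see an EEO statement.
logit_p = -0.5 - 0.4 * df["eeo_statement"] * df["minority"]
df["applied"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("applied ~ eeo_statement * minority", data=df).fit(disp=False)
print(model.summary())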

To explore the underlying mechanism at work, we conduct complementary surveys with job seekers drawn from the same subject pool. We find that the inclusion of EEO statements significantly affects anticipated discrimination, stereotype threat, and tokenism. That is, we observe that the inclusion of the EEO statement in the studied job advertisements decreases the likelihood with which job seekers anticipate discrimination during hiring and career advancement and that it lowers stereotype threat. At the same time, however, we observe that the inclusion of the EEO statement significantly increases the perception of tokenism. This effect is particularly pronounced in cities with white-majority populations, where more than two thirds of job seekers believe that the inclusion of the EEO statement signals that there will be token hires.

Our survey findings augment the field experimental results and provide insights into the mechanism underlying the observed discouragement effect of EEO statements. They suggest that racial minorities prefer not to apply for jobs where there is a high likelihood that they are token hires. These tokenism concerns are so strong that they outweigh other desirable effects of EEO statements, such as lower anticipated discrimination and stereotype threat.

Combined with the insights from Marianne Bertrand and Sendhil Mullainathan and from Sonia Kang, Katherine DeCelles, András Tilcsik, and Sora Jun, who report that employers who use EEO statements are not less likely to discriminate against racial minorities, our findings paint a rather bleak picture of current EEO policies aimed to have a positive impact on minority labor-market representation. This does not imply that EEO statements have never had their intended effects, that EEO policies requiring the mandatory inclusion of EEO statements across the board cannot have their intended effects, or that differently formulated statements cannot have their intended effects. Rather, the results suggest that there is little support for the inclusion of standard EEO statements in job ads in today’s labor market and even evidence that important deleterious effects arise from such statements.

NOTE: This research brief is based on Andreas Leibbrandt and John A. List, “Do Equal Employment Opportunity Statements Backfire? Evidence from a Natural Field Experiment on Job-Entry Decisions,” NBER Working Paper No. 25035, September 2018, http://www.nber.org/papers/w25035

Valuing Facebook: 'Superendowment effect' may be signalling that social media are Wasting Time Goods – goods on which people spend time, but for which they are not willing to pay much (if anything)

Valuing Facebook. Cass R Sunstein. Behavioural Public Policy, Feb 27 2019. https://doi.org/10.1017/bpp.2018.34

Abstract: In recent years, there has been a great deal of discussion of the welfare effects of digital goods, including social media. A national survey, designed to monetize the benefits of a variety of social media platforms (including Facebook, Twitter, YouTube and Instagram), found a massive disparity between willingness to pay (WTP) and willingness to accept (WTA). The sheer magnitude of this disparity reflects a ‘superendowment effect’. Social media may be Wasting Time Goods – goods on which people spend time, but for which they are not, on reflection, willing to pay much (if anything). It is also possible that in the context of the WTP question, people are giving protest answers, signaling their intense opposition to being asked to pay for something that they had formerly enjoyed for free. Their answers may be expressive, rather than reflective of actual welfare effects. At the same time, the WTA measure may also be expressive, a different form of protest, telling us little about the actual effects of social media on people's lives and experiences. It may greatly overstate those effects. In this context, there may well be a sharp disparity between conventional economic measures and actual effects on experienced well-being.

Olfaction During Pregnancy and Postpartum Period: Overall olfactory function did not differ from that of controls, although pregnant & postpartum women identified some odors less well than the controls did

Olfaction During Pregnancy and Postpartum Period. Marco Aurelio Fornazieri et al. Chemosensory Perception, Feb 27 2019. https://link.springer.com/article/10.1007/s12078-019-09259-7

Abstract
Introduction: Studies of the effect of pregnancy on olfactory function are contradictory—some report reduced function, others hypersensitivity, and still others no change at all. Our objectives were to quantify olfactory function in women during gestational and puerperal periods, to compare the olfactory test scores to those of non-pregnant women, and to explore the potential influence of rhinitis on olfactory function during these periods.

Methods: We evaluated olfactory function in 206 women with and without rhinitis—47 in the first trimester of pregnancy, 33 in the second, 44 in the third, 32 in the postpartum period, and 50 who were non-pregnant. Olfactory assessment was performed using the University of Pennsylvania Smell Identification Test (UPSIT) and ratings of the pleasantness and intensity of four common odors.

Results: Although total UPSIT scores did not differ among the study groups, pregnant and postpartum women identified some odors less well than did the controls. Pregnant women, especially in the first trimester, tended to consider some smells less pleasant. Rhinitis was adversely associated with the olfactory test scores of the pregnant and postpartum women.

Conclusions: The overall olfactory function of postpartum and pregnant women did not differ compared to controls; however, detection of some individual UPSIT items was adversely impacted (e.g., menthol, gingerbread, gasoline). Rhinitis was associated with reduced olfaction during pregnancy and puerperium.

Implications: These findings support the view that pregnancy-related alterations in smell are idiosyncratic, present only for some odorants, and may be impacted by the presence of rhinitis that commonly occurs during pregnancy.

Keywords: Olfaction disorders; Smell; Pregnancy; Postpartum period; Olfactory perception; Rhinitis

Tuesday, February 26, 2019

When acting through autonomous machines, the way people solve social dilemmas changes: participants program their autonomous vehicles to act more cooperatively than if they were driving themselves

Human Cooperation When Acting Through Autonomous Machines. Celso M. de Melo, Stacy Marsella, and Jonathan Gratch. Proceedings of the National Academy of Sciences, February 26, 2019 116 (9) 3482-3487. https://doi.org/10.1073/pnas.1817656116

Significance: Autonomous machines that act on our behalf—such as robots, drones, and autonomous vehicles—are quickly becoming a reality. These machines will face situations where individual interest conflicts with collective interest, and it is critical we understand if people will cooperate when acting through them. Here we show, in the increasingly popular domain of autonomous vehicles, that people program their vehicles to be more cooperative than they would if driving themselves. This happens because programming machines causes selfish short-term rewards to become less salient, and that encourages cooperation. Our results further indicate that personal experience influences how machines are programmed. Finally, we show that this effect generalizes beyond the domain of autonomous vehicles and we discuss theoretical and practical implications.

Abstract: Recent times have seen an emergence of intelligent machines that act autonomously on our behalf, such as autonomous vehicles. Despite promises of increased efficiency, it is not clear whether this paradigm shift will change how we decide when our self-interest (e.g., comfort) is pitted against the collective interest (e.g., environment). Here we show that acting through machines changes the way people solve these social dilemmas and we present experimental evidence showing that participants program their autonomous vehicles to act more cooperatively than if they were driving themselves. We show that this happens because programming causes selfish short-term rewards to become less salient, leading to considerations of broader societal goals. We also show that the programmed behavior is influenced by past experience. Finally, we report evidence that the effect generalizes beyond the domain of autonomous vehicles. We discuss implications for designing autonomous machines that contribute to a more cooperative society.

Keywords: autonomous vehicles; cooperation; social dilemmas

Reconstructing meaning: Fragmented information is combined into a complete semantic representation of an object; the study also identifies brain regions associated with object meaning

Reconstructing meaning from bits of information. Sasa L. Kivisaari, Marijn van Vliet, Annika Hultén, Tiina Lindh-Knuutila, Ali Faisal & Riitta Salmelin. Nature Communications, volume 10, Article number: 927 (2019). https://www.nature.com/articles/s41467-019-08848-0

Abstract: Modern theories of semantics posit that the meaning of words can be decomposed into a unique combination of semantic features (e.g., “dog” would include “barks”). Here, we demonstrate using functional MRI (fMRI) that the brain combines bits of information into meaningful object representations. Participants receive clues of individual objects in the form of three isolated semantic features, given as verbal descriptions. We use machine-learning-based neural decoding to learn a mapping between individual semantic features and BOLD activation patterns. The recorded brain patterns are best decoded using a combination of not only the three semantic features that were in fact presented as clues, but a far richer set of semantic features typically linked to the target object. We conclude that our experimental protocol allowed us to demonstrate that fragmented information is combined into a complete semantic representation of an object and to identify brain regions associated with object meaning.
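
As a rough sketch of what feature-based neural decoding can look like (simulated data; this is not the authors' pipeline, and the ridge-regression decoder and cosine-similarity ranking are my choices):

# Illustrative sketch: decode semantic feature vectors from simulated voxel patterns,
# then rank candidate objects by similarity to the predicted features.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_trials, n_voxels, n_features = 200, 500, 20
features = rng.integers(0, 2, (n_trials, n_features)).astype(float)   # which features were cued
weights = rng.normal(0, 1, (n_features, n_voxels))                    # simulated feature-to-voxel map
bold = features @ weights + rng.normal(0, 2.0, (n_trials, n_voxels))  # simulated brain patterns

decoder = Ridge(alpha=10.0).fit(bold[:150], features[:150])           # train on 150 trials
predicted = decoder.predict(bold[150:])                               # predict held-out feature vectors

# Rank hypothetical candidate objects by cosine similarity to the predicted feature vector.
objects = {"dog": rng.integers(0, 2, n_features), "hammer": rng.integers(0, 2, n_features)}
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
print({name: round(cosine(predicted[0], vec.astype(float)), 2) for name, vec in objects.items()})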

The “Furry” Phenomenon: Characterizing Sexual Orientation, Sexual Motivation, and Erotic Target Identity Inversions in Male Furries

The “Furry” Phenomenon: Characterizing Sexual Orientation, Sexual Motivation, and Erotic Target Identity Inversions in Male Furries. Kevin J. Hsu, J. Michael Bailey. Archives of Sexual Behavior, Feb 26 2019, https://link.springer.com/article/10.1007/s10508-018-1303-7

Abstract: Furries are individuals who are especially interested in anthropomorphic or cartoon animals (e.g., Bugs Bunny). They often strongly identify with anthropomorphic animals and create fursonas, identities of themselves as those anthropomorphic animals. Some practice fursuiting, or wearing costumes that resemble anthropomorphic animals. Furries have been portrayed as sexually motivated in the media and popular culture, although little empirical research has addressed this issue. If some furries are sexually motivated, they may be motivated by an erotic target identity inversion (ETII): sexual arousal by the fantasy of being the same kinds of individuals to whom they are sexually attracted. Furries with ETIIs would experience both sexual attraction to anthropomorphic animals and sexual arousal by fantasizing about being anthropomorphic animals, because they often change their appearance and behavior to become more like anthropomorphic animals. We surveyed 334 male furries recruited from the Internet about their sexual orientation, sexual motivation, and sexual interests. A large majority of our sample reported non-heterosexual identities (84%) and some degree of sexual motivation for being furries (99%). Male furries also tended to report a pattern of sexual interests consistent with an ETII involving anthropomorphic animals. Both sexual attraction to anthropomorphic animals and sexual arousal by fantasizing about being anthropomorphic animals were nearly universal. Furthermore, male furries tended to be sexually aroused by fantasizing about being the same kinds of anthropomorphic animals to whom they were sexually attracted, with respect to gender and species. This sexual motivation and these unusual sexual interests do not justify discrimination or stigmatization.

Keywords: Furries; Sexual orientation; Sexual motivation; Erotic target identity inversions; Autogynephilia; Paraphilias

Acute stress: Considering one’s belief in God or science did not mitigate stress responses; under acutely stressful circumstances, reflecting on one’s beliefs may not confer immediate benefits

Farias, M., & Newheiser, A.-K. (2019). The effects of belief in God and science on acute stress. Psychology of Consciousness: Theory, Research, and Practice, http://dx.doi.org/10.1037/cns0000185

Abstract: It is widely assumed that belief in God allows people to better cope with life’s stresses. This stress-buffering effect is not limited to religion; when faced with stress, nonreligious people cling on to other belief systems, notably belief in science. We report an experimental test of whether people are able to down-regulate an acute stress experience by reflecting on their beliefs. We used the Trier Social Stress Test to induce stress in religious and scientist participants from the United Kingdom by having them discuss arguments for and against the United Kingdom leaving the European Union (“Brexit”). Prior to stress induction, participants were or were not reminded of their belief in God or science. We included subjective, cardiovascular, and cortisol stress measures at multiple time points. At both subjective and cardiovascular levels, participants reliably experienced stress. However, considering one’s belief in God or science did not mitigate stress responses. Religious participants were somewhat less reactive to stress induction than scientists. Despite the large correlational literature on the stress-buffering effects of faith, under acutely stressful circumstances, reflecting on one’s beliefs may not confer immediate benefits.

Bad Science May Banish Paper Receipts: California lawmakers seek a ban, based on a scare over BPA that was debunked two decades ago

Bad Science May Banish Paper Receipts. Steve Milloy. The Wall Street Journal, February 26, 2019. https://www.wsj.com/articles/bad-science-may-banish-paper-receipts-11551137837

California lawmakers seek a ban, based on a scare over BPA that was debunked two decades ago

Having vanquished plastic straws, the California Legislature is now considering a bill to ban paper cash-register receipts. One reason offered for the ban is to reduce carbon-dioxide emissions. The other is to reduce public exposure to bisphenol A, or BPA, a chemical used to coat receipts.

Whatever one’s opinion about climate science, it’s clear that eliminating the carbon footprint of California’s paper receipts won’t affect the global climate. Some 1,200 new coal plants are being planned or built around the world, and oil and gas production and use are rising through the roof. Even a global ban on paper would have no significant impact on atmospheric carbon-dioxide levels.

The more interesting reason for the ban is the BPA argument, which is part of a broader trend of misuse of science in public policy. The alarm behind the California bill arises from the notion that BPA is an “endocrine disrupter”: a chemical that, even at low doses, can disrupt human hormonal systems. Such disruptions theoretically could cause a variety of ailments, from cancer to reproductive problems to attention-deficit disorder.

Like the panic over DDT that followed the 1962 publication of Rachel Carson’s “Silent Spring,” the endocrine-disrupter scare made its public debut with a book, “Our Stolen Future” (1996). Written by three activist authors and including a foreword by Al Gore, the book lays out a case for regulating various pollutants.

“Our Stolen Future” was followed the same year by a highly publicized Tulane University study that reported certain combinations of pesticides and other chemicals in the environment were much more potent endocrine disrupters than the individual chemicals themselves. Within weeks, this study prompted Congress to pass a bill directing the Environmental Protection Agency to develop a program to test chemicals for their potential harm to hormonal systems.

In the months that followed, the Tulane study began to fall apart. Independent laboratories around the world reported that they could not replicate its results. By July 1997, the original study was retracted. Federal investigators concluded in 2001 that the Tulane researchers had committed scientific misconduct by falsifying their results.

Yet the law and regulatory programs spawned by the false study remained in place. The endocrine-disrupter scare gained steam through the 2000s, and BPA became its biggest villain. Generous federal funding led to the publication of hundreds of BPA studies. A movement to ban BPA was joined by several cities, states such as California, and foreign nations including Canada, resulting in the elimination of the substance from plastic bottles in those regions. Regulators at the Food and Drug Administration and the European Food Safety Authority pushed back against the scare, to little avail.

Finally in 2012, the FDA decided to launch Clarity, a large $8 million study of BPA to be conducted according to regulatory guidelines known as the Good Laboratory Practices standard. Researchers, including those who had published studies claiming that low-dose exposures to BPA posed health risks, were provided with coded, pre-dosed animals to avoid bias and cheating. Researchers were required to upload their raw data to a government database before the identity of each dose group was disclosed to them.

The results of Clarity were published in 2018. The FDA concluded that the study failed to demonstrate adverse health effects from exposure to BPA in low doses—like the amount one might be exposed to by handling a paper receipt.

Yet despite its birth in scientific misconduct, its dismissals along the way by international regulators and science and public-health groups like the National Academy of Sciences and the World Health Organization, and finally its debunking by the FDA’s Clarity study, the BPA scare survives. Thanks to Congress, it lives on at the EPA, where a 22-year-old endocrine-disrupter screening program peddles merrily along despite producing no results of interest.

It is a sad state of affairs when actual science cannot vanquish adjudicated science fraud in public policy.

Mr. Milloy publishes JunkScience.com, served on the Trump EPA transition team, and is author of “Scare Pollution: Why and How to Fix the EPA.”

People believe that they are above average but also hold themselves to standards of comparison that are well above average due to the increased mental availability of such high-performing standards of comparison

Davidai, S., & Deri, S. (2019). The second pugilist’s plight: Why people believe they are above average but are not especially happy about it. Journal of Experimental Psychology: General, 148(3), 570-587. http://dx.doi.org/10.1037/xge0000580

Abstract: People’s tendency to rate themselves as above average is often taken as evidence of undue self-regard. Yet, everyday experience is occasioned with feelings of inadequacy and insecurity. How can these 2 experiences be reconciled? Across 12 studies (N = 2,474; including 4 preregistered studies) we argue that although people do indeed believe that they are above average they also hold themselves to standards of comparison that are well above average. Across a host of domains, we find that people’s typical standards of comparison are significantly above the level of the “average” person (Studies 1A, 1B, 2A, and 3). We further show that people’s tendency to measure themselves against above-average others is due to the increased mental availability of such high-performing standards of comparison (Studies 4A and 4B). Finally, we present evidence that this is not simply the result of self-enhancement by showing that people measure themselves against above-average others even when they feel subjectively inadequate (Study 5A), receive objective information about their poor performance (Study 5B), or evaluate themselves on domains in which they chronically underperform (Study 5C). Even in domains where being above average is undesirable (e.g., rudeness), people bring to mind and compare themselves with above average targets (Studies 2B and 2C). We discuss the implications for self-enhancement research and the importance of examining who people compare themselves to in addition to how people believe they compare with others.


Who watches an ISIS beheading—and why

Redmond, S., Jones, N. M., Holman, E. A., & Silver, R. C. (2019). Who watches an ISIS beheading—and why. American Psychologist, http://dx.doi.org/10.1037/amp0000438

Abstract: In the wake of collective traumas and acts of terrorism, media bring real graphic images and videos to TV, computer, and smartphone screens. Many people consume this coverage, but who they are and why they do so is poorly understood. Using a mixed-methods design, we examined predictors of and motivations for viewing graphic media among individuals who watched a beheading video created by the terrorist group Islamic State of Iraq and Syria (ISIS). A representative national sample of U.S. residents (N = 3,294) reported whether they viewed a video and why (or why not) via an anonymous survey administered during a 3-year longitudinal study. Accounting for population weights, about 20% of the sample reported watching at least part of a beheading video, and about 5% reported watching an entire video. Increased likelihood of watching a video was associated with demographics (male, unemployed, and Christian), frequency of typical TV watching, and both prior lifetime exposure to violence and fear of future terrorism. Watching at least part of a beheading video was prospectively associated with fear of future negative events and global distress approximately 2 years after the beheading videos went viral. The most common reasons respondents reported for watching a beheading video were information seeking and curiosity. Results suggest attentional vigilance: Preexisting fear and history of violent victimization appear to draw individuals to graphic coverage of violence. However, viewing this coverage may contribute to subsequent fear and distress over time, likely assisting terrorists in achieving their goals.


Are People Trained in Economics “Different”? In certain situations, the behavior of people trained in economics differs from that of other groups, but the existing evidence is mostly ambiguous

Are People Trained in Economics “Different,” and if so, Why? A Literature Review. Simon Niklas Hellmich. The American Economist, Feb 22, 2019. https://doi.org/10.1177/0569434519829433

Abstract: Some argue that frequent confrontation with the homo economicus actor-concept motivates economists to adjust their behavior to that paradigm. Another thesis is that economists are different because the discipline attracts individuals with preferences that differ from those of noneconomists. This article discusses survey, experimental, and field evidence collected during this debate. In certain situations, there appear differences between the behavior of people trained in economics and other groups, but as the existing evidence is mostly ambiguous, a comprehensive picture of the nature and sources of these differences has not yet emerged. The article concludes that economics teachers and researchers should pay more attention to the influence the normative statements inherent in basic neoclassical economics can have on cognitive frames and interindividual processes in moral decision making.

Keywords: education, economic man, preferences, self-interest

That addictions are rooted in brain dysfunction is essentially unfalsifiable and devoid of scientific content; there is overwhelming scientific evidence that other key presuppositions of the brain disease model are false

Is addiction a brain disease? Scott O. Lilienfeld, Sally Satel. Chapter 2 in Casting Light on the Dark Side of Brain Imaging, 2019, Pages 13-17. https://doi.org/10.1016/B978-0-12-816179-1.00014-1

Abstract: Over the past two decades the brain disease model has become the prevailing scientific narrative for explaining substance addictions. This model, buoyed by brain imaging data, posits that addictions are rooted in brain dysfunctions, and are chronic, relapsing conditions that largely eradicate individuals’ capacity to control substance use. We argue that the assertion that addictions are rooted in brain dysfunction is essentially unfalsifiable and devoid of scientific content. Further, there is overwhelming scientific evidence that other key presuppositions of the brain disease model are false. Finally, this model has been of questionable utility; there is minimal evidence that it leads to effective intervention, reduces stigma, or accounts for recent large-scale societal changes in the prevalence of addictions. It is high time to abandon this model and to adopt a pluralistic approach to addiction that acknowledges the value of neuroimaging evidence in conjunction with other lenses of analysis.