Wednesday, January 22, 2020

Moderating uncivil comments leads to lower task accuracy, & to more emotional exhaustion & lower task satisfaction; emotional exhaustion leads to lower comment moderation accuracy

The downsides of digital labor: Exploring the toll incivility takes on online comment moderators. Martin J. Riedl, Gina Masullo Chen, Kelsey N. Whipple. Computers in Human Behavior, January 22, 2020, 106262. https://doi.org/10.1016/j.chb.2020.106262

Highlights
•    Comment moderation, due to incivility, is a task prone to emotional exhaustion.
•    Moderating uncivil comments leads to lower task accuracy.
•    It also leads to more emotional exhaustion and lower task satisfaction.
•    A mediation effect: emotional exhaustion, in turn, leads to lower comment moderation accuracy.
•    No effect of moderating uncivil comments on flow, an immersive (work) experience.

Abstract: This study sought to understand the effects of moderating uncivil online comments on the people who do this task. Results from an experiment (N = 747) show that moderating only uncivil comments made moderators less accurate at that task, more emotionally exhausted, and more dissatisfied with the task, relative to moderating only civil comments or a mix of civil and uncivil comments. In addition, results show evidence of a mediation effect. Specifically, moderating all uncivil comments made people more emotionally exhausted, and this exhaustion in turn led people to be less accurate in picking which comments to reject or accept for publication on a news site comment thread. However, moderating comments had no effect on perceptions of flow, an immersive experience, conceptually borne out of the field of positive psychology. Results suggest breaking up strenuous online labor tasks, such as comment moderation, and alternating comment moderation with other types of work to reduce the deleterious effects of the task.

Keywords: Content moderation, Online incivility, Flow theory, Online news comments, Emotional exhaustion, Comment moderation
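The mediation result above (condition → exhaustion → accuracy) can be sketched with simulated data. Everything below — the coefficients, the variable coding, and the noise levels — is invented for illustration; these are not the paper's estimates, only a minimal two-regression sketch of the indirect-effect logic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 747  # matches the study's sample size

# Hypothetical data: condition = 1 means moderating only uncivil comments.
condition = rng.integers(0, 2, size=n).astype(float)
exhaustion = 0.5 * condition + rng.normal(size=n)   # path a (assumed effect)
accuracy = -0.4 * exhaustion + rng.normal(size=n)   # path b (assumed effect)

def ols(y, *xs):
    """Least-squares coefficients for y on an intercept plus the given xs."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(exhaustion, condition)[1]            # condition -> exhaustion
b = ols(accuracy, condition, exhaustion)[2]  # exhaustion -> accuracy, holding condition
indirect = a * b                             # mediated (indirect) effect
print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}")
```

With a positive path a and a negative path b, the product a·b comes out negative: the condition lowers accuracy *through* exhaustion, which is the shape of the mediation the abstract reports.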

From 2008: Is it a boy or a girl? The father’s family might provide a clue

From 2008: Trends in Population Sex Ratios May be Explained by Changes in the Frequencies of Polymorphic Alleles of a Sex Ratio Gene. Corry Gellatly. Evolutionary Biology, volume 36, pages 190–200, December 10 2008. https://link.springer.com/article/10.1007%2Fs11692-008-9046-3

Abstract: A test for heritability of the sex ratio in human genealogical data is reported here, with the finding that there is significant heritability of the parental sex ratio by male, but not female offspring. A population genetic model was used to examine the hypothesis that this is the result of an autosomal gene with polymorphic alleles, which affects the sex ratio of offspring through the male reproductive system. The model simulations show that an equilibrium sex ratio may be maintained by frequency dependent selection acting on the heritable variation provided by the gene. It is also shown that increased mortality of pre-reproductive males causes an increase in male births in following generations, which explains why increases in the sex ratio have been seen after wars, and also why higher infant and juvenile mortality of males may be the cause of the male-bias typically seen in the human primary sex ratio. It is concluded that various trends seen in population sex ratios are the result of changes in the relative frequencies of the polymorphic alleles of the proposed gene. It is argued that this occurs by common inheritance and that parental resource expenditure per sex of offspring is not a factor in the heritability of sex ratio variation.

Popular writing: Is it a boy or a girl? The father’s family might provide a clue. Robyn Horsager-Boehrer. The University of Texas Southwestern Medical Center, June 25, 2019. https://utswmed.org/medblog/it-boy-or-girl-fathers-family-might-provide-clue/
Researchers in England set out to determine whether this is true. They downloaded family trees from the Genealogy Forum, then eliminated data they felt weren’t accurate – for instance, people reported as having more than two parents or a discrepancy in an individual's sex. This left researchers with 927 family trees that had at least three generations and included over half a million individuals dating back to 1600.

Their findings were telling. In the computer models, when researchers removed men from the population data before they had a chance to start families, there was an increase in the number of male babies born in the next generation. The researchers also found that the sex ratio for families followed the father's side, not the mother's side. For example, if a man had more brothers, his own children were more likely to be male; if he had more sisters, he was more likely to have daughters. This was not found to be the case for women.

According to this study, the explanation might be due to a gene that controls the balance of X- and Y-carrying sperm. Men carrying a gene that leads to their sperm having more Y chromosomes have more sons. During times of war and large casualties of male soldiers, those families are more likely to have more surviving sons. And when those men have children, they, like their fathers, might be more likely to have baby boys. This could account for the temporary increase in the sex ratio for that time period.
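The frequency-dependent selection invoked in the abstract rests on Fisher's principle: because every child has exactly one father and one mother, the rarer sex carries the higher per-capita reproductive value. A toy calculation (all numbers invented for illustration; the paper's actual model is a multi-generation population-genetic simulation) shows why an allele biasing toward the rarer sex is favored:

```python
def expected_grandchildren(p_son, males, females, kids=10, cohort_offspring=1000):
    """Expected grandchildren for a parent whose children are sons with
    probability p_son, given the adult sex ratio their children will face.
    Each of the cohort's offspring has one father among `males` and one
    mother among `females`, so a son expects cohort_offspring / males
    children and a daughter cohort_offspring / females."""
    sons = kids * p_son
    daughters = kids * (1 - p_son)
    return sons * cohort_offspring / males + daughters * cohort_offspring / females

# In a male-biased population (600 males, 400 females), biasing toward
# daughters yields more grandchildren than biasing toward sons:
daughter_biased = expected_grandchildren(0.4, males=600, females=400)
son_biased = expected_grandchildren(0.6, males=600, females=400)
print(daughter_biased > son_biased)  # True: the daughter-biasing allele is favored
```

At an even sex ratio the two strategies pay off identically, which is why the feedback settles at an equilibrium sex ratio rather than running away in either direction.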

Optimal Subjective Age Bias: Feeling Younger by a Set Amount, but No More, Is Beneficial for Life Satisfaction

Blöchl, Maria, Steffen Nestler, and David Weiss. 2020. “An Optimal Margin of Subjective Age Bias: Feeling Younger by a Set Amount, but No More, Is Beneficial for Life Satisfaction.” PsyArXiv. January 22. doi:10.31234/osf.io/pfxqh

Abstract: The majority of adults feels considerably younger than their chronological age. Numerous studies suggest that maintaining a youthful subjective age promotes successful ageing, but the extent to which feeling younger promotes life satisfaction is not well understood. Here, we use polynomial regression models and response surface methodology to accurately model and test the relationships between subjective age, chronological age, and life satisfaction in a large sample spanning adulthood (N = 7,356; 36 – 89 years). We find that people who feel younger by a certain amount, but not more (or less), are most satisfied with their life. In addition, our findings suggest that the optimal discrepancy between subjective and chronological age increases across adulthood. These findings support an optimal margin perspective of the subjective age bias and highlight that beyond a certain point, distancing oneself from one’s own age may be psychologically harmful.

The past is regularly the only thing that can determine present social realities such as commitments, entitlements, & obligations; episodic memory may have developed only once humans were able to represent the social effects of events

Witnessing, Remembering, and Testifying: Why the Past Is Special for Human Beings. Johannes B. Mahr, Gergely Csibra. Perspectives on Psychological Science, January 21, 2020. https://doi.org/10.1177/1745691619879167

Abstract: The past is undeniably special for human beings. To a large extent, both individuals and collectives define themselves through history. Moreover, humans seem to have a special way of cognitively representing the past: episodic memory. As opposed to other ways of representing knowledge, remembering the past in episodic memory brings with it the ability to become a witness. Episodic memory allows us to determine what of our knowledge about the past comes from our own experience and thereby what parts of the past we can give testimony about. In this article, we aim to give an account of the special status of the past by asking why humans have developed the ability to give testimony about it. We argue that the past is special for human beings because it is regularly, and often principally, the only thing that can determine present social realities such as commitments, entitlements, and obligations. Because the social effects of the past often do not leave physical traces behind, remembering the past and the ability to bear testimony it brings is necessary for coordinating social realities with other individuals.

Keywords: episodic memory, testimony, commitments




Inferences of Parenting Ability from Bodily Cues: High-fat female targets were perceived to have more positive & less negative parenting abilities; breast size did not influence perceptions of female parenting ability

Dad and Mom Bods? Inferences of Parenting Ability from Bodily Cues. Donald F. Sacco, Kaitlyn Holifield, Kelsey Drea, Mitch Brown & Alicia Macchione. Evolutionary Psychological Science, Jan 22 2020. https://link.springer.com/article/10.1007/s40806-020-00229-x

Abstract: Though much research has explored how facial and bodily features connote heritable fitness, particularly in the context of short-term mating, such cues similarly may influence perceptions of potential parenting ability. The current study explored how body fat variation and breast size in female targets and body fat and muscularity variation in male targets influence men’s and women’s perceptions of targets’ positive (e.g., nurturance) and negative (e.g., hostility) parenting capacities. Participants viewed 4 female targets orthogonally manipulated along dimensions of adiposity (high vs. low) and breast size (small vs. large), and 4 male targets orthogonally manipulated along similar adiposity dimensions and muscularity (small vs. large) before indicating targets’ inferred parenting ability. High-fat female targets were perceived to have more positive and less negative parenting abilities relative to low-fat female targets, an effect that was most pronounced among women; breast size did not influence perceptions of female parenting ability. For male targets, high fat and small muscles were perceived as more indicative of positive parenting abilities and less indicative toward negative abilities; the low body fat/high muscle male target was perceived to have especially negative parenting abilities. These results suggest body cues often associated with good genes and short-term mating success also systematically influence perceptions of parenting ability.

Declines in Religiosity Predict Increases in Violent Crime—but Not Among Countries With Relatively High Average IQ

Declines in Religiosity Predict Increases in Violent Crime—but Not Among Countries With Relatively High Average IQ. Cory J. Clark et al. Psychological Science, January 21, 2020. https://doi.org/10.1177/0956797619897915

Abstract: Many scholars have argued that religion reduces violent behavior within human social groups. Here, we tested whether intelligence moderates this relationship. We hypothesized that religion would have greater utility for regulating violent behavior among societies with relatively lower average IQs than among societies with relatively more cognitively gifted citizens. Two studies supported this hypothesis. Study 1, a longitudinal analysis from 1945 to 2010 (with up to 176 countries and 1,046 observations), demonstrated that declines in religiosity were associated with increases in homicide rates—but only in countries with relatively low average IQs. Study 2, a multiverse analysis (171 models) using modern data (97–195 countries) and various controls, consistently confirmed that lower rates of religiosity were more strongly associated with higher homicide rates in countries with lower average IQ. These findings raise questions about how secularization might differentially affect groups of different mean cognitive ability.

Keywords: IQ, intelligence, self-control, religion, religiosity, crime, violence, open data, open materials, preregistered


Not replicable: Self-objectified women might themselves contribute to the maintenance of the patriarchal status quo, for instance, by participating less in collective action

Two Preregistered Direct Replications of “Objects Don’t Object: Evidence That Self-Objectification Disrupts Women’s Social Activism”. Matthias De Wilde et al. Psychological Science, January 21, 2020. https://doi.org/10.1177/0956797619896273

Abstract: Self-objectification has been claimed to induce numerous detrimental consequences for women at the individual level (e.g., sexual dysfunction, depression, eating disorders). Additionally, at the collective level, it has been proposed that self-objectified women might themselves contribute to the maintenance of the patriarchal status quo, for instance, by participating less in collective action. In 2013, Calogero found a negative link between self-objectification and collective action, which was mediated by the adoption of gender-specific system justification. Here, we report two preregistered direct replications (PDRs) of Calogero’s original study. We conducted these PDRs after three failures to replicate the positive relation between self-objectification and gender-specific system-justification belief in correlational studies. Results of the two PDRs, in which we used a Bayesian approach, supported the null hypothesis. This work has important theoretical implications because it challenges the role attributed to self-objectified women in the maintenance of patriarchy.

Keywords: self-objectification, system justification, reproducibility, mini meta-analysis, open data, open materials, preregistered


Forgetfulness contributes to the maintenance of a positive and coherent self-image (“Guardian”), the facilitation of efficient cognitive function ("Librarian"), & the development of a creative and flexible worldview (“Inventor”)

The Many Faces of Forgetting: Toward a Constructive View of Forgetting in Everyday Life. Jonathan M. Fawcett, Justin C. Hulbert. Journal of Applied Research in Memory and Cognition, January 21 2020.
https://doi.org/10.1016/j.jarmac.2019.11.002

Abstract: Forgetting is often considered a fundamental cognitive failure, reflecting the undesirable and potentially embarrassing inability to retrieve a sought-after experience or fact. For this reason, forgetfulness has been argued to form the basis of many problems associated with our memory system. We highlight instead how forgetfulness serves many purposes within our everyday experience, giving rise to some of our best characteristics. Drawing from cognitive, neuroscientific, and applied research, we contextualize our findings in terms of their contributions along three important (if not entirely independent) roles supported by forgetting, namely (a) the maintenance of a positive and coherent self-image (“Guardian”), (b) the facilitation of efficient cognitive function (“Librarian”), and (c) the development of a creative and flexible worldview (“Inventor”). Together, these roles depict an expanded understanding of how forgetting provides memory with many of its cardinal virtues.

Tuesday, January 21, 2020

UK: Wind farms paid up to £3 million per day to not produce electricity last week, between 25 pct & 80 pct more than the firms, which own giant wind farms in Scotland, would have received had they been producing electricity

Wind farms paid up to £3 million per day to switch off turbines. Edward Malnick. The Telegraph, January 19 2020. https://www.telegraph.co.uk/politics/2020/01/19/wind-farms-paid-3-million-per-day-switch-turbines

Wind farms were paid up to £3 million per day to switch off their turbines and not produce electricity last week, The Telegraph can disclose.

Excerpts:

Energy firms were handed more than £12 million in compensation following a fault with a major power line carrying electricity to England from turbines in Scotland.

The payouts, which will ultimately be added onto consumer bills, were between 25 per cent and 80 per cent more than the firms, which own giant wind farms in Scotland, would have received had they been producing electricity, according to an analysis of official figures.

The payments have prompted questions in Parliament, as one charity warned that consumers were having to fund the consequences of an “excessive” number of onshore wind farms, which can overwhelm the electricity grid.

In December an analysis by the Renewable Energy Foundation, a charity that monitors energy use, revealed that the operators of 86 wind farms in Britain were handed more than £136 million in so-called “constraint payments” last year – a new record.

REF has warned that consumers are left to foot the bill for wind farm operators having to reduce their output as a result of an “excessive” number of turbines in Scotland leaving the electricity grid unable to cope on occasions such as when there are strong winds.

The Western Link, a 530-mile high-voltage cable running from the west coast of Scotland to the north coast of Wales, was built to help overcome the problem by providing more capacity to transport green energy from onshore wind farms in Scotland to England and Wales.

But the line, which became fully operational in 2018, has been dogged by difficulties.

In the latest incident, it “tripped” on Jan 10, prompting a spike in the number of wind farms being asked to shut down temporarily because they were producing more energy than could be transported to consumers’ homes.

On the following day – last Saturday – 50 wind farms were asked to stop producing electricity, and given a total of £2.5 million in compensation to do so. Last Wednesday, the figure was as high as £3.3 million, which was paid out to wind farms by National Grid’s Electricity System Operator (ESO) arm.


The Puritans of the Left are nostalgic for the Prohibition era: The Atlantic wishes that "we treated booze more like we treat cigarettes"

America’s Favorite Poison... Whatever happened to the anti-alcohol movement? Olga Khazan. The Atlantic, January 14, 2020. https://www.theatlantic.com/health/archive/2020/01/why-there-no-anti-alcohol-movement/604876/

Excerpts:

Americans would be justified in treating alcohol with the same wariness they have toward other drugs. Beyond how it tastes and feels, there’s very little good to say about the health impacts of booze. The idea that a glass or two of red wine a day is healthy is now considered dubious. At best, slight heart-health benefits are associated with moderate drinking, and most health experts say you shouldn’t start drinking for the health benefits if you don’t drink already. As one major study recently put it, “Our results show that the safest level of drinking is none.”

Alcohol’s byproducts wreak havoc on the cells, raising the risk of liver disease, heart failure, dementia, seven types of cancer, and fetal alcohol syndrome. Just this month, researchers reported that the number of alcohol-related deaths in the United States more than doubled in two decades, going up to 73,000 in 2017. As the journalist Stephanie Mencimer wrote in a 2018 Mother Jones article, alcohol-related breast cancer kills more than twice as many American women as drunk drivers do.

During World War II, the brewing industry recast beer as a “moderate beverage” that was good for soldiers’ morale. One United States Brewers’ Foundation ad from 1944 depicts a soldier writing home to his sweetheart and dreaming of enjoying a glass of beer in his backyard hammock. “By the end of the war, the wine industry, the distilled-spirits industry, and the brewing industry had really defined themselves as part of the American fabric of life,” says Lisa Jacobson, a history professor at the University of California at Santa Barbara.

In later decades, beer companies created the Alcoholic Beverage Medical Research Foundation, now called the Foundation for Alcohol Research, which proceeded to give research grants to scientists, some of whom found health benefits to drinking. More recently, the National Institutes of Health shut down a study on the effects of alcohol after The New York Times reported that it was funded by alcohol companies. (George Koob, the director of the National Institute on Alcohol Abuse and Alcoholism, told the Times that the foundation through which the funds were channeled is a type of “firewall” that prevents interference from donors.)

Regardless of how much Americans love to drink, the country could be safer and healthier if we treated booze more like we treat cigarettes. The lack of serious discussion about raising alcohol prices or limiting its sale speaks to all the ground Americans have ceded to the “good guys” who have fun. And judging by the health statistics, we’re amusing ourselves to death.

Do all mammals dream?

Do all mammals dream? Paul R. Manger, Jerome M. Siegel. Journal of Comparative Neurology, January 20 2020. https://doi.org/10.1002/cne.24860

Abstract: The presence of dreams in human sleep, especially in REM sleep, and the detection of physiologically similar states in mammals has led many to ponder whether animals experience similar sleep mentation. Recent advances in our understanding of the anatomical and physiological correlates of sleep stages, and thus dreaming, allow a better understanding of the possibility of dream mentation in non‐human mammals. Here we explore the potential for dream mentation, in both non‐REM and REM sleep across mammals. If we take a hard‐stance, that dream mentation only occurs during REM sleep, we conclude that it is unlikely that monotremes, cetaceans, and otariid seals while at sea, have the potential to experience dream mentation. Atypical REM sleep in other species, such as African elephants and Arabian oryx, may alter their potential to experience REM dream mentation. Alternatively, evidence that dream mentation occurs during both non‐REM and REM sleep, indicates that all mammals have the potential to experience dream mentation. This non‐REM dream mentation may be different in the species where non‐REM is atypical, such as during unihemispheric sleep in aquatic mammals (cetaceans, sirens and Otariid seals). In both scenarios, the cetaceans are the least likely mammalian group to experience vivid dream mentation due to the morphophysiological independence of their cerebral hemispheres. The application of techniques revealing dream mentation in humans to other mammals, specifically those that exhibit unusual sleep states, may lead to advances in our understanding of the neural underpinnings of dreams and conscious experiences.

Flynn: Nine-country PISA study suggests there is evidence of substantial decrease in students’ competencies and literacy in Language (writing) and Math, beyond possible economic and national factors

The Reversal of the Flynn Effect and Its Reflection in the Educational Arena: Data Comparison and Possible Directions for Future Research and Action. Leehu Zysberg. Roczniki Pedagogiczne, Vol 11(47) No 3 (2019). https://ojs.tnkul.pl/index.php/rped/article/view/9586

Abstract: For years, indicators of cognitive abilities and academic competencies suggested that humans’ ability to effectively cope with their environment is improving (dubbed the Flynn effect). Recent evidence suggests that this trend may be turning. This study explores data obtained from the Program for International Student Assessment for an intentional sample of 9 countries over the last 6 years and suggests that there is indeed evidence of substantial decrease in students’ competencies and literacy in Language (writing) and Math, beyond possible economic and national factors. The relevance of the results to education and its potential implications are discussed.

Keywords: student competencies, academic skills, Flynn effect, PISA, education

DISCUSSION

Leaders, educators and researchers in the field of education have addressed the evidence from both intelligence research and its educational derivative—academic literacies—suggesting we may be approaching a crisis: can our abilities be lagging behind what’s required for effectively adapting to an increasingly complex and challenging world? (Waldrop, 2016; Zysberg, 2018).

In this paper, data from the PISA tests, for an intentional sample of 9 OECD countries representing various types of developed countries, indicates that, at the very least, the growth trend suggested by the Flynn effect is not taking place in the PISA results in general, and especially not in the chosen sample, which did not include developing countries; the decrease (with one exception) was quite dramatic in both Math and Language literacies.
How can we account for such results, and how alarming are they after all? Popular voices suggest that this is merely a symptom of a much broader process: the rise of so-called ‘smart technology’ and its availability, cultural changes (especially regarding the value of learning and knowledge), the deteriorating quality of education systems and teachers, and even nutrition and health issues that plague younger generations compared to their parents (Vyas, 2019). Most authors tend to attribute the phenomenon to environmental factors: changes in lifestyle (e.g., a more sedentary lifestyle), nutrition (e.g., consuming more industrial foods), and even different games played in childhood (e.g., action shooter computer games) were mentioned as possible factors (Dockrill, 2018).
An additional line of this discussion focuses on state-level systems, such as the allocation of resources to education: general government spending on education and, even more specifically, spending on education per student is associated with student achievement (OECD, 2015). While economic factors have been consistently associated with academic performance in most education systems (Bakker, Denessen, & Brus-Laeven, 2007), it is interesting to note that some of the larger decreases in PISA scores were observed in robust economies (e.g., S. Korea, the USA). However, this line of evidence may still suggest that social and cultural priorities regarding education play an important role here.
Last but not least is looking at the results from a methodological point of view and what we know of the measurement of human competencies: longitudinal measurement of human potentials and performance often shows a bias called regression toward the mean (Rocconi & Ethington, 2009). This may mean that countries that were either very high or very low on PISA grades may show decline (for high scores) and ‘improvement’ (for low scores) merely as an artifact of repeated measurement. While this is a compelling option, we did see similar trends in countries that are more or less around the OECD’s mean score (e.g., USA, Poland).
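The regression-toward-the-mean artifact invoked here is easy to demonstrate with a toy simulation (the score scale and noise levels below are invented, loosely echoing PISA-like numbers): units selected for an extreme first measurement drift back toward the mean on remeasurement even when nothing real has changed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # toy "countries", large so the effect is stable

true_score = rng.normal(500, 40, size=n)         # stable underlying ability
test1 = true_score + rng.normal(0, 30, size=n)   # first noisy measurement
test2 = true_score + rng.normal(0, 30, size=n)   # retest, independent noise

top = test1 >= np.quantile(test1, 0.9)  # select the top decile on test 1
drop = test1[top].mean() - test2[top].mean()
print(f"apparent decline among top scorers: {drop:.1f} points")
```

The symmetric artifact — apparent ‘improvement’ among the lowest first-test scorers — falls out of the same selection, while the overall population mean stays put.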
Do we need to prepare for the end of our civilization as we know it due to the erosion of basic human competencies? Are we indeed drowning ourselves in technology and information that we can use less and less effectively? While it may still be too early to reliably tell (Stillman, 2019), it is becoming clearer that we face a dramatic change in how human competencies and literacies express themselves and how we use them. Of the existing possible explanations, the ones that stress the roles of culture and of effective resource investment in the competencies and literacy of future generations (Coburn & Penuel, 2016) are the most likely in light of the nature of the data.
Study Limitations and Directions for Future Thought

Though the results reflect worldwide trends emerging from various empirical sources, the data chosen here emphasize education-related settings and are limited in scope and in the level of analysis applied. The attempt to control potential intervening factors through the choice of an intentional sample can only be partially effective, and the patterns should be read with care. That being said, should future evidence corroborate our interim proposals and conclusions, policy makers and educators will have to team up to prevent a dangerous downslide. We live in a world that will require more and more of our ability to make sense of data and information and to make effective decisions. So far we seem to fail miserably (e.g., Lockie, 2017; Zysberg, 2018), showing a growing tendency to avoid complex information in decisions, failing to differentiate bogus facts from real facts and opinions, and finding it more and more difficult to represent our perceptions and insights in an effective manner. Will saving the human race from itself be the next task at hand for educators? Only the future can tell.


Check also Another nation in which the Flynn effect (IQ in Romania was increasing with approximately 3 IQ points/decade) seems to reverse: The continuous positive outlook is in question as modern generations show signs of IQ “fatigue”
Time and generational changes in cognitive performance in Romania. George Gunnesch-Luca, Dragoș Iliescu. Intelligence, Volume 79, March–April 2020, 101430. https://www.bipartisanalliance.com/2020/01/another-nation-in-which-flynn-effect-iq.html

A recent review implies that people judge their own true selves, or their authentic and fundamental nature, to be no better than that of others, which conflicts with self-enhancement perspectives


A Perspective-Dependent View on the True Self. Yiyue Zhang. MSc Thesis, College of Arts and Sciences, Ohio University, Dec 2019. https://etd.ohiolink.edu/!etd.send_file?accession=ohiou1572777883003345&disposition=inline

Abstract: A recent review implies that people judge their own true selves, or their authentic and fundamental nature, to be no better than that of others (Strohminger, Knobe, & Newman, 2017), which conflicts with self-enhancement perspectives that assume that people tend to view their characteristics and life prospects more favorably than those of others (Sedikides & Alicke, 2012). However, this assumption has not yet been directly assessed. The current five studies explored whether self-enhancement operates in comparative true-self judgments of traits and morally-relevant behaviors. Studies 1 to 3 showed that people rated positive and moral traits to be more characteristic of their true selves (vs. an average person’s and a close friend’s true selves). The pattern reversed for negative traits. Using hypothetical and actual moral behaviors, Studies 4 and 5 indicated that although moral decisions were generally more characteristic of own versus others’ true selves, people considered immoral decisions to be more characteristic of other people’s true selves than of their own. Together, the findings demonstrate that true self judgments are subject to self-enhancing tendencies, and are therefore perspective-dependent.

General Discussion
The goal of this paper was to investigate whether people self-enhance in true self comparisons. The true self refers to a person’s true nature or his or her authentic identity. It assumes the existence of an underlying component of a person’s identity that defines them as an individual (Christy et al., 2019). Specifically, essentialists believe that individuals possess an innate personal essence (i.e., a true self) that explains their shared similarities in psychological and behavioral resemblances across cultural and individual differences. Stemming from this essentialist perspective that individuals have immutable and inherent essences (i.e., true selves), researchers argue that true self evaluations tend to be “perspective-independent,” in which people believe that every individual is morally good deep down (Strohminger et al., 2019). So far, prior research seems to support this conclusion. For example, studies have demonstrated that people tend to attribute their own as well as others’ moral, rather than immoral, behaviors to the true self (Jongman-Sereno & Leary, 2016; Newman et al., 2014).

However, my findings that self-enhancement influences people’s true-self judgments and comparisons contrast with this commonly held notion, and suggest that true-self assessment is perspective-dependent. Specifically, in the first two studies, I addressed the question of whether true self comparisons are subject to self-enhancing tendencies at the general personality trait level. By asking the participants to compare their true selves with those of their average peer and their close friend, I obtained strong evidence of comparative self-enhancement, in which participants rated positive traits as more characteristic of their own true selves than of others’; negative traits were considered more characteristic of others’ true selves (with the exception of their close friend’s) than of their own.
Moreover, I replicated previous findings (e.g., Jongman-Sereno & Leary, 2016; Newman et al., 2014) that positive attributes are more likely to be viewed as expressions of the true self. In the third study, I extended my previous findings to morality-related personality traits. Morality, arguably, is the constitutive feature of the true self (Strohminger et al., 2017). Thus, by showing that people view moral traits as more reflective of their own true selves than of average peers’, I again found compelling evidence of self-enhancement in comparative true-self judgments. Study 3, in addition, incorporated judgments regarding the self and the potential self. I found that the potential self is viewed as more moral than the true self, suggesting that assessing the true self is not completely based on personal fantasies or future self-projections (Bargh et al., 2002; Rogers, 1961) but requires a certain level of self-knowledge (Jongman-Sereno & Leary, 2018). I also found that people believe that their true selves are more moral than their actual selves, replicating previous findings that the true self is perceived as distinct from the self (Christy et al., 2019; Strohminger et al., 2017) and that moral goodness is the core of the true self (De Freitas et al., 2018; Newman et al., 2014). In the last two studies, I tested my previous findings in a moral behavioral context. Specifically, Study 4 used hypothetical moral dilemmas, and Study 5 employed actual behaviors that participants had committed in the past. In both studies, I found that people view immoral behaviors as more characteristic of others’ true selves than of their own. Moreover, moral behaviors were considered more reflective of participants’ own true selves than of others’ true selves (Study 5). The lack of a significant difference in immoral behavioral comparisons in Study 4 might be due to the hypothetical scenarios being perceived as unrealistic.
These two studies together demonstrated that comparative self-enhancement functions in true-self judgments regarding moral information processing.

Self-Enhancement in the True Self

The value of authenticity, or being true to oneself, has been studied in many intellectual traditions. For instance, the modern concept of “self” derives, arguably, from the emerging notion in the seventeenth century that people have natural rights (Taylor, 1989), which, in turn, provides one basis for the belief in being true, or untrue, to one’s nature. From a philosophical stance, authenticity or the true self implies an underlying true nature, or psychological essence, within individuals that makes them who they truly are (Kierkegaard, 1954; Rogers, 1961). It seems clear, though, that people believe they have a true self, or at least endorse true-self beliefs, when queried in psychological experiments (e.g., Christy et al., 2019). Recent research suggests that true-self beliefs reflect “psychological essentialism,” in which the true self, as the name implies, is an aspect of self that remains invariant through surface changes (Christy et al., 2019). Some of the most interesting applications of the true-self construct in empirical research have been to show that people believe that their true selves are morally superior to their actual behavior (e.g., Newman et al., 2015). Research findings suggest that when people fall short of their behavioral ideals, they believe that there is a superior essence within that reflects their true selves more accurately.

Thus, the question pursued in the five studies described in this article can be interpreted as whether all essences, or true selves, are considered equal. If people believe in an essence that characterizes the entire human species, then there is little reason to expect one person’s essence to be better than that of another. By contrast, the extensive literature on self-enhancement in general, and comparative bias in particular, provides ample reason to question whether true-self judgments are immune from the ubiquitous self-serving tendencies that are reflected in many trait and behavior judgments (Alicke & Sedikides, 2011). The present studies call into question the strongest claim that has been made for true selves, namely, that people evaluate them just as favorably regardless of whether they belong to themselves or others. The findings from these five studies suggest the opposite: true-self assessments are subject to self-enhancement, such that people view their own true selves more favorably. Here I list two potential reasons that may account for these findings. First, individuals might be more motivated to enhance their true selves because the true self is the core and essential aspect of the self. From a self-enhancement perspective, the belief in a true self allows individuals to claim an arguably more favorable self that exists beneath their surface self, especially when their actual self is less socially desirable. The tendency to see oneself in a flattering fashion is stronger in domains that are more relevant to a person’s self-image (Pedregon et al., 2012). Thus, by construing the true self to their own advantage, individuals are able to express a skewed, often more positive, representation of their core identity that conveys who they really are.

Moreover, self-enhancement in the true self tends to be easier to achieve because of the hidden nature of the true self. Past research has shown that self-enhancement is facilitated when the judgment dimensions are abstract as opposed to objective or concrete (Sedikides & Strube, 1997). Researchers have pointed out that understanding the true self is extremely subjective because true-self judgments and comparisons are outside the boundaries of objective measurement (Strohminger et al., 2017). Thus, the unverifiability of true-self judgments might promote self-enhancement, because the possibility of invalidation is low (Alicke & Govorun, 2005; Alicke & Sedikides, 2009). The findings in this article not only suggest a perspective-dependent view of true-self judgments, but also challenge the common notion of an unbiased processing of authenticity. Kernis and Goldman (2006) argued that authenticity reflects the relative absence of self-serving bias or interpretive distortions, such as defensiveness and self‐aggrandizement, in the processing of self‐relevant information. Accordingly, individuals should objectively accept their strengths and weaknesses. A growing literature, however, questions this assumption. For example, Jongman-Sereno and Leary (2016) demonstrated that positive events are judged to be more authentic than negative ones. Similarly, Christy and colleagues (2016, 2017) have shown that thinking about one's past moral behaviors increased participants' ratings of self-knowledge (as measured by the Self-Awareness subscale of Kernis and Goldman’s (2006) Authenticity Inventory), whereas contemplating one's past immoral behaviors decreased these ratings. By showing that this positivity bias extends to self-other comparisons, these five studies provide strong support for the argument that authenticity is a biased construct.

True Self vs. Other Selves

The true self, by its nature, presumably differs from the actual self.
This distinction is implied in previous research that asks participants to compare their actual behavior with that of their true or authentic selves (Jongman-Sereno & Leary, 2016). To my knowledge, however, direct comparisons of true and actual selves have not yet been conducted, although Christy et al. (2019) have found that participants view true selves as more essential than actual selves. The findings of Study 3 directly confirmed the elevation of the true self over the actual self, thereby supporting investigators’ assumption that true selves are evaluated more favorably than actual selves (e.g., Strohminger et al., 2017). Most individuals have a vested interest in believing that there is a better self within than the one that is outwardly manifested. Even the moral people among us have presumably, on occasion, done things they regretted, or failed to live up to their expectations. Both theory (Strohminger et al., 2017) and the empirical results of Study 3 suggest that the true self is perceived as an improvement on the actual self. Although participants’ precise interpretation of this comparison standard will require further research, Christy et al. (2019) have made important strides in suggesting that people construe the true self as an enduring and essential aspect of identity. In comparing their actual selves to their true ones, therefore, participants may be thinking of a core essence that is better in most respects than its surface appearances.

In Study 3, I was interested in exploring whether another self construction—the potential self—would be even more favorably evaluated than the true self. Because potential selves point to a hypothetical future, they provide considerable latitude for construction. In essence, people are free to fantasize, and self-enhance, at will about how events will unfold in the future, with no immediate chance of invalidation. Consistent with this reasoning, I found that the potential self was evaluated more favorably than any other comparison standard.

Limits and Future Directions

Although I demonstrated that self-enhancing tendencies still operate in comparative true-self judgments, it remains unclear what the underlying mechanisms are. In other words, do individuals believe that their true selves are, by nature, fundamentally more positive than others’ true selves? Or are individuals motivated to aggrandize their true selves? From a motivational stance, self-enhancement is more concerned with the latter, as it implies that people are constantly seeking positive self-regard that is sometimes mismatched with objective reality (Alicke & Sedikides, 2011). That is, individuals are aware of their personal strengths and weaknesses to some extent, but actively construe illusory positive identities. This is consistent with the notion that self-knowledge is required to experience subjective authenticity (Kernis & Goldman, 2006; Jongman-Sereno & Leary, 2018). Therefore, distortedly favorable views of one’s true self can be seen as the result of a process of exaggerating strengths and overlooking shortcomings. From an essentialist perspective, however, the true self, as the essence of one’s identity, is held to be immutable and inherent (Christy et al., 2019).

Thus, it is also reasonable to argue that the enhanced true self comes from the belief that one's true self is innately better than those of others. In addition to investigating whether people enhance their true selves by believing their true selves are superior or by actively viewing their true selves more favorably, researchers should conduct studies that examine the effect of self-enhancement on perceived or subjectively experienced authenticity. Research has shown that positive affect, such as feeling competent, prosocial, and self-compassionate, increases subjective feelings of being authentic (Lenton, Bruder, Slabu, & Sedikides, 2013; Sedikides et al., 2018; Zhang et al., 2019). Nonetheless, to my knowledge, no research directly explores whether induced self-enhancement increases authenticity judgments. It is possible that self-enhancement improves the accessibility of positive self-views, which potentially leads to enhanced feelings of authenticity. Resolving the role of self-enhancement in true- and authentic-self judgments will require further research, but I close by speculating that essential selves, and true or authentic selves, may be distinct constructs. Previous findings clearly establish that humans believe that their nature tends toward the good, and the findings here show that people believe that “my good is better than yours.” Asking people to evaluate their “true” or “authentic” abilities or goodness, or to compare their true characteristics to others’, seems destined to prime self-enhancement concerns. Further research will hopefully help to clarify the nature of true and authentic selves, both in terms of their precise interpretation by individuals and their implications for social judgment and behavior.

Failed replication of Vohs & Schooler 2008: Manipulating free will beliefs in a robust way is more difficult than has been implied by prior work, & the proposed link with immoral behavior may not be as consistent as suggested

Nadelhoffer, Thomas, Jason Shepard, Damien Crone, Jim A. C. Everett, Brian D. Earp, and Neil Levy. 2019. “Does Encouraging a Belief in Determinism Increase Cheating? Reconsidering the Value of Believing in Free Will.” OSF Preprints. May 3. doi:10.31219/osf.io/bhpe5

Abstract: A key source of support for the view that challenging people’s beliefs about free will may undermine moral behavior is two classic studies by Vohs and Schooler (2008). These authors reported that exposure to certain prompts suggesting that free will is an illusion increased cheating behavior. In the present paper, we report several attempts to replicate this influential and widely cited work. Over a series of four high-powered studies (sample sizes of N = 162, N = 283, N = 268, N = 804) (three preregistered) we tested the relationship between (1) anti-free-will prompts and free will beliefs and (2) free will beliefs and immoral behavior. Our primary task was to closely replicate the findings from Vohs and Schooler (2008) using the same or similar manipulations and measurements as the ones used in their original studies. Our efforts were largely unsuccessful. We suggest that manipulating free will beliefs in a robust way is more difficult than has been implied by prior work, and that the proposed link with immoral behavior may not be as consistent as previous work suggests.


4. General Discussion
The free will debate has gone mainstream in recent years in the wake of scientific advances that, on some accounts, seem to undermine free will. Given the traditional associations between free will and moral responsibility, a great deal may hang on this debate. In a high-profile paper on the relationship between free will beliefs and moral behavior, Vohs and Schooler (2008) cautioned against public pronouncements disputing the existence of free will, based on their findings concerning the relationship between free will beliefs and cheating. Our goal in this paper was to replicate their landmark findings. Across four studies, we had mixed results. While we were able to influence people’s beliefs in free will in one of the four studies, we failed in our efforts to find a relationship between free will beliefs and behavior. When coupled with the work of other researchers who have had difficulty replicating the original findings by Vohs and Schooler, we think this should give us further pause.
That said, there are four primary limitations of our studies. First, in light of the results from Study 4, it is possible that there is a link between free will belief and moral behavior—we just failed to detect it because our two behavioral studies were not sufficiently high-powered. Perhaps a very high-powered (800+ participants) behavioral experiment would replicate Vohs and Schooler’s original findings. That is certainly possible, but we are doubtful that simply running another high-powered experiment would yield the desired effect. After all, our pooled data analyses have 1,089 and 551 pooled participants, respectively. Moreover, Monroe, Brady, and Malle (2016) had mixed results manipulating free will beliefs in very high-powered studies. And even when they did manage to decrease free will beliefs, they did not find any behavioral differences. So, we are not convinced that insufficient power explains our failures to replicate—especially given that Vohs and Schooler’s original studies were underpowered (N = 15-30 per cell) and yet they found very large effects both with respect to manipulating free will beliefs (d = 1.20) and influencing cheating behavior (d = 0.88). By our lights, we have done enough in this paper—when coupled with the other mixed results from attempts to replicate Vohs and Schooler (2008)—to weaken our collective confidence in the proposed relationship between free will beliefs and moral behaviors. That is not to say there is no relationship; rather, if there is one, it is likely not a relationship we should be especially worried about from the dual standpoints of morality and public policy.
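To put the sample sizes and effect sizes above in context, here is a textbook normal-approximation power calculation (an illustrative sketch, not the authors' analysis) showing the approximate per-group n needed for 80% power at various effect sizes:

```python
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison of
    standardized effect size d (normal approximation to the t-test)."""
    z = NormalDist().inv_cdf
    return 2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2

# Effect sizes reported by Vohs and Schooler (2008)
print(round(n_per_group(1.20)))  # ~11 per group for d = 1.20
print(round(n_per_group(0.88)))  # ~20 per group for d = 0.88
# A more typical small-to-moderate effect needs far larger cells
print(round(n_per_group(0.20)))  # ~392 per group for d = 0.20
```

On these assumptions, cells of 15-30 are adequate only if the true effects really are as large as originally reported; for more typical effect sizes, hundreds of participants per cell are needed, which is one way to read the replication attempts' much larger samples.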
The second potential problem with our studies is that we ran them online rather than using a convenience sample, as Vohs and Schooler did. While we tried to ensure that we mimicked their original work as much as possible, follow-up studies with a convenience sample would certainly be valuable. However, the differences in sample should not deflate the importance of our replication attempts. After all, the effect (and its societal implications) is claimed to be pervasive. If directly communicating skepticism about free will barely undermined people's beliefs and (going beyond our own data) at most resulted in only a trivial increase in bad behavior (or affected behavior in a very limited range of contexts), then the effect is arguably unimportant and unworthy of the substantial attention it has received so far. A third limitation is that we only used American participants. However, this limitation is an artifact of our goal of trying to replicate the work by Vohs and Schooler. Because they used an American sample, we used an American sample. Figuring out whether their work replicates in a non-American sample is a task for another day. That said, we would obviously welcome cross-cultural studies that implemented our paradigms to see whether our findings are cross-culturally stable.
The fourth and final limitation of our experimental design is the possibility that MTurk participants may not be as attentive as in-lab participants. To guard against this, we used an attention check and excluded any participants who failed it. We also used two items designed to encourage participants to pay attention by reminding them that they would be asked to write about the content of the vignette they read. While these measures obviously cannot guarantee that participants are paying attention, we’d like to think that they reduce the likelihood of inattention. Additionally, many lab tasks that are particularly susceptible to lapses in attention have been replicated using MTurk populations, including tasks that depend on differences in reaction times on the scale of milliseconds (e.g., Eriksen flanker tasks) and memory tasks that are heavily attention-dependent (see Woods et al., 2015 for a review).
Setting these limitations aside, we nevertheless think we have made a valuable contribution to the literature on the relationship between free will beliefs and moral behavior. Minimally, our findings serve as a cautionary tale for those who fret that challenging free will beliefs might undermine public morality. Future research on this front will have to take into consideration the difficulty of replicating both standard manipulations of belief in free will and the purported link between free will skepticism and morality. Contrary to our initial expectations, the association between free will beliefs and moral behavior appears to be elusive. As such, worries about the purported erosion of societal mores in the wake of recent advances in neuroscience are likely to be misplaced. The belief in free will appears to be more stable, robust, and resistant to challenge than earlier work suggests. While some scientists may think that their research undermines the traditional picture of agency and responsibility, public beliefs on this front are likely to be relatively slow to change. Even if beliefs about free will were to incrementally change, given the lack of association between dispositional free will beliefs and moral behavior reported by Crone and Levy (2018), it is unclear that people would have difficulty integrating such beliefs into a coherent worldview that permits the same level of moral behavior.

Monday, January 20, 2020

Extending Penfield’s findings on the primary somatosensory cortex’s homunculus to higher levels of somatosensory processing suggests a major role for somatosensation in human cognition

The "creatures" of the human cortical somatosensory system: Multiple somatosensory homunculi. Noam Saadon-Grosman, Yonatan Loewenstein, Shahar Arzy. Brain Communications, fcaa003, January 17 2020, https://doi.org/10.1093/braincomms/fcaa003

Abstract: Penfield’s description of the “homunculus”, a “grotesque creature” with large lips and hands and small trunk and legs depicting the representation of body-parts within the primary somatosensory cortex (S1), is one of the most prominent contributions to the neurosciences. Since then, numerous studies have identified additional body-parts representations outside of S1. Nevertheless, it has been implicitly assumed that S1’s homunculus is representative of the entire somatosensory cortex. Therefore, the distribution of body-parts representations in other brain regions, the property that gave Penfield’s homunculus its famous “grotesque” appearance, has been overlooked. We used whole-body somatosensory stimulation, functional MRI and a new cortical parcellation to quantify the organization of the cortical somatosensory representation. Our analysis showed first, an extensive somatosensory response over the cortex; and second, that the proportional representation of body-parts differs substantially between major neuroanatomical regions and from S1, with, for instance, much larger trunk representation at higher brain regions, potentially in relation to the regions’ functional specialization. These results extend Penfield’s initial findings to the higher level of somatosensory processing and suggest a major role for somatosensation in human cognition.



Division of labor may result in modular & assortative social networks of strong associations among those performing the same task: DOL & political polarization may share a common mechanism

Social influence and interaction bias can drive emergent behavioural specialization and modular social networks across systems. Christopher K. Tokita and Corina E. Tarnita. Journal of The Royal Society Interface, January 8 2020. https://doi.org/10.1098/rsif.2019.0564

Abstract: In social systems ranging from ant colonies to human society, behavioural specialization—consistent individual differences in behaviour—is commonplace: individuals can specialize in the tasks they perform (division of labour (DOL)), the political behaviour they exhibit (political polarization) or the non-task behaviours they exhibit (personalities). Across these contexts, behavioural specialization often co-occurs with modular and assortative social networks, such that individuals tend to associate with others that have the same behavioural specialization. This raises the question of whether a common mechanism could drive co-emergent behavioural specialization and social network structure across contexts. To investigate this question, here we extend a model of self-organized DOL to account for social influence and interaction bias among individuals—social dynamics that have been shown to drive political polarization. We find that these same social dynamics can also drive emergent DOL by forming a feedback loop that reinforces behavioural differences between individuals, a feedback loop that is impacted by group size. Moreover, this feedback loop also results in modular and assortative social network structure, whereby individuals associate strongly with those performing the same task. Our findings suggest that DOL and political polarization—two social phenomena not typically considered together—may actually share a common social mechanism. This mechanism may result in social organization in many contexts beyond task performance and political behaviour.

4. Discussion

Our main result demonstrates that, in the presence of homophily with positive influence, the feedback between social influence and interaction bias could result in the co-emergence of DOL and modular social network structure. These results reveal that self-organized specialization could give rise to modular social networks without direct selection for modularity, filling a gap in our knowledge of social organization [55] and mirroring findings in gene regulatory networks, which can become modular as genes specialize [56]. The co-emergence requires both social influence and interaction bias but, if the level of social influence is too high, its pressure leads to conformity, which homogenizes the society. Because this feedback between social influence and interaction bias has also been shown to drive political polarization [22–25], our results suggest a shared mechanism between two social phenomena—polarization and DOL—that have not traditionally been considered together and raise the possibility that this mechanism may structure social systems in other contexts as well, such as in the case of emergent personalities [11,29–31]. Furthermore, the ubiquity of this mechanism may help explain why social systems often have a common feature—modular network structure—that is shared with a range of other biological and physical complex systems [57].
Intriguingly, although our results suggest that diverse forms of behavioural specialization—and the associated modular, assortative social networks—might arise from a common mechanism, depending on their manifestation, they can be either beneficial or detrimental for the group. For example, DOL and personality differences have long been associated with beneficial group outcomes in both animal [5,58–60] and human societies [61] (although they can sometimes come at the expense of group flexibility [62]). Moreover, the modularity that co-occurs in these systems is also often framed as beneficial, since it can limit the spread of disease [63] and make the social system more robust to perturbation [55]. In contrast, political polarization is typically deemed harmful to democratic societies [64]. Thus, an interesting question for future research arises: if a common mechanism underlies the emergence of behavioural specialization and the co-emergence of a modular social network structure in multiple contexts, why would group outcomes differ so dramatically? Insights may come from studying the frequency of co-occurrence among various forms of behavioural specialization. If the same mechanism underlies behavioural specialization broadly, then one would expect multiple types of behavioural specialization (e.g. in task performance, personality, decision-making) to simultaneously arise and co-occur in the same group or society, as is the case in some social systems, where certain personalities consistently specialize in particular tasks [9,10] or in human society, where personality type and political ideology appear correlated [65]. Then, the true outcome of behavioural specialization for the group is the net across the different types co-originating from the same mechanism and cannot be inferred by investigating any one specific instantiation of behavioural specialization.
While DOL emerged when homophily was combined with positive influence, other combinations of social influence and interaction bias may nevertheless be employed in societies to elicit other group-level phenomena. For instance, under certain conditions, a society might benefit from uniform rather than divergent, specialized behaviour. This is the case when social insect colonies must relocate to a new nest, a collective decision that requires consensus-building [66]. To produce consensus, interactions should cause individuals to weaken their commitment to an option until a large majority agrees on one location. Heterophily with positive influence—preferential interactions between dissimilar individuals that reduce dissimilarity—achieves this dynamic and is consistent with the cross-inhibitory interactions observed in nest-searching honeybee swarms [67]: scouts interact with scouts favouring other sites and release a signal that causes them to stop reporting that site to others. One could imagine that similar dynamics might also reduce political polarization.
Recent work has shown that built environments—physical or digital—can greatly influence collective behaviour [16,18,68–70], but the mechanisms underlying this influence have remained elusive. By demonstrating the critical role of interaction bias for behavioural outcomes, our results provide a candidate mechanism: structures can enhance interaction bias among individuals and thereby amplify the behavioural specialization of individuals. For example, nest architecture in social insect colonies alters collective behaviour [68] and social organization [18], possibly because the nest chambers and tunnels force proximity to individuals performing the same behaviour and limit interactions with individuals performing other behaviours. Similarly, the Internet and social media platforms have changed the way individuals interact according to interest or ideology [16,69,70]: selective exposure to certain individuals or viewpoints creates a form of interaction bias that our results predict would increase behavioural specialization, i.e. political bias. Thus, our model predicts that built environments should increase behavioural specialization beyond what would be expected in more ‘open’, well-mixed environments. This prediction has evolutionary consequences: a nest can increase behavioural specialization without any underlying genetic, or otherwise inherent, diversity. Such consequences would further consolidate the importance of built environments—specifically, nests—for the evolution of complex societies. It has been previously argued that the construction of a nest may have been a critical step in the evolution of stable, highly cooperative social groups [71]. Subsequent spatial structuring of the nest would then, according to our findings, bring further benefits to nascent social groups in the form of increased behavioural specialization, e.g. DOL, even in the absence of initial behavioural and/or trait heterogeneity.
Finally, our results shed light on how plastic traits can result in scaling effects of social organization with group size, a finding that tightens theoretical links between the biological and social sciences. The founding sociological theorist Émile Durkheim posited that the size of a society would shape its fundamental organization [3]: small societies would have relatively homogeneous behaviour among individuals, but DOL would naturally emerge as societies grew in size and individuals differentiated in behaviour due to social interactions. Similar to Durkheim's theoretical framing, John Bonner famously posited that complexity, as measured by the differentiated types of individuals (in societies) or cells (in multicellular aggregations), would increase as groups grew in size [72]. Bonner argued that the differentiation among individuals was not due to direct genetic determinism but was instead the result of plasticity that allowed individuals to differ as groups increased in size. Our model supports these qualitative predictions and even predicts a rapid transition in organization as a function of group size that results from socially influenced plasticity at the level of the individual. Previous theoretical work showed that DOL could exhibit group size scaling effects even with fixed traits, but these increases in DOL quickly plateaued past relatively small group sizes [5,39]. Our model, along with models of self-reinforced traits [38], demonstrates how DOL could continue to increase at larger group sizes, a pattern observed empirically in both animal [49,73] and human societies [74,75]. For other forms of behavioural specialization, such as emergent personalities or political polarization, the effect of group size is understudied; however, our results suggest similar patterns. Our model further demonstrated that group size can affect social network structure, a dynamic that has so far only been preliminarily investigated empirically [76].
Leveraging new technologies—such as camera-tracking algorithms and social media—that can simultaneously monitor thousands of individuals and their interactions to investigate the effect of group size on societal dynamics could have significant implications as globalization, urbanization and technology increase the size of our social groups and the frequency of our interactions.

---
Modularity is a form of community structure within a group in which there are clusters of strongly connected nodes that are weakly connected to nodes in other clusters. Using each simulation's time-aggregated interaction matrix A, we calculated modularity with the metric developed by Clauset et al. [77]. A modularity value of 0 indicates community structure no stronger than expected in a random graph; positive values indicate deviations from randomness and the presence of some degree of modularity in the network.
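As a concrete illustration, the computation can be sketched with networkx's implementation of the Clauset-Newman-Moore greedy algorithm, which I take ref. [77] to describe; the toy interaction matrix below is made up:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def network_modularity(A):
    """Modularity of a time-aggregated interaction matrix A
    (symmetric, zero diagonal, entries = interaction weights)."""
    G = nx.from_numpy_array(np.asarray(A, dtype=float))
    communities = greedy_modularity_communities(G, weight="weight")
    return modularity(G, communities, weight="weight")

# Toy network: two tight 3-individual clusters joined by one weak tie.
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0.0)
A[2, 3] = A[3, 2] = 0.1
print(network_modularity(A))  # clearly positive: the network is modular
```

For this two-cluster toy matrix the metric is well above zero, whereas a dense well-mixed matrix would give a value near zero.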

Frequency of non-random interactions reveals the degree to which individuals are biasing their interactions towards or away from certain other individuals. For a random, well-mixed population, the expected frequency of interactions between any two individuals is p_interact = 1 − (1 − 1/(n − 1))². For our resulting social networks, we compared this expected well-mixed frequency to the value of each entry a_ik in the average interaction matrix resulting from the 100 replicate simulations per group size. To determine whether the deviation from random was statistically significant, we calculated the 95% confidence interval for the value of each entry a_ik in the average interaction matrix. If the 95% confidence interval for a given interaction did not cross the value p_interact, that interaction was considered significantly different from random.
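The test described above can be sketched as follows; the paper does not state how the confidence interval was computed, so a normal approximation across replicates is assumed here:

```python
import numpy as np

def significant_biases(replicate_matrices):
    """Flag entries of the mean interaction matrix whose 95% CI
    (across replicate simulations) excludes the well-mixed
    expectation p_interact = 1 - (1 - 1/(n-1))**2."""
    mats = np.asarray(replicate_matrices, dtype=float)  # (reps, n, n)
    reps, n, _ = mats.shape
    p_interact = 1 - (1 - 1 / (n - 1)) ** 2
    mean = mats.mean(axis=0)
    sem = mats.std(axis=0, ddof=1) / np.sqrt(reps)
    lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
    # True where the CI lies entirely above or below p_interact
    return (lower > p_interact) | (upper < p_interact)
```

An entry is flagged only when its whole interval sits on one side of the well-mixed expectation, matching the "did not cross" criterion.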

Assortativity is the tendency of nodes to attach to other nodes that are similar in some trait (e.g. here, threshold bias). We measured assortativity using the weighted assortment coefficient [78]. This metric takes values in the range [−1, 1], with positive values indicating a tendency to interact with individuals with similar trait values and negative values indicating a tendency to interact with individuals with different trait values. A value of 0 indicates random mixing among individuals with respect to the trait.
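One plausible implementation of the weighted assortment coefficient (after Farine 2014, which I assume ref. [78] denotes) is the weighted Pearson correlation of trait values across the two ends of every edge:

```python
import numpy as np

def weighted_assortment(A, x):
    """Weighted assortment coefficient for a symmetric interaction
    matrix A (zero diagonal) and a numeric trait vector x.
    Each edge contributes in both orientations with weight A[i, j]."""
    A = np.asarray(A, dtype=float)
    x = np.asarray(x, dtype=float)
    i, j = np.nonzero(A)        # symmetric A: both orientations included
    w = A[i, j]

    def wmean(v):
        return np.sum(w * v) / np.sum(w)

    xi, xj = x[i], x[j]
    cov = wmean(xi * xj) - wmean(xi) * wmean(xj)
    var = wmean(xi ** 2) - wmean(xi) ** 2  # xi and xj share a distribution
    return cov / var
```

On a network of two tight clusters whose members share a trait value, the coefficient approaches +1; mixing the trait values randomly across nodes drives it toward 0.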

US: The share of job vacancies requiring a bachelor’s degree increased by more than 60 percent between 2007 and 2019, with faster growth in professional occupations and high-wage cities

Structural Increases in Skill Demand after the Great Recession. Peter Q. Blair, David J. Deming. NBER Working Paper No. 26680. January 2020. https://www.nber.org/papers/w26680

Abstract: In this paper we use detailed job vacancy data to estimate changes in skill demand in the years since the Great Recession. The share of job vacancies requiring a bachelor’s degree increased by more than 60 percent between 2007 and 2019, with faster growth in professional occupations and high-wage cities. Since the labor market was becoming tighter over this period, cyclical “upskilling” is unlikely to explain our findings.

1 Introduction

The yearly wage premium for U.S. workers with a college degree has grown rapidly in
recent decades: from 40 percent in 1980 to nearly 70 percent in 2017 (Autor, Goldin, and
Katz 2020). Over the same period, the share of adults with at least a four-year college
degree doubled, from 17 to 34 percent (Snyder, de Brey, and Dillow 2019). In the
“education race” model of Tinbergen (1974), these two
facts are explained by rapidly growing relative demand for college-level skills. If the
college premium grows despite a rapid increase in the supply of skills, this must mean
that the demand for skills is growing even faster.
The education race model provides a parsimonious and powerful explanation of US
educational wage differentials over the last two centuries (Katz and Murphy 1992; Goldin
and Katz 2008; Autor, Goldin, and Katz 2020). Yet one key limitation of the model is that
skill demand is not directly observed, but rather inferred as a residual that fits the facts
above. How do we know that the results from the education race model are driven by
rising employer skill demand, as opposed to some other unobserved explanation?
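For background, the canonical CES "race" formulation (Katz and Murphy 1992; notation mine, not the paper's) makes this residual inference explicit:

```latex
% Two skill groups: college (H) and non-college (L) labor,
% with elasticity of substitution \sigma between them.
\ln\!\left(\frac{w_{Ht}}{w_{Lt}}\right)
  = \frac{1}{\sigma}\left[ D_t - \ln\!\left(\frac{H_t}{L_t}\right) \right]
```

Here D_t indexes relative demand for college skills. Because only the wage premium and the relative supply H_t/L_t are observed, D_t is backed out as a residual, which is exactly the limitation that direct vacancy data can address.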
We study this question by using detailed job vacancy data to estimate the change in
employer skill demands in the years since the Great Recession. Our data come from the
labor market analytics firm Burning Glass Technologies (BGT), which has collected data
on the near-universe of online job vacancy postings since 2007.
Our main finding is that skill demand has increased substantially in the decade following the Great Recession. The share of online job vacancies requiring a bachelor’s degree
increased from 23 percent in 2007 to 37 percent in 2019, an increase of more than 60 percent. Most of this increase occurred between 2007 and 2010, consistent with the finding
that the Great Recession provided an opportunity for firms to upgrade skill requirements
in response to new technologies (Hershbein and Kahn 2018).
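The headline figure is straightforward arithmetic on the two shares:

```python
share_2007, share_2019 = 0.23, 0.37  # BA-requirement shares from the paper
relative_increase = (share_2019 - share_2007) / share_2007
print(f"{relative_increase:.0%}")    # prints "61%", i.e. more than 60 percent
```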
We present several pieces of evidence suggesting that the increase in skill demand is
structural, rather than cyclical. We replicate the findings of Hershbein and Kahn (2018)
and Modestino, Shoag, and Ballance (2019), who show that skill demands increased more
in labor markets that were harder hit by the Great Recession. However, when we extend
the sample forward and adjust for differences in the composition of online vacancies, we
find that this cyclical “upskilling” fades within a few years. In its place, we find long-run structural increases in skill demand across all labor markets. In fact, we show that
the increase in skill demand post-2010 is larger in higher-wage cities. We also find much
larger increases in the demand for education in professional, high-wage occupations such
as management, business, science and engineering.
Previous work using the BGT data has found that it is disproportionately comprised of
high-wage professional occupations, mostly because these types of jobs were more likely
to be posted online (e.g., Deming and Kahn 2018). As online job advertising has become
more common, the BGT sample has become more representative, and the firms and jobs
that are added later in the sample period are less likely to require bachelor’s degrees and
other advanced skills.
We adjust for the changing composition of the sample in two ways. First, we weight
all of our results by the employment share of each occupation as well as the size of the
labor force in each city in 2006. This ensures that our sample of vacancies is roughly
representative of the national job distribution in the pre-sample period. Second, our preferred empirical specification controls for occupation-by-MSA-by-firm fixed effects. This
approach accounts for compositional changes over time in the BGT data.
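The within (fixed-effects) transformation behind the preferred specification can be sketched as follows; the data and column names here are hypothetical stand-ins, not the BGT schema, and the 2006 employment weights are omitted for brevity:

```python
import pandas as pd

# Hypothetical vacancy-level data; values and names are illustrative only.
df = pd.DataFrame({
    "year":   [2007, 2007, 2019, 2019, 2019, 2007],
    "occ":    ["mgmt", "retail", "mgmt", "retail", "mgmt", "mgmt"],
    "msa":    ["NYC", "NYC", "NYC", "NYC", "SF", "SF"],
    "firm":   ["a", "b", "a", "b", "c", "c"],
    "req_ba": [1, 0, 1, 1, 1, 0],
})

# Within transformation: demean the BA-requirement indicator inside each
# occupation-by-MSA-by-firm cell, so that the year contrast is identified
# only from cells observed in both periods (not from sample composition).
cell = ["occ", "msa", "firm"]
df["req_ba_within"] = df["req_ba"] - df.groupby(cell)["req_ba"].transform("mean")
print(df.groupby("year")["req_ba_within"].mean())
```

Cells that only appear late in the sample contribute nothing to the year contrast after demeaning, which is how this design absorbs the compositional drift of the vacancy data.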
Our results suggest that increasing demand for educated workers is likely a persistent
feature of the U.S. economy post-recession. Recent work has documented a slowdown
in the growth of the college wage premium since 2005 (Beaudry, Green, and Sand 2016;
Valletta 2018; Autor, Goldin, and Katz 2020). Yet this slowdown has occurred during a
period of rapid expansion in the supply of skills. We find rapid expansion in the demand
for skills, suggesting that education and technology are “racing” together to hold the
college wage premium steady.

Non-conscious prioritization speed is not explained by variation in conscious cognitive speed, decision thresholds, short-term visual memory, and by the three networks of attention (alerting, orienting and executive)

Sklar, Asael, Ariel Goldstein, Yaniv Abir, Ron Dotsch, Alexander Todorov, and Ran Hassin. 2020. “Did You See It? Robust Individual Variance in the Prioritization of Contents to Conscious Awareness.” PsyArXiv. January 20. doi:10.31234/osf.io/hp7we

Abstract: Perceptual conscious experiences result from non-conscious processes that precede them. We document a new characteristic of the human cognitive system: the speed with which the non-conscious processes prioritize percepts to conscious experiences. In eight experiments (N=375) we find that an individual’s non-conscious prioritization speed (NPS) is ubiquitous across a wide variety of stimuli, and generalizes across tasks and time. We also find that variation in NPS is unique, in that it is not explained by variation in conscious cognitive speed, decision thresholds, short-term visual memory, and by the three networks of attention (alerting, orienting and executive). Finally, we find that NPS is correlated with self-reported differences in perceptual experience. We conclude by discussing the implications of variance in NPS for understanding individual variance in behavior and the neural substrates of consciousness.


NPS=non-conscious prioritization speed


---
And then, you suddenly become aware: of a child running into the road in front of your car, of your friend walking on the other side of the street, or of a large spider in your shoe. On the timeline that stretches between non-conscious processes and the conscious experiences that emerge from them, this paper focuses on the moment in which your conscious experiences begin: just when you become aware of the child, your friend or the spider. Before this point processing is strictly non-conscious; after it, conscious processing unfolds.

For many, the idea that non-conscious processing generates visual awareness is unintuitive. Imagine suddenly finding yourself in Times Square. You may imagine opening your eyes and immediately experiencing busy streets, flashing ads and moving people, all at once. Intuitively, we feel our experience of the world is immediate and detailed. Yet this intuition is wrong; the literature strongly suggests that conscious experiences are both limited in scope (e.g., Cohen, Dennett, & Kanwisher, 2016; Elliott, Baird, & Giesbrecht, 2013; Wu & Wolfe, 2018) and delayed (e.g., Dehaene, Changeux, Naccache, & Sergent, 2006; Libet, 2009; Sergent, Baillet, & Dehaene, 2005). The feeling that we consciously experience more than we actually do is perhaps the most prevalent psychological illusion, as it is omnipresent in our every waking moment (e.g., Cohen et al., 2016; Kouider, De Gardelle, Sackur, & Dupoux, 2010).

Measurements of the “size” or “scope” of conscious experience indicate a rather limited number of objects can be experienced at any given time (e.g., Cohen, Dennett, & Kanwisher, 2016; Elliott, Baird, & Giesbrecht, 2013; Wu & Wolfe, 2018). Other objects, the ones not consciously experienced, are not necessarily entirely discarded. Such objects may be partially experienced (Kouider et al., 2010) or integrated into a perceptual ensemble (Cohen et al., 2016) yet neither constitutes fully conscious processing.

Considerable research effort has identified what determines which visual stimuli are prioritized for conscious experience. This work found that both low-level features (e.g. movement, high contrast) and higher-level features (e.g., expectations, Stein, Sterzer, & Peelen, 2012; emotional value, Zeelenberg, Wagenmakers, & Rotteveel, 2006) influence the prioritization of stimuli for awareness.

Evidently, the process that begins with activation patterns in the retina and ends with a conscious percept has a duration (e.g., Dehaene, Changeux, Naccache, & Sergent, 2006; Libet, 2009; Sergent, Baillet, & Dehaene, 2005). Considering the above examples, clearly this duration may have important consequences. If you become aware quickly enough, you are more likely to slam the brakes to avoid running over the child, call out to your friend, or avoid a painful spider bite.

Here, we focus on this aspect of how conscious experiences come about from a novel perspective. Specifically, we examine individual variability in the speed with which our non-conscious processes prioritize information for conscious awareness (i.e., do some individuals become aware of stimuli more quickly than others?). Examination of individual differences provides rich data for psychological theories (for a recent example see de Haas, Iakovidis, Schwarzkopf, & Gegenfurtner, 2019), an approach that has recently gained renewed interest (e.g., Bolger, Zee, Rossignac-Milon, & Hassin, 2019). We report eight experiments documenting robust individual differences, and examine possible mechanisms that may bring these differences about.

To examine non-conscious prioritization speed (NPS) we use two long-duration masking paradigms. The main paradigm we employ is breaking continuous flash suppression (bCFS; Tsuchiya & Koch, 2005). In bCFS, a stimulus is presented to one eye while a dynamic mask is presented to the other eye (see Figure 1). This setup results in long masking periods, which may last seconds. Participants are asked to respond when they become conscious of any part of the target stimulus. This reaction time, the duration between the initiation of stimulus presentation and its conscious experience, is our measure of participants' NPS.
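As a sketch of how such a measure can be scored and checked for reliability; the per-participant aggregation and the split-half check below are common practice for RT measures, not necessarily the paper's exact pipeline:

```python
import numpy as np

def nps_scores(rt):
    """Per-participant NPS from bCFS breakthrough reaction times.
    rt: (participants, trials) array of seconds; trials on which the
    target never broke through suppression are NaN. Lower = faster."""
    return np.nanmedian(rt, axis=1)

def split_half_reliability(rt, seed=0):
    """Spearman-Brown-corrected split-half correlation of NPS scores,
    a standard check that the score reflects a stable individual trait."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(rt.shape[1])
    half = rt.shape[1] // 2
    a = np.nanmedian(rt[:, idx[:half]], axis=1)
    b = np.nanmedian(rt[:, idx[half:]], axis=1)
    r = np.corrcoef(a, b)[0, 1]
    return 2 * r / (1 + r)  # Spearman-Brown correction
```

A high split-half value means participants' rank ordering is consistent across random trial subsets, the within-session analogue of the cross-paradigm and over-time stability the paper reports.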

Like many others (e.g., Macrae, Visokomogilski, Golubickis, Cunningham, & Sahraie, 2017; Salomon, Lim, Herbelin, Hesselmann, & Blanke, 2013; Yang, Zald, & Blake, 2007), we hold that bCFS is particularly suited for assessing differences in access to awareness for two reasons. First, CFS allows for subliminal presentations that can last seconds. Thus, unlike previous masking techniques, it allows for lengthy non-conscious processing. Second, bCFS allows one to measure spontaneous emergence into awareness, focusing on the moment in which a previously non-conscious stimulus suddenly becomes conscious.

Crucially, to overcome the limitations associated with using just one paradigm, we use another long-duration masking technique that has the same advantage, Repeated Mask Suppression (RMS; Abir & Hassin, in preparation). Using two different paradigms allows us to generalize our conclusions beyond the specific characteristics and limitations of each paradigm.

In eight experiments, we document large differences between individuals in NPS. Across experiments, we show that some people are consistently faster than others in becoming aware of a wide variety of stimuli, including words, numbers, faces, and emotional expressions.

Moreover, this individual variance is general across paradigms: Participants who are fast prioritizers in one paradigm (CFS; Tsuchiya & Koch, 2005) are also fast when tested using a different suppression method (RMS; Abir & Hassin, in preparation; see Experiment 3), a difference which is stable over time (Experiment 7). We extensively examined possible sources of this individual trait. Our experiments establish that NPS cannot be explained by variation in conscious cognitive speed (Experiment 4), detection threshold (Experiment 5), visual short-term memory (Experiment 6), and alerting, orienting and executive attention (Experiment 7). Finally, we find that differences in NPS are associated with self-reported differences in the richness of experience (Experiment 8). Based on these results we conclude that NPS is a robust trait and has subjectively noticeable ramifications in everyday life. We discuss possible implications of this trait in the General Discussion.


Discussion

Overall, the current findings paint a clear picture. In eight experiments we discovered a highly consistent, stable and strong cognitive characteristic: NPS. NPS manifested across a large variety of stimuli – from faces and emotional expressions, through language, to numbers. It was stable over time (20 minutes) and across measurement paradigms (bCFS vs. bRMS). We additionally found NPS to be independent of conscious speed, short-term visual memory, visual acuity and three different attentional functions, and largely independent of conscious detection thresholds.

In previous research differences in suppression time between stimuli (e.g. upright faces, Stein et al., 2012; primed stimuli, Lupyan & Ward, 2013) have been used as a measure of stimuli’s priority in access to awareness. In such research, individual variance in participants' overall speed of becoming aware of stimuli is treated, if it is considered at all, as nuisance variance during analysis (e.g., Gayet & Stein, 2017). A notable exception to this trend is a recent article (Blake, Goodman, Tomarken, & Kim, 2019) that documented a relationship between individual differences in the masking potency of CFS and subsequent binocular rivalry performance. Here, we greatly extend this recent result as we show that individual variance in NPS is highly consistent across stimuli and time, generalizes beyond bCFS, and is not explained by established individual differences in cognition.

Because of its effect on conscious experience, it is easy to see how NPS may be crucial for tasks such as driving or sports, and in professions such as law enforcement and piloting, where the duration required before conscious processing initiates can have crucial and predictable implications. In fact, NPS may be an important factor in any task that requires both conscious processing and speeded reaction. Understanding NPS, its underlying processes and downstream consequences, is therefore a promising avenue for further research.

Another promising direction would be to examine NPS using neuroscience tools, especially with respect to the underpinnings of conscious experience. First, understanding what neural substrates underpin individual differences in NPS may shed new light on the age-old puzzle of what determines our conscious stream. Second, understanding NPS may shed new light on some of the currently intractable problems in the field of consciousness research, such as separating neural activity that underlies consciousness per se, from neural activity that underlies the non-conscious processes that precede or follow it (De Graaf, Hsieh, & Sack, 2012). Thus, understanding NPS may provide missing pieces for many puzzles both in relation to how conscious experience arises and in relation to how it may differ between individuals, and what the consequences of such differences might be.