Sunday, December 29, 2019

Surprise: Consuming 1–5 cups of coffee/day was related to lower mortality among never smokers; many studies failed to adequately adjust for pack-years of smoking, healthy & unhealthy foods, & added sugar

Dietary research on coffee: Improving adjustment for confounding. David R Thomas, Ian D Hodges. Current Developments in Nutrition, nzz142, December 26 2019. https://doi.org/10.1093/cdn/nzz142

Abstract: Meta-analyses have reported higher levels of coffee consumption to be associated with lower mortality. In contrast, some systematic reviews have linked coffee consumption to increased risks for lung cancer and hypertension. Given these inconsistencies, this narrative review critically evaluated the methods and analyses of cohort studies investigating coffee and mortality. A specific focus was adjustment for confounding related to smoking, healthy and unhealthy foods and alcohol. Assessment of 36 cohort samples showed many did not adequately adjust for smoking. Consuming 1–5 cups of coffee per day was related to lower mortality among never smokers, in studies which adjusted for pack-years of smoking, and studies adjusting for healthy and unhealthy foods. Possible reduced health benefits for coffee with added sugar have not been adequately investigated. Research on coffee and health should report separate analyses for never smokers, adjust for consumption of healthy and unhealthy foods, and for sugar added to coffee.

Keywords: Coffee, diet, methods, confounding variables, covariates, adjustment, smoking, healthy foods, cohort studies, added sugar

[Check also the study after the discussion below]

Discussion

Cohort studies provide a crucial source of evidence for investigating the long-term effects of
specific foods and dietary patterns on health. Critical analyses can help improve the quality
of cohort research designs. The focus of this narrative review was to assess the quality of
adjustment for potential confounding in research on coffee. Although this report is based on
a small number of published articles, as far as we are aware, it is the first to systematically
examine the relationship between adequacy of adjustment for smoking and food as
covariates, and the significance of these findings for research on coffee and health
outcomes such as mortality. Evidence from 34 published studies supported the view that
inadequate adjustment for confounding by both smoking and unhealthy foods reduced the
likelihood of finding a significant health protective effect for coffee. The review also noted
that the potentially negative health effects of sugar added to coffee have not been
adequately investigated.
Inadequate adjustment for confounding between coffee consumption and smoking has led to
misleading findings in both cohort studies and meta-analyses, particularly for the association
between coffee and lung cancer and pancreatic cancer. Two meta-analyses reported a
significant association between higher coffee consumption and increased risk of lung cancer
(15, 79). Neither of these meta-analyses assessed the effectiveness of adjustment for
smoking in the individual studies they included. An association has been
reported between coffee consumption and pancreatic cancer, but this association becomes
non-significant among non-smokers and in studies which have adjusted for smoking (13). As
noted earlier, the Grosso et al. meta-analysis reported a significant linear inverse trend
between coffee intake and mortality rates among never-smokers (8). The differences
between never and ever smokers were most evident in cancer deaths. Never smokers
showed significantly lower cancer death rates with increasing coffee consumption. In
contrast, among former and current smokers, increasing coffee consumption did not reduce
risks for cancer mortality.
Given the inadequacy of adjustments commonly used for smoking, studies reporting on the
health effects of coffee, and other exposures that may be linked to smoking, should report
relative risks separately for never smokers (5, 80). As some groups show large differences in
smoking rates between men and women, separate analyses by sex should also be reported.
Future studies may also need to include the use of e-cigarettes (vaping).
These findings have implications for other dietary studies making adjustments for smoking
status, where smoking may be associated with other food variables or lifestyle patterns. For
example, an extensive review of risk thresholds for alcohol consumption, based on 83
prospective studies, used a binary variable (current smoker versus non-smoker) to adjust for
smoking (81). Adjustment using binary variables is more likely to be linked to misleading
assessments of relative risks, especially where an exposure variable (alcohol) and the
potential confounder (smoking) have a non-linear association. Many published reports which
have used binary covariates to adjust for smoking may have residual confounding resulting
from smoking-associated health risks.
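
The point about binary smoking covariates can be made concrete with a small simulation. The sketch below is not from the review; the variable names, effect sizes, and the use of Python with statsmodels are illustrative assumptions. It shows how adjusting only for an ever/never indicator leaves part of the smoking-dose confounding in the estimated coffee coefficient, while adjusting for pack-years removes it.

```python
# Minimal simulation sketch (not from the paper): adjusting for a binary smoker
# indicator, rather than pack-years, can leave residual confounding.
# All variable names and effect sizes are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000

pack_years = rng.gamma(shape=1.5, scale=10, size=n) * rng.binomial(1, 0.4, n)  # 0 for never smokers
ever_smoker = (pack_years > 0).astype(float)
coffee = 1 + 0.05 * pack_years + rng.normal(0, 1.5, n)      # coffee correlates with smoking dose
mortality_risk = 0.02 * pack_years + rng.normal(0, 1, n)    # outcome driven by dose, not coffee

def coffee_coef(covariates):
    """OLS coefficient on coffee, with the given extra covariates."""
    X = sm.add_constant(np.column_stack([coffee] + covariates))
    return sm.OLS(mortality_risk, X).fit().params[1]

print("no adjustment:      ", round(coffee_coef([]), 3))             # biased upward
print("binary smoker only: ", round(coffee_coef([ever_smoker]), 3))  # residual confounding remains
print("pack-years adjusted:", round(coffee_coef([pack_years]), 3))   # close to the true value of 0
```

With these made-up numbers, the unadjusted and binary-adjusted coefficients stay spuriously positive while the pack-years-adjusted one is near zero, mirroring the residual-confounding argument above.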
When smoking adjustments are used, authors should report explicitly how the variable or
variables were constructed for smoking adjustment and how these variables were entered
into regression analyses. Only one article included in the current review reported this detail
(52). Another concern is avoiding use of the term ‘non-smokers.’ This term is ambiguous,
and has been used to refer to both ‘never smokers’ and ‘non-current smokers.’ It is better to
use terms whose meaning is clear, such as never, former (past, previous) and
current smokers.
Higher levels of coffee intake were commonly associated with consumption of unhealthy
foods in the studies reviewed. Additional evidence for this association is evident in studies
on dietary patterns using factor analysis. Six systematic reviews focussed on food patterns
were found where coffee was reported among the specific foods related to healthy and
unhealthy eating patterns (82-87). From the six reviews, 101 individual studies using factor
analysis were examined. Among the individual studies, 14 reported the association of coffee
with the primary factors. Eleven out of 14 studies reported coffee as loading on factors
commonly labelled as ‘western’ or unhealthy among samples from nine countries. This
‘unhealthy’ pattern consisted of red meat, processed meat, refined grains, alcohol, sweet
foods and coffee (82, 84, 86, 87). These findings are consistent with the importance of
adjusting for food groups. Where potential covariation between coffee and unhealthy foods
has not been adjusted for, it is less likely that higher coffee consumption will be associated with
reduced mortality and morbidity.
Current research indicates that added sugar is a risk factor for health problems such as
obesity, cardiovascular disease and diabetes (42, 88, 89). Taking coffee with added sugar,
and flavoured coffees with sugar as a sweetener, are likely to reduce the health benefits of
coffee (90). A literature search for studies investigating the association between coffee and
health outcomes found few which reported the proportion of coffee drinkers who added
sugar and none which reported the amount of sugar added.
The omission of sugar as a potential confounder in research on coffee may be based on the
assumption that sugar intake has negligible health effects. The continued omission of added
sugar is likely to be a legacy from dietary questionnaires constructed prior to 2000, which are
unlikely to have included questions to measure sugar added to coffee. The influential
NHANES question set used for repeated national surveys in the US illustrates this problem.
In NHANES, sugar added to coffee was measured by the following questions:
123 How many cups of coffee, caffeinated or decaffeinated, did you drink? (over the
last 12 months)
Ten response categories were provided from ‘None’ to ‘6 or more cups per day.’
126 How often did you add sugar or honey to your coffee or tea?
Ten response categories were provided from ‘Never’ to ‘6 or more times per day.’
The question on added sugar or honey does not provide a quantity estimate for added sugar
(e.g. teaspoons). Reports on tea and coffee consumption based on the NHANES surveys
have ignored added sugar in the profiles of groups consuming tea or coffee. For example, a
2016 paper reported that around 75% of adults in the US drank coffee in the past 12 months,
and around 49% drank coffee daily (1). No mention was made of the proportion of coffee
drinkers who added sugar.
In contrast, the more recently constructed UK Biobank question set, used in a cohort for
which recruitment started in 2003, does allow for calculation of added sugar (91). This
survey included the following question:
How much sugar did you add to your coffee (per drink)?
Six response categories were provided, from none to 3+ teaspoons.
One of the few studies which mentioned sugar in coffee was a report on the US NIH-AARP
Diet and Health case-control study of older people (50-71 years at recruitment) (92). Among
242,171 tea and coffee drinkers in the control group, 49% did not add sugar or honey to tea
or coffee, 25% added sugar or honey and 26% added other sweeteners. Those who did not
add sweeteners to tea or coffee had a lower risk for depression than people who added any
type of sweetener (92). A Korean study reported that instant coffee mixes with added sugar
were associated with an increased risk of metabolic syndrome, compared to other types of
coffee (93).
There has been sufficient evidence at least since 2010 to justify the inclusion of added sugar
as a potential confounder in studies of the association between coffee and health outcomes.
In two umbrella reviews on coffee and health, one did not mention sugar at all (6) and the
other mentioned it as a possible limitation of the existing research (5). Given the practice of
ignoring added sugar in studies of coffee and health, nearly all the published findings on
health outcomes from drinking coffee may reflect unadjusted confounding which could
reduce the likelihood of finding health benefits from coffee. Confounding is most likely for
health outcomes where sugar has been reported as a risk factor, such as weight gain,
obesity, metabolic syndrome, diabetes and blood lipids. Confounding with added sugar may
be most likely to occur among people drinking three or more cups of coffee per day and who
add more than 1 teaspoon of sugar per cup. For example, a person drinking 5 cups per day
with 2 teaspoons of sugar per cup would have an added sugar intake of around 50 grams
per day (1 teaspoon ≈ 5 grams) just from their coffee consumption. It is possible that part of
the reduced protective effect of coffee for consumption of 5+ cups/day, which some studies
have reported, may be due to added sugar.
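
As a quick check of the arithmetic in the example above, here is a trivial sketch; the values are illustrative and it only assumes roughly 5 g of sugar per teaspoon, as stated in the text.

```python
# Minimal sketch of the added-sugar arithmetic in the passage above; values are
# illustrative, not from any dataset (assumes ~5 g of sugar per teaspoon).
def added_sugar_grams(cups_per_day: float, tsp_per_cup: float, g_per_tsp: float = 5.0) -> float:
    """Estimated grams of sugar added per day, just from coffee."""
    return cups_per_day * tsp_per_cup * g_per_tsp

print(added_sugar_grams(5, 2))  # 50.0 g/day, as in the example in the text
```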
In terms of implications for further research related to the health effects of coffee, there were
several topics for which no research was found. These include studies investigating the
association of coffee consumption with other dietary and lifestyle patterns. A particular
pattern of interest is the use of coffee as a substitute for other beverages such as alcohol or
SSB, especially in social settings. In addition, no studies were found which included self-reporting of reactions to coffee, especially among occasional drinkers of coffee. Some
people may be allergic to coffee or have adverse reactions to caffeine. Research is needed
on the strategies people use to self-manage coffee consumption at comfortable levels. One
RCT, examining the effects of coffee, required participants to drink one litre of coffee every
day for two weeks. Negative reactions (‘palpitations and tremor during the first days of
drinking the cafetière coffee’) were reported as being sufficiently severe for one participant to
drop out of the study. However, as was evident in this report (94), RCT studies are unlikely
to gather information about participants’ reactions to exposures, in this case drinking more
coffee than usual.
For some people, coffee drinking is associated with social contact and social support (95,
96). Studies linking coffee consumption with, for example, reduced risk of depression, may
have confounding due to increased social contact and support being associated with coffee
consumption (97). More research is needed on the social contexts associated with coffee
consumption and the extent to which these contexts may have beneficial effects on health.
A pattern evident among Japanese men, and which may occur in other societies, is that
some people may consume coffee instead of alcohol in settings where both types of drinks
are available. This pattern of substitution does not appear to have been investigated. As
well, changes in coffee consumption over time, and reasons for change, appear not to have
been investigated. Only a few studies have reported consistency of coffee consumption over
a period of several years (72). A pattern needing further research is the extent to which
people take up, or increase, their coffee consumption as a substitute for drinking alcohol or
sugar-sweetened beverages.
Research using self-report measures of coffee consumption should clearly describe the
questions used to measure coffee and should note whether added sugar was measured,
including flavourings which include sugar. No information was found about the various types
of milks added to coffee. For some consumers, milk may be used instead of sweeteners to
reduce the bitterness of coffee and is likely to be a healthier option than sugar.
More reviews are needed to document which research studies on food groups and patterns
have included coffee as a food variable, and which included coffee but did not report it
because it was not associated with outcomes of interest. Research using cohort samples
should assess whether coffee drinking is associated with unhealthy eating patterns and if so,
allow for this association when adjusting for potential confounders.
This review has several limitations. It was restricted to cohort or observational studies.
Cohort studies may have unadjusted confounding which is a limitation for attributing a causal
relationship between exposures and health outcomes. The findings which have been
reported here were dependent on the assumptions made when assessing the quality of
smoking adjustment and adjustment for healthy and unhealthy foods. The assessment of the
quality of adjustment and of the significance of associations between coffee consumption and mortality was dependent
on the information reported in each of the 34 articles reviewed. If this information was
inaccurate or incomplete, it could affect the findings reported.

---
Check also Caffeine extends life span, improves healthspan, and delays age-associated pathology in Caenorhabditis elegans. George L Sutphin, Emma Bishop, Melana E Yanos, Richard M Moller & Matt Kaeberlein. Longevity & Healthspan volume 1, December 1 2012. https://longevityandhealthspan.biomedcentral.com/articles/10.1186/2046-2395-1-9. This is very important because: (1) it makes clearer that this effect may be shared with other species; (2) it makes it harder to argue that the root cause is healthier people drinking more coffee, and easier to argue that the coffee itself is what is beneficial; (3) there are fewer confounders (no tobacco, no sugar, and diet can be optimized).

Abstract
Background: The longevity of an organism is influenced by both genetic and environmental factors. With respect to genetic factors, a significant effort is being made to identify pharmacological agents that extend life span by targeting pathways with a defined role in the aging process. On the environmental side, the molecular mechanisms responsible for the positive influence of interventions such as dietary restriction are being explored. The environment experienced by humans in modern societies already contains countless compounds that may influence longevity. Understanding the role played by common compounds that substantially affect the aging process will be critical for predicting and interpreting the outcome of introducing new interventions. Caffeine is the most widely used psychoactive drug worldwide. Prior studies in flies, worms, and mice indicate that caffeine may positively impact age-associated neurodegenerative pathology, such as that observed in Alzheimer’s disease.

Results: Here we report that caffeine is capable of extending life span and improving healthspan in Caenorhabditis elegans, a finding that is in agreement with a recently published screen looking for FDA-approved compounds capable of extending worm life span. Life span extension using caffeine displays epistatic interaction with two known longevity interventions: dietary restriction and reduced insulin signaling. Caffeine treatment also delays pathology in a nematode model of polyglutamine disease.

Conclusions: The identification of caffeine as a relevant factor in aging and healthspan in worms, combined with prior work in both humans and rodents linking caffeine consumption to reduced risk of age-associated disease, suggests that caffeine may target conserved longevity pathways. Further, it may be important to consider caffeine consumption when developing clinical interventions, particularly those designed to mimic dietary restriction or modulate insulin/IGF-1-like signaling. The positive impact of caffeine on a worm model of polyglutamine disease suggests that chronic caffeine consumption may generally enhance resistance to proteotoxic stress and may be relevant to assessing risk and developing treatments for human diseases like Alzheimer’s and Huntington’s disease. Future work addressing the relevant targets of caffeine in models of aging and healthspan will help to clarify the underlying mechanisms and potentially identify new molecular targets for disease intervention.

Persistent effects of cohort size & nonmarital births on cohort-specific homicide rates: These effects follow Black birth cohorts across the life course, leading to higher rates of homicides (victims & perpetrators)

The Enduring Influence of Cohort Characteristics on Race-Specific Homicide Rates. Matt Vogel, Kristina J Thompson, Steven F Messner. Social Forces, soz127, October 30 2019. https://doi.org/10.1093/sf/soz127

Abstract: This study extends research on cohort effects and crime by considering how bifurcated population dynamics and institutional constraints explain variation in homicide rates across racial groups in the United States. Drawing upon the extensive research on racial residential segregation and institutional segmentation, we theorize how the criminogenic influences of cohort characteristics elucidated in prior work will be greater for Black cohorts than for White cohorts. We assess our hypothesis by estimating Age-Period-Cohort Characteristic models with data for the total population and separately for the Black and White populations over the 1975–2014 period. The results reveal persistent effects of relative cohort size and nonmarital births on Black cohort-specific homicide rates but null effects among the White population. These effects follow Black birth cohorts across the life course, leading to higher rates of both homicide arrest and homicide victimization.

Summary and Discussion

Building on the seminal work of Richard Easterlin (1987), this study provides a
novel extension of the empirical literature on population dynamics and criminal
homicide. Consistent with the Easterlin thesis, our research is guided by the
assumption that large birth cohorts strain the regulatory capacity of social
institutions (e.g., families and schools), decrease informal social control, amplify
cross-cohort socialization, and increase competition for entry-level positions,
ultimately contributing to higher rates of cohort-specific violence. We depart
from prior scholarship by arguing that the study of cohort effects remains incomplete because scholars have yet to consider how the pernicious consequences
of residential segregation and labor segmentation entrenched within American
society have conditioned the influence of cohort characteristics on homicide
rates. Insofar as the key mechanism linking relative cohort size to criminal
conduct is the ability of social institutions to effectively integrate large birth
cohorts, it follows that the strongest cohort effects should operate within racial
groups over time. As such, the present study examined the relationships among
relative cohort size, nonmarital births, and age-by-race specific rates of homicide
for the years spanning 1975–2014.
The results from the empirical models generally support our racially bifurcated
perspective on cohort characteristics and crime. We find consistent evidence that
the proportion of nonmarital childbirths is positively associated with overall
rates of homicide arrest and victimization but no evidence of an effect of
relative cohort size on overall cohort-specific homicide rates over the past four
decades. As we elaborate in greater detail below, the discrepancy between our
findings and some prior research (e.g., O’Brien et al. 1999; Savolainen 2000) is
likely attributed to necessary differences in data source and observation period
between our work and others. When we turn to the APCC models separated by
race, the results are striking. The proportion of nonmarital births and relative
cohort size exert strong, positive effects on cohort-specific homicide arrest and
victimization rates among Blacks. The highest rates of homicide victimization
and arrest are observed among Blacks born during periods of relatively high
fertility and those born during times of high nonmarital births. We interpret
these findings in line with our key theoretical argument—the effects of cohort
composition on criminal homicide reveal themselves most strongly among the
Black population. As evidenced by the discrepancy in findings between the first
and third models in Tables 2 and 3, focusing on aggregated cohort effects for
the total population obscures important nuances in the racialized nature of the
influence of population dynamics on homicide trends in recent decades.
We speculate that the most likely culprit driving these differential effects is
a labor market that allows large White birth cohorts to edge Blacks out of
low-skilled positions. Such a mechanism does not require a far stretch of the
imagination. All else equal, labor market shocks, such as those associated with
large birth cohorts, portend that a large number of young adults will be vying
for proportionately fewer entry-level positions. When labor supply exceeds
demand, employers can be more discriminating in staffing decisions. Given the
storied history of discriminatory hiring practices in the United States and a labor
market clearly differentiated by race, it seems reasonable to expect that Blacks
will be hit especially hard during times of labor surplus. From this vantage
point, Black Americans may find themselves at a disadvantage when they are
born during a time of high fertility because they will encounter greater levels of
competition with other young African Americans and greater competition with
Whites who may edge into traditionally segmented positions. The deleterious
effects of cohort size follow Black birth cohorts across the life course, translating
into elevated rates of homicide victimization and arrest. This finding helps shed
light on the lack of an association between relative White cohort size and
homicide rates—because social institutions are better able to accommodate large
White cohorts by further disadvantaging Blacks, the otherwise criminogenic
influence of cohort size is mitigated among the White population.
Importantly, the relative cohort size findings are robust in the presence of
race-specific nonmarital fertility, which emerges as an independent predictor
of homicide for the age-specific total and Black crime rates, but not for age-specific White crime rates. This suggests that differences in the impact of relative
cohort size on race-specific crime rates cannot be attributed to differences in
supervision that arise from disparate nonmarital fertility trends. Additionally, it
hints at further ways in which the supervision capacities of institutions may be
more adaptive for White cohorts compared to Black cohorts.
To be clear, our intention is not to refute the rich scholarship stressing the
importance of neighborhood deprivation, subcultural norms, or persistent structural inequalities that are often invoked to explain differences in violence across
racial and ethnic groups. Instead, we hope to illuminate an often overlooked
demographic component of violent crime trends. Rather than supplant prior
explanations, we adopt the view that the racialized nature of cohort effects on
crime complements contemporary thinking on criminal violence. The inveterate
legacy of segregation and discrimination in American society has generated
vastly different social institutions clearly delineated along racial lines (Peterson
and Krivo 2010). Insofar as the mechanisms linking cohort characteristics to
criminal violence involve the capacity of such institutions to assimilate successive
generations, it follows that generations of Blacks and Whites have been raised
by families living in segregated neighborhoods, attended segregated schools, and
entered into segmented labor markets. The ability of such segregated institutions
to effectively socialize large cohorts of children is a defining factor in the
perpetuation of social inequalities, which have generated profound differences
in homicide rates across racial groups in the United States.
We would be remiss not to acknowledge several caveats with our analyses
that limit our ability to engage more directly with prior scholarship in this
area. For one, our empirical models necessitate age-by-race specific measures
of homicide and victimization. The SHR provides the longest-running source of
this information, but these data only extend back as far as 1975. While there
are clear advantages to using the SHR, our models do not directly align with
previous research (which has often relied on measures gleaned from the UCR
arrest records). Accordingly, the discrepancies between our findings for total
homicide offending and prior work may be due to broader issues with official
data collection over time. Because our analyses span different time frames,
we speculate that the differences in results for total homicide offending rates
may arise from differences in arrest reporting and subsequent disaggregation.
Regardless, our goal was not to dispute the pioneering work of others but rather
to explore more nuanced pathways through which race, population dynamics,
and segregation have influenced trends in criminal homicides in the United
States.
Relatedly, a growing body of literature demonstrates how exogenous shocks
and large-scale social changes can influence long-term crime trends (Rosenfeld
2018; Baumer, Vélez, and Rosenfeld 2018). When considered in the context
of the current study, we might anticipate such period effects to influence not
only homicide rates but to indirectly contribute to cohort characteristics by
differentially influencing the relative size and rates of nonmarital birth characterizing Black and White cohorts over time. The most obvious examples are
the crack cocaine epidemic and the differential impact of mass incarceration on
communities of color. Indeed, both mass incarceration and the homicide boom of
the late 1980s had a disproportionate effect on specific cohorts of young, Black
men. While beyond the purview of the current study, it is entirely possible that
these period effects reshaped the long-term life chances of this cohort, far beyond
the historical moment in which such events occurred.
We are also limited by our inability to examine the more proximate mechanisms linking cohort characteristics and criminal homicide, such as indicators
of school crowding, educational attainment, or labor market outcomes. And
indeed, the threats posed by omitted variable bias remain a considerable hurdle
in APCC models (O’Brien 2014). The most glaring omission in this regard is our
lack of measures of age-by-race specific unemployment rates, which would allow
us to examine directly whether labor market edging explains the strong effect of
cohort size on Black homicide rates. To our knowledge, no such measures exist
for the full period of observation used in our analyses. Rather than view this as
a critical limitation, we echo Robert K. Merton (1987) who argued that “before
one proceeds to explain or to interpret a phenomenon, it is advisable to establish
that the phenomenon actually exists, that it is enough of a regularity to require
and to allow explanation” (3). We view the empirical contributions described
here as a necessary first step toward confirming the differential impact of
population dynamics on violence within racial groups, thus laying the foundation
for further research into the mechanisms driving the disproportionate influence
of cohort characteristics on criminal homicide over the past 60 years.
Despite these caveats, our findings reaffirm the importance of systematically
incorporating demographic processes into criminological research. Social control
theories have long extolled the central role played by social institutions in
suppressing violent crime, but these theories have devoted little attention to the
ways in which rapid population growth might strain institutional capacity. Our
work further underscores the inexorable linkages between population dynamics
and institutional constraints in the propagation of racial inequality in the United
States. To paraphrase Richard Easterlin, year of birth indeed marks a generation
for life. In the context of criminal homicide in the United States, it is clear
that the enduring consequences of cohort characteristics for homicide offending
and victimization unfolded differently depending on race. Consistent with a
growing body of scholarship, the results presented here suggest that crime,
violence, and the perpetuation of racial inequality in the United States can
be best viewed through a historicized understanding of bifurcated population
dynamics.

Review of Jennifer A. Jones's The Browning of the New South

Review of Jennifer A. Jones's The Browning of the New South. Angel Adams Parham. Social Forces 1–4, soz141, Dec 2019, https://doi.org/10.1093/sf/soz141

Jones’s main argument is that context matters—place, class, racial composition of the area—all of these play a role in shaping migrants’ social and racial adjustment. [...] Jones shows how Latinx immigrants were welcomed at first and then, as the local economic and political context shifted, were abruptly un-welcomed. The term she uses to describe this un-welcoming is “reverse incorporation”. The two dimensions of reverse incorporation are: institutional closure and the souring of public opinion (69). As Latinx immigrants find themselves in the antagonistic process of being formally unwelcomed, they begin to cement ties with black Americans.

It is indeed striking how Janus-faced was the reception/rejection whites in Winston-Salem meted out to immigrant community members, most of whom were from Mexico. When immigrants first began to arrive, the welcome could not have been more enthusiastic: businesses were ready and willing to hire them—with or without documentation; a local bank engaged in a concerted campaign to make it as easy as possible for the newcomers to use their services; non-immigrant community members were profiled in the press as going the extra mile to ease the transition of newcomers into the community; and it was easy to obtain a driver's license even without legal papers. In addition to all of this, Winston-Salem's leaders took a trip to Guanajuato, Mexico with the expressed purpose of gaining 'a deeper understanding of the culture of Mexico's immigrants' (58).

Then, beginning about 2005, the welcome mat began to be slowly and then rudely yanked from beneath the feet of these immigrants. Key changes included the emergence of the post-9/11 state after 2001 and the drastic weakening of the economy beginning in 2008 with the recession. The drying up of the labor market made the presence of immigrants far less attractive than it had been when employers were scrambling for workers. In addition, the post-9/11 state introduced new legal restrictions and a heightened amount of surveillance that was devastating to the immigrant community—both documented and undocumented. As this process of reverse incorporation proceeded, Latinx immigrants engaged in conscious attempts to join forces with black Americans who they knew to have suffered ongoing discrimination at the hands of the white majority. Churches and non-profit agencies held meetings and events to foster the strengthening of these ties between Latinos and blacks. In addition, Jones found that most of her interviewees held very positive views of blacks but harbored relatively cool feelings toward whites who they perceived to be socially distant.  On the whole, Jones’s findings are compelling: blacks and Latinos did band together in what she terms ‘minority linked fate’ and it is clear that the local context mattered quite significantly in shaping the ways Latinos evaluated and responded to the racial terrain of Winston-Salem. [...].

First, the moderate critique. Early on Jones enjoins us to be sensitive to the varieties of local context immigrants find when they settle into different parts of the United States. She notes that while there are some broad patterns, important differences between the configuration of settlement in Los Angeles and New York versus Charlotte and Atlanta, should lead us to an analysis that frames racial change as a rapidly shifting patchwork of race relations, rather than a unifying framework .... how groups relate to one another and access resources is fluid and context dependent. (8). While all of this is certainly true, one suspects that it would be possible to advance a working analytical framework that would help us to test out in future research which factors may be more or less likely to result in distancing from blackness, and which might be more likely to result in the strategy of minority linked fate Jones finds in Winston-Salem. Although the book is based on one in-depth case study, Jones has a command of the literature conducive to drawing some stronger conclusions about which patterns are likely to lead to one outcome and which to another. As it is, we are left with an analysis of one case that challenges the mainstream immigration literature but does little to help us to understand how her case might be profitably linked to others.  Indeed, even in the closing pages of the book she continues to assert that “new racial formation patterns will not be represented by national color lines, but by patchwork quilts of race relations determined by local conditions” (196).  These opening and closing declarations make it seem that racial formations will be completely random. I do not think, however, that Jones believes this.  Even in the four cases she mentions above: Los Angeles and New York versus Charlotte and Atlanta, we see clear distinctions between major coastal cities with an established immigrant history compared to smaller southern cities that are newer immigrant destinations. Distinct regional racial histories may, perhaps, be part of the patterned difference we should examine. There are, moreover, other axes of difference that could be used to at least tentatively propose a useful comparative framework to guide our thinking about immigrants’ racial integration into various kinds of communities.

In addition to this request for greater boldness in proposing patterns that could be examined comparatively by future researchers, I also propose an alternative reading of Jones’s Winston-Salem data. I must state at the outset that this alternative reading makes no challenge to the research findings per se. Rather, it suggests a way of looking at the data that reveals a different kind of picture— much like the classic case of a drawing that can be seen either as a vase or as two people in profile facing each other.

As the argument is currently framed, Jones counterposes narratives of incorporation versus reverse incorporation and immigrant distancing versus immigrant bonding with black Americans. According to the mainstream account in the literature, immigrants of all kinds find it advantageous to distance themselves from black Americans. If they have capital in light skin, they may invest this in whiteness; if not, they engage in cultural options that symbolically distance them from blackness. Jones claims that this is only one option and that, in certain contexts, friendly relations between blacks and Latinx immigrants are quite possible and even likely.

The alternative reading proposed here, however, is that both the mainstream account and the one Jones offers in her book are but two sides of the same coin, where immigrants respond carefully to the default settings of racism and white supremacy they encounter in the United States. In some cases, the tools that sustain these twin rails are latent, lying in wait beneath the surface of everyday life, while in others they are aggressively deployed. Immigrant responses to the racial terrain vary based on the latency/deployment of the tools. When, for instance, immigrants enter a setting such as Winston-Salem in the 1980s–1990s, where racism and white supremacy are largely latent, they can embrace aspirational whiteness or maintain neutrality in race relations and racial positioning because the stakes are relatively low. But then, as economic and security crises shake the white community, these latent tools are taken out and deployed. Under these new conditions, Latino newcomers find it much more difficult to avoid the question of race or to engage in aspirational whiteness and distancing from blacks. At such a time, cross-racial linkages become more important and advantageous. If this alternative reading is correct, then Jones's findings are not as far from the mainstream account as she thinks they are. It would still be the case that the default position for most Latino immigrants is to aspire toward the privileges of whiteness and to distance themselves from blackness when conditions allow for this.

It is, admittedly, difficult to be certain that this alternative reading works in Jones’s case. While she presents plenty of data from Latinx interviewees that is favorable to blacks, it is difficult to know how much of this favoring is due to enduring the difficulties of reverse incorporation and how much of this friendly sentiment was long-standing even before the difficulties emerged. In the end, however, whether the alternative reading does or does not apply, Jones presents a strong case that shows us that place and context matter and that the racial future cannot be read simplistically from the racial past.

YouTube radicalization: The recommendation algorithm actively discourages viewers from visiting radicals/extremists, favors left-leaning or neutral channels & mainstream media over independent channels

Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization. Mark Ledwich, Anna Zaitsev. arXiv Dec 24 2019. https://arxiv.org/abs/1912.11211

Abstract: The role that YouTube and its behind-the-scenes recommendation algorithm plays in encouraging online radicalization has been suggested by both journalists and academics alike. This study directly quantifies these claims by examining the role that YouTube's algorithm plays in suggesting radicalized content. After categorizing nearly 800 political channels, we were able to differentiate between political schemas in order to analyze the algorithm traffic flows out and between each group. After conducting a detailed analysis of recommendations received by each channel type, we refute the popular radicalization claims. To the contrary, these data suggest that YouTube's recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels with slant towards left-leaning or politically neutral channels. Our study thus suggests that YouTube's recommendation algorithm fails to promote inflammatory or radicalized content, as previously claimed by several outlets.

V. LIMITATIONS AND CONCLUSIONS

There are several limitations to our study that must be considered for the future. First, the main limitation is the anonymity of the data set and the recommendations. The recommendations the algorithm provided were not based on videos watched over extensive periods. We expect, and have anecdotally observed, that the recommendation algorithm gets more fine-tuned and context-specific after each video that is watched. We currently do not have a way of collecting such information from individual user accounts, but our study shows that the anonymous user is generally directed towards more mainstream content than extreme content. Similarly, anecdotal evidence from a personal account shows that YouTube suggests content that is very similar to previously watched videos while also directing traffic into more mainstream channels. That is, contrary to prior claims, the algorithm does not appear to stray into suggesting videos several degrees away from a user's normal viewing habits.

Second, the video categorization in our study is partially subjective. Although we have taken several measures to bring objectivity into the classification and analyzed agreement between labelers by calculating intraclass correlation coefficients, there is no way to eliminate bias. There is always a possibility of disagreement and ambiguity in categorizations of political content. We therefore welcome future suggestions to help us improve our classification.

In conclusion, our study shows that one cannot proclaim that YouTube's algorithm, in its current state, is leading users towards more radical content. There is clearly plenty of content on YouTube that one might view as radicalizing or inflammatory. However, the responsibility for that content lies with the content creators and the consumers themselves. Shifting the responsibility for radicalization from users and content creators to YouTube is not supported by our data. The data show that YouTube does the exact opposite of the radicalization claims. YouTube engineers have said that 70 percent of all views are based on the recommendations [38]. When this remark is combined with the fact that the algorithm clearly favors mainstream media channels, we believe it would be fair to state that the majority of views are directed towards left-leaning mainstream content. We agree with Munger and Phillips (2019) that the scrutiny for radicalization should be directed at the content creators and the demand and supply for radical content, not the YouTube algorithm. On the contrary, the current iteration of the recommendation algorithm is working against the extremists. Nevertheless, YouTube has conducted several deletion sweeps targeting extremist content [29]. These actions might be ill-advised. Deleting extremist channels from YouTube does not reduce the supply of the content [50]. These banned content creators migrate to other, more permissive video hosting sites. For example, a few channels that were initially included in the Alt-right category of the Ribeiro et al. (2019) paper are now gone from YouTube but still exist on alternative platforms such as BitChute. The danger we see here is that there are no algorithms directing viewers from extremist content towards more centrist material on these alternative platforms or the Dark Web, making deradicalization efforts more difficult [51]. We believe that YouTube has the potential to act as a deradicalization force. However, it seems that the company will have to decide first if the platform is meant for independent YouTubers or if it is just another outlet for mainstream media.
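
For readers unfamiliar with the inter-rater check mentioned above, here is a minimal sketch of how labeler agreement could be computed as an intraclass correlation. This is not the authors' code; the channel names, labeler ids, scores, and the use of the pingouin library are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' code): checking labeler agreement with an
# intraclass correlation coefficient. All values below are made up.
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "channel": ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
    "labeler": ["r1", "r2", "r3"] * 4,
    "score":   [1, 1, 2, 3, 3, 3, 2, 2, 1, 3, 2, 3],  # e.g. position on a hypothetical 1-3 political scale
})

icc = pg.intraclass_corr(data=ratings, targets="channel", raters="labeler", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])  # the ICC2/ICC2k rows reflect agreement across the three labelers
```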


A. The Visualization and Other Resources
Our data, channel categorization, and data analysis used in this study are all available on GitHub for anyone to see. Please visit the GitHub page for links to the data or the data visualization. We welcome comments, feedback, and critique on the channel categorization as well as other methods applied in this study.

B. Publication Plan
This paper has been submitted for consideration at First Monday.



Response to critique on our paper “Algorithmic Extremism: Examining YouTube’s Rabbit Hole of Radicalization” https://medium.com/@anna.zaitsev/response-to-critique-on-our-paper-algorithmic-extremism-examining-youtubes-rabbit-hole-of-8b53611ce903

There is less support for redistribution & race-targeted aid among blacks in the U.S. today than in the 70s; anti-black stereotypes have had consequences for political attitudes for whites, but for blacks too

Inequality, Stereotypes and Black Public Opinion: The Role of Distancing. Emily M. Wager. http://www.emilymwager.com/uploads/1/2/7/2/127261100/inequality_stereotypes_and_black_public_opinion.pdf

Abstract: There is less support for redistribution and race-targeted aid among blacks in the U.S. today than in the 1970s, despite persistent and enduring racial and economic disparities. Why? I argue that anti-black stereotypes suggesting blacks are lazy and reliant on government assistance have not only had consequences for political attitudes of whites but blacks as well. I note that as stigmas persist, they can have durable effects on the groups they directly stigmatize. To combat being personally stereotyped, some members of stigmatized groups will practice “defensive othering,” where one accepts a negative stereotype of one’s own group and simultaneously distances oneself from that stereotype. I illustrate the ways in which defensive othering plays a role in black attitudes toward redistribution using individual and aggregate level survey data, as well as qualitative interviews


6 Conclusion
When talking to ordinary people, I observed how Americans, including blacks, expressed disapproval of the high level of access citizens have to government assistance, recited scripts about meritocracy, and brought up others they knew that had “abused the system.” However, these opinions must be placed in a broader historical context, one in which for decades whites have restricted blacks’ access to distributive and redistributive programs, reinforced racialized images of government assistance recipients and justified racial inequalities through claims of meritocracy. These racist messages are part of the “smog in the air” we all breathe.

This study makes the argument that the stigmatization of blacks as lazy recipients of government assistance has the potential to shape blacks’ own reported attitudes about the role of government in addressing inequalities. I argue this should be seen as a condition of internalized racial oppression, specifically “defensive othering,” which involves the acceptance of negative group stereotypes while simultaneously distancing oneself from that stereotype. Internalized oppression is not a reflection of weakness, ignorance or inferiority on the part of the subordinate group. Instead, as Pyke (2010) succinctly states, “all systems of inequality are maintained and reproduced, in part, through their internalization by the oppressed” (p. 552). This study shows that belief in blacks’ unwillingness to work is not an uncommon feature in black public opinion today. Consequently, acceptance of this stereotype among black individuals leads them to be significantly less demanding of race and nonrace based government aid. This study also suggests a media environment that disproportionately characterized the poor as lazy and black may help to explain the rise in blacks’ acceptance of in-group stereotypes.

There are several avenues for future research. First, scholars might consider what individual and contextual factors contribute to the acceptance of negative in-group stereotypes and the consequences for political attitudes. Variation in racial and socioeconomic contexts, such as neighborhoods, could very well lead to public opinion change (Cohen and Dawson, 1993; Gay, 2004). Future research would also benefit from examining more robust measures of media coverage by extending the time frame and examining the content of mainstream media as well as black media. Finally, I identify stereotypes as one possible reason for the shifts we see in black public opinion. Scholars might also consider how actual experiences with social services shape political attitudes. For example, qualitative researchers have found that social service providers can purposefully lead recipients to adopt more neoliberal attitudes (Woolford and Nelund, 2013).

This study serves several purposes. First, it aims to build a deeper understanding of racial minorities' redistributive policy preferences in a literature where they are often ignored. This disregard for black public opinion is a part of a larger failure to recognize blacks as more than the object of whites' resentment in the study of race in political science (Harris-Lacewell, 2003). Simply put, the public's relatively weak demand for redistribution despite extreme inequality may not be able to be understood with only one theory. My explanation suggests that when blacks and whites are asked in surveys about redistributive policies, they are often not drawing on the same considerations. Given that blacks have been most negatively depicted in relation to these policies, they have more motivation than whites to distance themselves from freeloading stereotypes. Second, while American politics scholars have paid much attention to the role of racial biases in whites' political attitudes, this study explores how racism can impact the attitudes of the marginally situated people racism targets. This fits in with a large literature that identifies the negative psychological and physiological consequences of stereotypes for members of stigmatized groups [footnote 19: some notable studies include Steele (2016); Blascovich et al. (2001); Burgess et al. (2010); Cohen and Garcia (2005); Lewis Jr and Sekaquaptewa (2016)], but is rare in the study of political attitudes and behavior. Finally, in this paper, I rely on an interpretive perspective to study public opinion, which encourages researchers to not analyze opinion and behavior as divorced from the historical and social context in which they take place.

From 2017... Under what conditions factual misperceptions may be effectively corrected: Ingroup members, specifically co-partisans and peers, are perceived to be more credible & more effective correctors

Correcting Factual Misperceptions: How Source Cues Matter. Emily Wager. Master's Thesis, University of North Carolina at Chapel Hill. 2017. https://pdfs.semanticscholar.org/46da/d633c4d636be01e5cacb1d8c6798534a1303.pdf

Abstract: From the birther movement to the push of “alternative facts” from the White House, recent events have highlighted the prominence of misinformation in the U.S. This study seeks to broaden our understanding of under what conditions factual misperceptions may be effectively corrected. Specifically, I use Social Identity Theory to argue that ingroup members, specifically co-partisans and peers, are perceived to be more credible, and in turn are more effective correctors, than outgroup members (out-partisans and elites), contingent on identity strength. I also argue that peers should be effective correctors among those with low levels of institutional trust. To test my expectations, this study employs a 2 x 2 experimental design with a control group to determine how successful various source cues are at changing factual beliefs about a hotly debated topic in the U.S.— immigration. Overall I find preliminary support for my expectations.

Discussion

The experiments in this paper help us understand how factual beliefs about politics
can be changed by manipulating the source of the corrections. I find that responses to
corrections from peers and co-partisans differ significantly according to subjects’ group
identity and trust in institutions. As a result, the corrections about immigration are most
successful among those that strongly identify with the group the source is a member of,
whether a co-partisan or peer.
My findings contribute to the literature on correcting misperceptions in several respects.
First, while prior corrections research has exclusively used a between-subjects design to
circumvent the possibility that people feel grounded in the responses they gave before
receiving a correction (Nyhan & Reifler, 2010), this study employed a within-subjects design
in order to establish how subjects actually change their factual beliefs. Indeed, my findings
demonstrate that substantial movement in reported factual beliefs does happen following
corrections and that these beliefs typically hold over time.
Second, the results from this study corroborate previous work demonstrating that the
impact of source cues on opinion varies systematically by individual identification with the
source (Hartman and Weber 2009). Among those who strongly identify with their party,
inparty corrections are successful in moving factual beliefs in the accurate direction while
outparty corrections lead to a “backfire” effect, pushing individuals in the opposite direction. On the other hand, weak partisans are receptive to corrective information from outparty
members. Therefore, while the literature on the impact of ideological source cues on correcting misperceptions has offered inconsistent findings (Nyhan and Reifler 2013; Berinsky
2015), the results in this paper demonstrate that not all Republicans and Democrats respond
uniformly to corrections—strength of group identification matters. While I theorize that
perceived credibility of ingroup and outgroup members is the causal mechanism at work, it
would be valuable to directly study how individuals evaluate various sources on key characteristics (trustworthiness, knowledge, etc.,). This would allow social scientists to gain a
better understanding of specifically why certain sources are successful and others are not.
While research on source cues in political science has largely focused on partisan cues,
these findings also contribute to our understanding of how voters respond to information
from elites and peers. In accordance with my expectations, I find some evidence that peers
are more successful correctors than elites, especially among those who strongly identify
with their peer group and among those who have weak trust in institutions. These findings add
to existing work (Attwell and Freeman 2015) seeking to understand how social groups can
promote accuracy in cases when experts or elites appear ineffective. Future work should
further explore how voters evaluate the credibility of elites and peers differently, and what
other individual-level factors might explain why certain people are more receptive to political information from peers than elites. It would also be valuable to replicate these findings
on peer groups other than university students or use real peers that are not fabricated. Lastly,
while peer cues in this experiment did not have the substantial impact on factual beliefs that
was expected across all four statements, they should not be dismissed as irrelevant. Walsh
(2004) illustrates the importance of face-to-face interactions among small peer groups in
political thinking. It is possible that factual corrections in the context of these sort of peer
interactions are effective, and scholars should aim to understand the significance of such
interactions in the real world.
Of course, I am mindful of the inherent limitations of the evidence presented here. The
most serious limitation to this experiment was the scale used to measure the dependent
variable, which confounds confidence and acceptance/rejection of false statements. For this
reason it is difficult to untangle the differences between changes in acceptance or rejection
of beliefs and actual confidence in beliefs. For example, it could be possible that a peer
correction could just make subjects more confident in their already (correct) beliefs, and not
actually encourage a switch from acceptance to rejection of a false statement. Subsequent
studies should unpack these distinctions.
There is also the question over whether individuals’ reported factual beliefs are in fact
sincere and not just expressive partisan cheerleading (Bullock, Gerber and Seth 2015). I
argue that sincerity is largely inconsequential here. If subjects are willing to report strong
confidence in falsehoods in a survey, this cannot be completely irrelevant to the way they
perceive the political world. Lastly, even if misperceptions are successfully corrected, there is no
guarantee that subjects’ political attitudes actually move in a particular direction. As Gaines, Kuklinski, Quirk, Peyton and Verkuilen (2007) note, the fact that people “update
their beliefs accordingly need not imply they update their opinions accordingly [emphasis
added]” (p. 971). However, because factual beliefs and attitudes are so closely intertwined, measuring
the two together in a questionnaire might have discouraged individuals from updating accurately, since it reminds them that certain beliefs are inconsistent with their worldview.
The purpose of this study is only to examine the conditions under which strong beliefs in
falsehoods (not issue positions) can be effectively challenged, but future work should explore how relevant factual beliefs shape opinions on immigration. While prior work has
found no evidence that correcting factual beliefs about immigrant population sizes leads
to attitude change (Lawrence and Sides 2014; Hopkins, Sides and Citrin 2016), these are
likely not the only factual beliefs that inform voters’ respective attitudes.

Love the Science, Hate the Scientists: Conservative Identity Protects Belief in Science and Undermines Trust in Scientists

Love the Science, Hate the Scientists: Conservative Identity Protects Belief in Science and Undermines Trust in Scientists. Marcus Mann, Cyrus Schleifer. Social Forces, soz156, December 23 2019. https://doi.org/10.1093/sf/soz156

Abstract: The decline in trust in the scientific community in the United States among political conservatives has been well established. But this observation is complicated by remarkably positive and stable attitudes toward scientific research itself. What explains the persistence of positive belief in science in the midst of such dramatic change? By leveraging research on the performativity of conservative identity, we argue that conservative scientific institutions have manufactured a scientific cultural repertoire that enables participation in this highly valued epistemological space while undermining scientific authority perceived as politically biased. We test our hypothesized link between conservative identity and scientific perceptions using panel data from the General Social Survey. We find that those with stable conservative identities hold more positive attitudes toward scientific research while simultaneously holding more negative attitudes towards the scientific community compared to those who switch to and from conservative political identities. These findings support a theory of a conservative scientific repertoire that is learned over time and that helps orient political conservatives in scientific debates that have political repercussions. Implications of these findings are discussed for researchers interested in the cultural differentiation of scientific authority and for stakeholders in scientific communication and its public policy.


Discussion and Conclusion

Confidence in the scientific community has declined among political conservatives in recent years, but attitudes toward scientific research as a benefit to society have remained stable. Meanwhile, conservative social movements have established their own conservatively oriented scientific institutions (e.g., see Dunlap and McCright 2016; Dunlap and Jacques 2013; Jacques, Dunlap, and Freeman 2008; McCright and Dunlap 2000, 2003, 2010; Gross et al. 2011), and the dawn of the Internet and social media has made it easier than ever for conservative audiences to access conservative knowledge. The preceding analysis aimed to show how these developments intersect by demonstrating that stable conservative partisans are more likely than their switching counterparts to distrust the scientific community and to believe that scientific research is a benefit to society. These findings support arguments that conservative efforts to communicate alternative scientific knowledge have been successful insofar as stable conservatives maintain trust in science while rejecting the authority of mainstream scientists. The implications of these developments are numerous.

First, this study replicates the findings of Gauchat (2012) and helps confirm one of the most dramatic trends in scientific perceptions in the last fifty years. Second, we build on previous work (O’Brien and Noy 2015; Roos 2017) that shows how rejections of mainstream scientific knowledge often signal specific cultural perceptions as opposed to deficits in scientific knowledge itself (although see Allum, Sturgis, Tabourazi, & Brunton-Smith, 2008; Sturgis & Allum, 2004). We contribute to this work by studying political identity and scientific attitudes and finding that rejections of scientists need not be driven by a broader rejection of scientific research itself. This is further evidence that cultural communities viewed as being anti-science maintain a complex arrangement of scientific perceptions that can include high levels of scientific knowledge and positive views of scientific research. Furthermore, consistent identification in such a community can be indicative of positive scientific attitudes.

We are not the first to examine how membership in a cultural community affects perceptions of science. Moscovici (1961/2008) coined the concept of “social representations” by studying how the advent of psychoanalysis was received and communicated among three different moral communities in France—urban-liberals, Catholics, and Communists—and observing how new scientific ideas were refracted through the organizational and cultural lenses of these social milieus. This study extends this long line of research on cultural membership and scientific perceptions by examining the issue of consistency in political identity and attitudes toward scientists and scientific research, as opposed to interpretations of a distinct scientific discipline or the relationship between scientific knowledge and attitudes.

More specifically, this research applies Perrin et al.’s (2014) performative theory of conservative identity and extends their work by examining it in the context of identity stability. Identity stability is important for a performative theory of political identity because it reflects enduring familiarity with and acceptance of elite characterizations of political identity. In other words, if conservatives learn to be conservative (or if any partisan learns to be partisan), identity stability is a direct reflection of a period in which this learning can occur and the resilience of this identity through national political change. We find that consistent identification predicts having learned that it is scientists, and not science itself, that produce findings counter to conservative political goals. Furthermore, learning implies teaching and we have also argued that the pattern of attitudes shown here is indicative of successful social movement efforts to establish alternative and conservatively oriented institutions of knowledge (Gross et al. 2011). In this respect, we join other scholars in identifying the construction of politically partisan knowledge institutions as an important social movement outcome that has been under-studied among social movement scholars (Frickel and Gross 2005; Gross et al. 2011) and especially by those interested in framing processes (Benford and Snow 2000; Snow et al. 1986).

Several limitations to our empirical analysis warrant discussion. Most importantly, these data were not ideal for examining the mechanisms of engagement with conservative science explicitly. Computational researchers are well positioned to more accurately measure exposure to, and consumption of, conservative scientific information online. This type of work is well underway in the context of political news media (Barberá et al. 2015; Conover, Ratkiewicz, and Francisco 2011; Etling, Roberts, and Faris 2014; Faris et al. 2017; Guess, Nyhan, and Reifler 2018), but very little explores the impact of conservative scientific institutions. Think tanks like the Heritage Foundation and Discovery Institute, while unique in their missions and ideologies, offer politically conservative and religiously fundamentalist scientific resources to their audiences respectively, while partisan content creators like “Prager University” provide conservative information to subscribers with the veneer of an academic approach. But the effect of increased exposure to these kinds of partisan scientific resources—whose main point of public contact is through the Internet and social media—remains unclear.

In this article, we were not able to directly measure the consumption of conservative scientific information on the Internet, but we can offer some suggestive evidence that getting scientific information from the Internet makes a difference for stable and unstable conservative attitudes. Using a question that asks, “Where do you get most of your information about science and technology?,” we can examine how using the Internet to consume scientific information affects differences between stable and unstable conservatives on our two dependent variables over time. Figure 2 shows these descriptive trends from 2006 to 2014 using fractional polynomial best fit trend lines with 95% confidence intervals. It is most important to note that stable conservatives who get their scientific information from the Internet are among the least likely to trust scientists over this timespan and the most likely, by a good margin, to see scientific research as a benefit. They are the group with the largest gap between their trust in scientists and belief in the benefits of scientific research.

[Figure 2. Descriptive trends in attitudes towards science among stable and unstable conservatives by science media outlet. Fractional polynomial best fit trend lines with 95% confidence interval in gray shading (Source: General Social Survey Panel, 2006–2014).]
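
The trend lines in the figure rely on fractional polynomials, which select a power transform of the x-variable before fitting an ordinary regression. As a rough illustration only (assuming a one-term fractional polynomial and using made-up yearly means; this is not the authors' code or the GSS data), a grid search over the conventional power set could look like this:

```python
# Minimal sketch of a one-term fractional polynomial fit, y ~ b0 + b1 * x**p,
# with the power p chosen from the conventional set {-2, -1, -0.5, 0, 0.5, 1, 2, 3}
# (p = 0 is treated as log x). Illustrative only; not the authors' code or data.
import numpy as np

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

def fp_transform(x, p):
    """Fractional-polynomial power transform (natural log when p == 0)."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if p == 0 else x ** p

def fit_fp1(x, y):
    """Grid-search the power giving the smallest residual sum of squares."""
    best = None
    for p in POWERS:
        X = np.column_stack([np.ones(len(x)), fp_transform(x, p)])
        coef, resid, _, _ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(resid[0]) if resid.size else float(np.sum((y - X @ coef) ** 2))
        if best is None or rss < best[2]:
            best = (p, coef, rss)
    return best  # (power, [intercept, slope], rss)

# Hypothetical group means per panel wave (real GSS panel estimates would replace these).
years = np.array([2006.0, 2008.0, 2010.0, 2012.0, 2014.0])
trust = np.array([0.42, 0.40, 0.37, 0.36, 0.35])
power, coef, rss = fit_fp1(years, trust)
print(f"best power: {power}, intercept: {coef[0]:.3f}, slope: {coef[1]:.3g}")
```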


This aligns with our overall analyses—in that ostensibly greater access to partisan scientific authority exaggerates this gap for conservatives—but it remains a suggestive finding for future research to adjudicate more thoroughly. For instance, are these patterns really the result of better access to partisan science, or is there something qualitatively different about online scientific content that exaggerates perceptions of scientists as over-stepping their authority (Evans 2018)? And in what ways are the populations getting their scientific information online different from others? Work in this vein could help answer important descriptive questions about conservative scientific sources, including how pervasive and heterogeneous they are, and what associations exist between the sources themselves in terms of shared staff, audiences, and even content. A comprehensive study on public-facing scientific sources online could help map the cultural heterogeneity of scientific communication itself beyond the politically binary analysis provided here and provide a welcome point of comparison by suggesting other cultural scientific repertoires that are orienting and enabling of participation in scientific debates.

Future research should also include qualitative examinations of the conservative scientific repertoire. Differences and similarities in how stable liberals and conservatives, both groups that report high levels of belief in scientific research as a benefit to society, talk about and understand scientific issues are not well understood. Just as Swidler (2001) examined how people brought the universally valued concept of love to bear on their particular circumstances, future researchers can examine how political partisans selectively deploy “science” and its related concepts in their daily lives. This includes further examination into how attitudes toward scientists and scientific research are partitioned and how this disassociation is expressed or reconciled in the context of in-depth interviews. Scholars of religion and science (see e.g., Ecklund 2012; Ecklund and Scheitle 2017; Evans 2018) have been hard at work on questions like these and have set the stage for similar work on political partisans, including in non-US contexts.

These findings also raise questions about how cultural groups navigate moments of institutional trust and their relationships with other communities that may not support their worldview. The title of this article is a play on the (conservative) Christian saying, “Love the Sinner, Hate the Sin,”—a call to separate the actor (the sinner who might accept God’s forgiveness) and the action (the sin, which is against God’s will) in terms of one’s attitude toward a social performance (the sinner committing the sin). For our case, the process is inverted, with the political conservative showing low approval for the actor (the scientist) while maintaining high approval for the action/process (the method of science). In both cases, individuals have the cognitive ability to separate actor and action in their evaluations, an ostensibly counter-intuitive process, hence the need for a snappy turn of phrase. Testing when and under what conditions people make striking actor/action distinctions in their evaluations is beyond the scope of this article. However, we demonstrate the integral role of identity and cultural membership in these processes, suggesting future research that might examine variation in actor/action evaluations among different cultural groups.

For example, we show how attitudes toward individual elites (scientists) are hurt, while attitudes toward the institutional practice (scientific research) are protected for stable political conservatives. But how this distinction between actors and action extends to other cultural groups and institutions depends on a variety of factors. Parallels might be found in how stable political liberals view capitalist institutions, where economic elites might be viewed unfavorably while belief in capitalism itself as an overall benefit to society remains stable. Other movements distrust elites and seek the abolition of entire institutions (e.g., anti-religious atheists), while still others distrust institutions while preserving positive attitudes toward individuals within them, as when political reactionary movements like the Tea Party or the Democratic Socialists successfully place leaders in elected political roles. This line of thinking suggests that actor/action distinctions are not indicative of conservatism itself or any kind of specifically conservative mentality (Mannheim 1993). We argue that one mechanism guiding the organization of these attitudes is whether an institution is politically useful (i.e., whether scientific appeals might help conservatives make political arguments), but further comparative studies can elucidate how different contexts shape attitudes toward individual elites and the institutions of which they are a part.

Finally, these results carry implications for science communication policy experts and strategists. Those conservatives most skeptical of man-made climate change and the scientists promoting it are also the most likely to believe that scientific research is a general benefit to society. Therefore, policy that promotes the idea of science as a valid epistemology in order to increase belief in anthropogenic climate change seems misguided. Rather, outreach efforts might be more effective if geared toward humanizing the scientific community and correcting misperceptions of scientists themselves. By improving public agreement on where legitimate and trustworthy science is being accomplished, future debates at the intersections of science and politics can begin to focus more on what problems to prioritize instead of what the problems are.

Women who quit Instagram, no longer exposed to direct evaluative feedback about their images, reported significantly higher levels of life satisfaction and positive affect

Taking a Short Break from Instagram: The Effects on Subjective Well-Being. Giulia Fioravanti, Alfonso Prostamo, and Silvia Casale. Cyberpsychology, Behavior, and Social Networking, Dec 17 2019. https://doi.org/10.1089/cyber.2019.0400

Abstract: This study investigated whether abstaining from Instagram (Ig) affects subjective well-being among young men and women. By comparing an intervention group (40 participants who took a break from Ig for a week) with a control group (40 participants who kept using Ig), we found that women who quit Ig reported significantly higher levels of life satisfaction and positive affect than women who kept using it. Whereas the increase in positive affect depended on social appearance comparison, life satisfaction rose independently of the tendency to compare one's own appearance with others. It is possible that users who are no longer exposed to direct evaluative feedback about their images on Ig—be it related to their appearance, habits, or opinions—can witness an increase in their global satisfaction levels. No significant effects were found among men.

Participants rated the neuroscience abstract as having stronger findings and as being more valid and reliable than the parapsychology abstract, despite the fact that the two abstracts were identical

Bias in the Evaluation of Psychology Studies: A Comparison of Parapsychology Versus Neuroscience. Bethany Butzer. EXPLORE, December 28 2019. https://doi.org/10.1016/j.explore.2019.12.010

Abstract: Research suggests that scientists display confirmation biases with regard to the evaluation of research studies, in that they evaluate results as being stronger when a study confirms their prior expectations. These biases may influence the peer review process, particularly for studies that present controversial findings. The purpose of the current study was to compare the evaluation of a parapsychology study versus a neuroscience study. One hundred participants with a background in psychology were randomly assigned to read and evaluate one of two virtually identical study abstracts (50 participants per group). One of the abstracts described the findings as if they were from a parapsychology study, whereas the other abstract described the findings as if they were from a neuroscience study. The results revealed that participants rated the neuroscience abstract as having stronger findings and as being more valid and reliable than the parapsychology abstract, despite the fact that the two abstracts were identical. Participants also displayed confirmation bias in their ratings of the parapsychology abstract, in that their ratings were correlated with their scores on transcendentalism (a measure of beliefs and experiences related to parapsychology, consciousness and reality). Specifically, higher transcendentalism was associated with more favorable ratings of the parapsychology abstract, whereas lower transcendentalism was associated with less favorable ratings. The findings suggest that psychologists need to be vigilant about potential biases that could impact their evaluations of parapsychology research during the peer review process.

Keywords: Bias, Research, Psychology, Confirmation bias, Parapsychology, Psi, Neuroscience

No strong evidence for a causal role of testosterone in promoting human aggression; baseline and change-based correlations are positive but weak

Is testosterone linked to human aggression? A meta-analytic examination of the relationship between baseline, dynamic, and manipulated testosterone on human aggression. S. N. Geniole et al. Hormones and Behavior, December 28 2019, 104644. https://doi.org/10.1016/j.yhbeh.2019.104644

Highlights
• Baseline testosterone is positively (but weakly) correlated with human aggression. The relationship between baseline testosterone and aggression is significantly stronger in male vs. female samples.
• Context-dependent changes in testosterone are positively (but weakly) correlated with human aggression. The relationship between changes in testosterone and aggression is significantly stronger in male vs. female samples.
• No strong evidence for a causal role of testosterone in promoting human aggression

Abstract: Testosterone is often considered a critical regulator of aggressive behaviour. There is castration/replacement evidence that testosterone indeed drives aggression in some species, but causal evidence in humans is generally lacking and/or—for the few studies that have pharmacologically manipulated testosterone concentrations—inconsistent. More often researchers have examined differences in baseline testosterone concentrations between groups known to differ in aggressiveness (e.g., violent vs non-violent criminals) or within a given sample using a correlational approach. Nevertheless, testosterone is not static but instead fluctuates in response to cues of challenge in the environment, and these challenge-induced fluctuations may more strongly regulate situation-specific aggressive behaviour. Here, we quantitatively summarize literature from all three approaches (baseline, change, and manipulation), providing the most comprehensive meta-analysis of these testosterone-aggression associations/effects in humans to date. Baseline testosterone shared a weak but significant association with aggression (r = 0.054, 95% CIs [0.028, 0.080]), an effect that was stronger and significant in men (r = 0.071, 95% CIs [0.041, 0.101]), but not women (r = 0.002, 95% CIs [−0.041, 0.044]). Changes in T were positively correlated with aggression (r = 0.108, 95% CIs [0.041, 0.174]), an effect that was also stronger and significant in men (r = 0.162, 95% CIs [0.076, 0.246]), but not women (r = 0.010, 95% CIs [−0.090, 0.109]). The causal effects of testosterone on human aggression were weaker yet, and not statistically significant (r = 0.046, 95% CIs [−0.015, 0.108]). We discuss the multiple moderators identified here (e.g., offender status of samples, sex) and elsewhere that may explain these generally weak effects. We also offer suggestions regarding methodology and sample sizes to best capture these associations in future work.
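
For readers wanting to see how confidence intervals of this kind are typically obtained for pooled correlations, the sketch below back-transforms a Fisher-z interval to the r scale. It is illustrative only; the standard error used is hypothetical, not a value reported in the paper.

```python
# Illustrative only: a 95% CI for a pooled correlation via Fisher's z transform,
# the usual approach in meta-analyses of r-type effect sizes. The standard error
# below is hypothetical and is not taken from Geniole et al.
import math

def r_confidence_interval(r, se_z, crit=1.96):
    """Approximate CI for a correlation r, given its standard error on the z scale."""
    z = math.atanh(r)                     # Fisher z of the pooled correlation
    lo, hi = z - crit * se_z, z + crit * se_z
    return math.tanh(lo), math.tanh(hi)   # back-transform to the r scale

# Example with the reported pooled baseline estimate r = 0.054 and a hypothetical
# z-scale standard error of 0.013:
print(r_confidence_interval(0.054, 0.013))   # approximately (0.029, 0.079)
```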

Keywords: Challenge hypothesis, Androgens, Sex differences, Pharmacological challenge

Check also Hormones in speed-dating: The role of testosterone and cortisol in attraction. Leander van der Meij et al. Hormones and Behavior, Volume 116, November 2019, 104555. https://www.bipartisanalliance.com/2019/11/hormones-in-speed-dating-role-of.html

Having too many men – although sex ratio skew might increase competition and violence among some members of the population, overall levels of those same behaviors might decline

Ecological Sex Ratios and Human Mating. Jon K. Maner, Joshua M. Ackerman. Trends in Cognitive Sciences, December 28 2019. https://doi.org/10.1016/j.tics.2019.11.008

Abstract: The ratio of men to women in a given ecology can have profound influences on a range of interpersonal processes, from marriage and divorce rates to risk-taking and violent crime. Here, we organize such processes into two categories – intersexual choice and intrasexual competition – representing focal effects of imbalanced sex ratios.

Keywords: evolution, psychology, relationships, cognition, sexuality, competition

SR = sex ratio

Conclusion
Although evolutionary psychology is sometimes viewed as focusing exclusively on phenomena assumed to be invariant across time, people, and cultures (psychological universals), several lines of research demonstrate the important role of ecological contingencies [12]. Humans display enormous flexibility, calibrating their behavior in a facultative manner to variables in the local environment [13]. Ecological SRs reflect a key variable to which men and women adjust their mating behavior. Those adjustments are highly strategic and are aimed at enhancing reproductive success given features of the mating environment and of the individual person. This last insight helps answer our opening question about the consequences of having too many men – although SR skew might increase competition and violence among some members of the population, overall levels of those same behaviors might decline. Identifying individual differences and situational factors that moderate SR effects, as well as proximate cognitive mechanisms that underlie those effects, provides a fertile ground for future research. Future work would also benefit from delineating more clearly the specific social cues and population boundaries that people use to assess SRs (Box 1).

Box 1. Unanswered Questions about Ecological SRs
• What specific cues and population boundaries do people use to assess SRs?
o Do people base their assessments on immediate interaction partners, their local communities, or broader social/ecological borders?
o How should researchers navigate difficulties associated with analysis of data aggregated at population levels (e.g., problems can arise when inferring individual processes from regionally aggregated data)? [7]

• What degree of SR skew is required to affect behavioral outcomes?
o Few systematic analyses of this question exist in humans. Do minor imbalances in SRs affect behavior, or are larger and more obvious imbalances required?

• When should various types of SRs (e.g., adult SR vs. operational SR) be distinguished theoretically and empirically?
o The ecological literature focuses primarily on adult SRs, but these sometimes include nonreproducing individuals less relevant for mating dynamics (e.g., elderly, sexual minorities).

• On what key ecological, cultural, and individual difference factors are SR effects contingent?
o Relatively little work has been done to address this question, but preliminary evidence supports certain factors (e.g., mate value, social status, conflict levels) and not others (e.g., life expectancy, wealth).

• When do SRs have effects beyond those immediately predicated on mating dynamics?
o What other downstream behaviors and cognitions are influenced by SR skew? Some evidence suggests, for example, that SRs affect distal outcomes including investment behavior, consumer spending, career choices, and health decisions.

The avoidance of obese people is well documented, but its psychological basis is poorly understood; we think obesity triggers emotional and avoidant-based responses equivalent to those triggered by a contagious disease

Is obesity treated like a contagious disease? Caley Tapp, Megan Oaten, Richard J. Stevenson, Stefano Occhipinti, Ravjinder Thandi. Journal of Applied Social Psychology, December 27 2019. https://doi.org/10.1111/jasp.12650

Abstract: The behavioral avoidance of people with obesity is well documented, but its psychological basis is poorly understood. Based upon a disease avoidance account of stigmatization, we tested whether a person with obesity triggers equivalent self‐reported emotional and avoidant‐based responses as a contagious disease (i.e., influenza). Two hundred and sixty‐four participants rated images depicting real disease signs (i.e., person with influenza), false alarms (i.e., person with obesity), person with facial bruising (i.e., negative control), and a healthy control for induced emotion and willingness for contact along increasing levels of physical proximity. Consistent with our prediction, as the prospect for contact became more intimate, self‐reported avoidance was equivalent in the influenza and obese target conditions, with both significantly exceeding reactions to the negative and healthy controls. In addition, participants reported greatest levels of disgust toward the obese and influenza target conditions. These results are consistent with an evolved predisposition to avoid individuals with disease signs. Implicit avoidance occurs even when participants know explicitly that such signs—here, obese body form—result from a noncontagious condition. Our findings provide important evidence for a disease avoidance explanation of the stigmatization of people with obesity.


4 | DISCUSSION
We predicted that participant desire for avoidance of a person with
obesity and a person with influenza would significantly exceed
avoidant-based responses toward healthy and negative controls
and that this avoidance desire would increase as the prospect for
contact became more intimate, and that this effect would be more
pronounced for the obesity and influenza targets. Consistent with
our prediction, when the prospect for contact was intimate (i.e.,
kissing, sexual activity), self-reported avoidance was equivalent in
the influenza and obesity targets, with both significantly exceeding reactions to the negative and healthy controls. By contrast,
participants were significantly more willing to have more intimate
levels of contact with the bruise or healthy target. As the prospect for contact became sexualized (i.e., kiss on the mouth and
sex), both male and female participants reported the greatest,
and equivalent, avoidance toward the obesity and influenza targets, relative to the negative and healthy controls. When the contact involved real physical intimacy, participants reacted toward
the obesity target as if that person were a contagious disease carrier.
Consistent with previous research examining false disease signs
(e.g., Kouznetsova et al., 2012; Ryan et al., 2012), participants correctly indicated that obesity is not a contagious condition and that
influenza is a contagious condition.
In support of a disease avoidance explanation, our results also
show that participants felt higher levels of disgust when exposed to
both a person with obesity and a person with influenza, compared
to the healthy and bruise targets. Although previous research has
found gender differences in trait disgust predicting responses
toward people with obesity (Fisher et al., 2013; Lieberman et al.,
2012), no differences in felt disgust between male and female participants emerged in our study. Gender differences did emerge for
ratings of fear and anger, and this was by and large driven by the
bruise target—female participants felt more fear toward the bruise
target, whereas male participants felt more anger in response to
the bruise target. We suggest that this is due to the differential
subjective meaning of a facial bruise for men and women, in that
a man with a bruised face implies that he has been involved in a
fight, whereas a woman with a bruise is more likely to be viewed
as a victim of violence. While men and women differed in terms of
their anger and fear responses toward the bruise target, they did
not significantly differ in terms of their disgust or avoidance responses toward the bruise target; thus, the differences in emotion
expressed toward the bruise target are unlikely to be the driver of
participant avoidance responses.
As this study obtained willingness for physical contact via self-reports, future research should examine whether the self-reported
desire to avoid intimate contact with people with obesity demonstrated in the present study is expressed behaviorally. Although it
is clearly not possible to examine intimate levels of physical contact
in an experiment, there are other methods of assessing whether
disgust-driven avoidance behavior occurs. A study conducted by
Ryan et al. (2012) utilized a behavioral outcome measure to compare
responses to a person with a facial birthmark and a person with influenza, but this type of method has yet to be extended to other
nonnormative body features, such as obesity.
A limitation of the present research was that we did not gather information about participants' own weight, which meant that we were
unable to examine the effect of participant weight status on the effects of interest. However, there is a growing body of evidence which
suggests that people with obesity themselves hold negative stereotypes about people with obesity (e.g., Papadopoulos & Brennan,
2015; Wang, Brownell, & Wadden, 2004). Thus, it is unlikely that
differential effects across levels of weight would exist with regard
to the desire to avoid physical contact with a person with obesity.
Future research should include appropriate measures of participant
weight in order to provide further empirical evidence regarding the
effect of participant weight on stigmatization of people with obesity,
with a particular focus on intimate levels of physical contact.
Future research should also consider the role of relevant individual differences and make use of designs that allow this to be examined. Differences in levels of perceived vulnerability to disease
or trait levels of disgust may moderate the findings of the present
research. It is likely that people with higher levels of perceived vulnerability to disease or higher levels of trait disgust would display
even more of a desire for avoidance, and this effect should exist for
both a person with influenza and a person with obesity. In addition,
it would be valuable for future research to incorporate a measure of
participant disgust at the prospect of each level of physical contact,
rather than just overall levels of disgust felt toward the target. This
would allow for a more fine-grained exploration of the role of disgust
in the avoidance processes demonstrated.
In conclusion, the finding of greatest desire to avoid intimate
physical contact with the obese and influenza targets, in combination with the finding that both the obese and influenza targets also
generated the greatest self-reported disgust, suggests the activation of a disease avoidance system. The display of some superficial
form of physical nonnormality leads observers to respond to the person displaying it
as though they were a contagious disease carrier (Kouznetsova et al.,
2012; Schaller & Duncan, 2007). Our results show that a person
with obesity appears to be treated as though they possess a disease
cue—a false alarm in this case. A likely explanation is that obese body
form was heuristically perceived as a sign of disease that triggered
a disgust and avoidance response as the prospect for disease transmission increased (i.e., as intimacy of physical contact increased).
These findings make an important contribution to our understanding
of the psychological basis underlying the stigmatization of people
with obesity. It would be useful for interventions aimed at reducing
stigma toward people with obesity to take a disease avoidance explanation into account, particularly with regard to the role of disgust.

From 2013... Of the 7,000 languages spoken today, some 2,500 are generally considered endangered; this vastly underestimates the danger: less than 5% of all languages can still ascend to the digital realm

Kornai A (2013) Digital Language Death. PLoS ONE 8(10): e77056. https://doi.org/10.1371/journal.pone.0077056

Abstract: Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide.

Conclusions
We have machine classified the world’s languages as digitally ascending (including all vital, thriving, and borderline cases) or not, and concluded, optimistically, that the former class is at best 5% of the latter. Broken down to individual languages and language groups the situation is quite complex and does not lend itself to a straightforward summary. In our subjective estimate, no more than a third of the incubator languages will make the transition to the digital age. As the example of the erstwhile Klingon wikipedia (now hosted on Wikia) shows, a group of enthusiasts can do wonders, but it cannot create a genuine community. The wikipedia language policy, https://meta.wikimedia.org/wiki/Language_proposal_policy, demanding that “at least five active users must edit that language regularly before a test project will be considered successful” can hardly be more lenient, but the actual bar is much higher. Wikipedia is a good place for digitally-minded speakers to congregate, but the natural outcome of these efforts is a heritage project, not a live community.

A community of wikipedia editors that work together to anchor to the web the culture carried by the language is a necessary but insufficient condition of true survival. By definition, digital ascent requires use in a broad variety of digital contexts. This is not to deny the value of heritage preservation, for the importance of such projects can hardly be overstated, but language survival in the digital age is essentially closed off to local language varieties whose speakers have at the time of the Industrial Revolution already ceded both prestige and core areas of functionality to the leading standard koinés, the varieties we call, without qualification, French, German, and Italian today.

A typical example is Piedmontese, still spoken by some 2–3 m people in the Torino region, and even recognized as having official status by the regional administration of Piedmont, but without any significant digital presence. More closed communities perhaps have a better chance: Faroese, with less than 50 k speakers, but with a high quality wikipedia, could be an example. There are glimmers of hope, for example [2] reported 40,000 downloads for a smartphone app to learn West Flemish dialect words and expressions, but on the whole, the chances of digital survival for those languages that participate in widespread bilingualism with a thriving alternative, in particular the chances of any minority language of the British Isles, are rather slim.

In rare cases, such as that of Kurdish, we may see the emergence of a digital koiné in a situation where today separate Northern (Kurmanji), Central (Sorani), and Southern (Kermanshahi) versions are maintained (the latter as an incubator). But there is no royal road to the digital age. While our study is synchronic only, the diachronic path to literacy and digital literacy is well understood: it takes a Caxton, or at any rate a significant publishing infrastructure, to enforce a standard, and it takes many years of formal education and a concentrated effort on the part of the community to train computational linguists who can develop the necessary tools, from transliterators (such as already powering the Chinese wikipedia) to spellcheckers and machine translation for their language. Perhaps the most remarkable example of this is Basque, which enjoys the benefits of a far-sighted EU language policy, but such success stories are hardly, if at all, relevant to economically more blighted regions with greater language diversity.

The machine translation services offered by Google are an increasingly important driver of cross-language communication. As expected, the first several releases stayed entirely in the thriving zone, and to this day all language pairs are across vital and thriving languages, with the exception of French – Haitian Creole. Were it not for the special attention DARPA, one of the main sponsors of machine translation, devoted to Haitian Creole, it is dubious we would have any MT aimed at this language. There is no reason whatsoever to suppose the Haitian government would have, or even could have, sponsored a similar effort [32]. Be that as it may, Google Translate for any language pair currently likes to have gigaword corpora in the source and target languages and about a million words of parallel text. For vital languages this is not a hard barrier to cross. We can generally put together a gigaword corpus just by crawling the web, and the standardly translated texts form a solid basis for putting together a parallel corpus [33]. But for borderline languages this is a real problem, because online material is so thinly spread over the web that we need techniques specifically designed to find it [16], and even these techniques yield only a drop in the bucket: instead of the gigaword monolingual corpora that we would need, the average language has only a few thousand words in the Crúbadán crawl. To make matters worse, the results of this crawl are not available to the public for fear of copyright infringement, yet in the digital age what cannot be downloaded does not exist.
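
As a purely illustrative aside (not from Kornai's paper), the rough corpus requirements mentioned here can be expressed as a trivial threshold check; the function name, the thresholds as constants, and the example counts below are all hypothetical.

```python
# Illustrative sketch: flag whether a language pair clears the rough corpus sizes
# the text mentions for statistical MT (~1e9 monolingual words per language and
# ~1e6 words of parallel text). Thresholds and example counts are illustrative.
GIGAWORD = 1_000_000_000      # ~1 billion monolingual words per language
PARALLEL_MIN = 1_000_000      # ~1 million words of parallel text

def mt_ready(src_words: int, tgt_words: int, parallel_words: int) -> bool:
    """True if both monolingual corpora and the parallel corpus meet the thresholds."""
    return (src_words >= GIGAWORD
            and tgt_words >= GIGAWORD
            and parallel_words >= PARALLEL_MIN)

# A vital language pair versus a borderline language whose crawl yields only a
# few thousand words (the Crúbadán-scale situation described above).
print(mt_ready(5_000_000_000, 3_000_000_000, 2_000_000))  # True
print(mt_ready(5_000_000_000, 8_000, 0))                  # False
```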

The digital situation is far worse than the consensus figure of 2,500 to 3,000 endangered languages would suggest. Even the most pessimistic survey [34] assumed that as many as 600 languages, 10% of the population, were safe, but reports from the field increasingly contradict this. For British Columbia, [35] writes:

Here in BC, for example, the prospect of the survival of the native languages is nil for all of the languages other than Slave and Cree, which are somewhat more viable because they are still being learned by children in a few remote communities outside of BC. The native-language-as-second-language programs are so bad that I have NEVER encountered a child who has acquired any sort of functional command (and I don’t mean fluency - I mean even simple conversational ability or the ability to read and understand a fairly simple paragraph or non-ritual bit of conversation) through such a program. I have said this publicly on several occasions, at meetings of native language teachers and so forth, and have never been contradicted. Even if these programs were greatly improved, we know, from e.g. the results of French instruction, to which oodles of resources are devoted, that we could not expect to produce speakers sufficiently fluent to marry each other, make babies, and bring them up speaking the languages. It is perfectly clear that the only hope of revitalizing these languages is true immersion, but there are only two such programs in the province and there is little prospect of any more. The upshot is that the only reasonable policy is: (a) to document the languages thoroughly, both for scientific purposes and in the hope that perhaps, at some future time, conditions will have changed and if the communities are still interested, they can perhaps be revived then; (b) to focus school programs on the written language as vehicle of culture, like Latin, Hebrew, Sanskrit, etc. and on language appreciation. Nonetheless, there is no systematic program of documentation and instructional efforts are aimed almost entirely at conversation.

Cree, with a population of 117,400 (2006), actually has a wikipedia at http://cr.wikipedia.org but the real ratio is only 0.02, suggestive of a hobbyist project rather than a true community, an impression further supported by the fact that the Cree wikipedia has gathered less than 60 articles in the past six years. Slave (3,500 speakers in 2006) is not even in the incubator stage. This is to be compared to the over 30 languages listed by the Summer Institute of Linguistics for BC. In reality, there are currently less than 250 digitally ascending languages worldwide, and about half of the borderline cases are like Moroccan Arabic (ary), low prestige spoken dialects of major languages whose signs of vitality really originate with the high prestige acrolect. This suggests that in the long run no more than a third of the borderline cases will become vital. One group of languages that is particularly hard hit are the 120+ signed languages currently in use. Aside from American Sign Language, which is slowly but steadily acquiring digital dictionary data and search algorithms [36], it is perhaps the emerging International Sign [37] that has the best chances of survival.

There could be another 20 spoken languages still in the wikipedia incubator stage or even before that stage that may make it, but every one of these will be an uphill struggle. Of the 7,000 languages still alive, perhaps 2,500 will survive, in the classical sense, for another century. With only 250 digital survivors, all others must inevitably drift towards digital heritage status (Nynorsk) or digital extinction (Mandinka). This makes language preservation projects such as http://www.endangeredlanguages.com even more important. To quote from [6]:

Each language reflects a unique world-view and culture complex, mirroring the manner in which a speech community has resolved its problems in dealing with the world, and has formulated its thinking, its system of philosophy and understanding of the world around it. In this, each language is the means of expression of the intangible cultural heritage of people, and it remains a reflection of this culture for some time even after the culture which underlies it decays and crumbles, often under the impact of an intrusive, powerful, usually metropolitan, different culture. However, with the death and disappearance of such a language, an irreplaceable unit in our knowledge and understanding of human thought and world-view is lost forever.

Unfortunately, at a practical level heritage projects (including wikipedia incubators) are haphazard, with no systematic programs of documentation. Resources are often squandered, both in the EU and outside, on feel-good revitalization efforts that make no sense in light of the preexisting functional loss and economic incentives that work against language diversity [38].

Evidently, what we are witnessing is not just a massive die-off of the world’s languages; it is the final act of the Neolithic Revolution, with the urban agriculturalists moving on to a different, digital plane of existence, leaving the hunter-gatherers and nomad pastoralists behind. As an example, consider Komi, with two wikipedias corresponding to the two main varieties (Permyak, 94,000 speakers, and Zyrian, 293,000 speakers), both with alarmingly low real ratios. Given that both varieties have several dialects, some already extinct and some clearly still in use, the best hope is for a koiné to emerge around the dialect of the main city, Syktyvkar. Once the orthography is standardized, the university (where the main language of education is Russian) can in principle turn out computational linguists ready to create a spellchecker, an essential first step toward digital literacy [39]. But the results will benefit the koiné speakers, and the low prestige rural Zyrian dialects are likely to be left behind.

What must be kept in mind is that the scenario described for Komi is optimistic. There are several hundred thousand speakers, still amounting to about a quarter of the local population. There is a university. There are strong economic incentives (oil, timber) to develop the region further. But for the 95% of the world’s languages where one or more of these drivers are missing, there is very little hope of crossing the digital divide.