Sunday, December 29, 2019

Review of Jennifer A. Jones's The Browning of the New South

Review of Jennifer A. Jones's The Browning of the New South. Angel Adams Parham. Social Forces 1–4, soz141, Dec 2019, https://doi.org/10.1093/sf/soz141

Jones’s main argument is that context matters—place, class, racial composition of the area—all of these play a role in shaping migrants’ social and racial adjustment. [...] Jones shows how Latinx immigrants were welcomed at first and then, as the local economic and political context shifted, were abruptly un-welcomed. The term she uses to describe this un-welcoming is “reverse incorporation”. The two dimensions of reverse incorporation are: institutional closure and the souring of public opinion (69). As Latinx immigrants find themselves in the antagonistic process of being formally unwelcomed, they begin to cement ties with black Americans.

It is indeed striking how Janus-faced a reception whites in Winston-Salem gave to immigrant community members, most of whom were from Mexico. When immigrants first began to arrive, the welcome could not have been more enthusiastic: businesses were ready and willing to hire them, with or without documentation; a local bank ran a concerted campaign to make it as easy as possible for the newcomers to use its services; non-immigrant community members were profiled in the press for going the extra mile to ease newcomers' transition into the community; and it was easy to obtain a driver's license even without legal papers. On top of all this, Winston-Salem's leaders took a trip to Guanajuato, Mexico, with the express purpose of gaining 'a deeper understanding of the culture of Mexico's immigrants' (58).

Then, beginning around 2005, the welcome mat was slowly, and then rudely, yanked from beneath these immigrants' feet. Key changes included the emergence of the post-9/11 state after 2001 and the drastic weakening of the economy with the recession that began in 2008. The drying up of the labor market made the presence of immigrants far less attractive than it had been when employers were scrambling for workers. In addition, the post-9/11 state introduced new legal restrictions and heightened surveillance that were devastating to the immigrant community, both documented and undocumented. As this process of reverse incorporation proceeded, Latinx immigrants made conscious attempts to join forces with black Americans, whom they knew to have suffered ongoing discrimination at the hands of the white majority. Churches and non-profit agencies held meetings and events to strengthen these ties between Latinos and blacks. In addition, Jones found that most of her interviewees held very positive views of blacks but harbored relatively cool feelings toward whites, whom they perceived to be socially distant. On the whole, Jones's findings are compelling: blacks and Latinos did band together in what she terms 'minority linked fate,' and it is clear that the local context mattered quite significantly in shaping the ways Latinos evaluated and responded to the racial terrain of Winston-Salem. [...]

First, the moderate critique. Early on, Jones enjoins us to be sensitive to the variety of local contexts immigrants find when they settle in different parts of the United States. She notes that while there are some broad patterns, important differences between the configuration of settlement in Los Angeles and New York versus Charlotte and Atlanta should lead us to an analysis that frames racial change as "a rapidly shifting patchwork of race relations, rather than a unifying framework ... how groups relate to one another and access resources is fluid and context dependent" (8). While all of this is certainly true, one suspects that it would be possible to advance a working analytical framework that would help future research test which factors are more or less likely to result in distancing from blackness, and which are more likely to result in the strategy of minority linked fate Jones finds in Winston-Salem. Although the book is based on one in-depth case study, Jones has a command of the literature conducive to drawing stronger conclusions about which patterns are likely to lead to one outcome and which to another. As it is, we are left with an analysis of one case that challenges the mainstream immigration literature but does little to help us understand how her case might be profitably linked to others. Indeed, even in the closing pages of the book she continues to assert that "new racial formation patterns will not be represented by national color lines, but by patchwork quilts of race relations determined by local conditions" (196). These opening and closing declarations make it seem that racial formations will be completely random. I do not think, however, that Jones believes this.
Even in the four cases she mentions (Los Angeles and New York versus Charlotte and Atlanta), we see clear distinctions between major coastal cities with an established immigrant history and smaller southern cities that are newer immigrant destinations. Distinct regional racial histories may well be part of the patterned difference we should examine. There are, moreover, other axes of difference that could be used to at least tentatively propose a useful comparative framework to guide our thinking about immigrants' racial integration into various kinds of communities.

In addition to this request for greater boldness in proposing patterns that could be examined comparatively by future researchers, I also propose an alternative reading of Jones’s Winston-Salem data. I must state at the outset that this alternative reading makes no challenge to the research findings per se. Rather, it suggests a way of looking at the data that reveals a different kind of picture— much like the classic case of a drawing that can be seen either as a vase or as two people in profile facing each other.

As the argument is currently framed, Jones counterposes narratives of incorporation versus reverse incorporation, and of immigrant distancing versus immigrant bonding with black Americans. According to the mainstream account in the literature, immigrants of all kinds find it advantageous to distance themselves from black Americans. If they have the capital of light skin, they may invest it in whiteness; if not, they engage in cultural options that symbolically distance them from blackness. Jones claims that this is only one option and that, in certain contexts, friendly relations between blacks and Latinx immigrants are quite possible and even likely.

The alternative reading proposed here, however, is that both the mainstream account and the one Jones offers in her book are two sides of the same coin, in which immigrants respond carefully to the default settings of racism and white supremacy they encounter in the United States. In some cases, the tools that sustain these twin systems are latent, lying in wait beneath the surface of everyday life, while in others they are aggressively deployed. Immigrant responses to the racial terrain vary according to whether those tools are latent or deployed. Entering a setting such as Winston-Salem in the 1980s–1990s, for instance, where racism and white supremacy are largely latent, immigrants can embrace aspirational whiteness or maintain neutrality in race relations and racial positioning because the stakes are relatively low. But then, as economic and security crises shake the white community, these latent tools are taken out and deployed. Under these new conditions, Latino newcomers find it much more difficult to avoid the question of race or to engage in aspirational whiteness and distancing from blacks. At such a time, cross-racial linkages become more important and advantageous. If this alternative reading is correct, then Jones's findings are not as far from the mainstream account as she thinks. It would still be the case that the default position for most Latino immigrants is to aspire toward the privileges of whiteness and to distance themselves from blackness when conditions allow.

It is, admittedly, difficult to be certain that this alternative reading works in Jones’s case. While she presents plenty of data from Latinx interviewees that is favorable to blacks, it is difficult to know how much of this favoring is due to enduring the difficulties of reverse incorporation and how much of this friendly sentiment was long-standing even before the difficulties emerged. In the end, however, whether the alternative reading does or does not apply, Jones presents a strong case that shows us that place and context matter and that the racial future cannot be read simplistically from the racial past.

YouTube radicalization: The recommendation algorithm actively discourages viewers from visiting radicals/extremists, favors left-leaning or neutral channels & mainstream media over independent channels

Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization. Mark Ledwich, Anna Zaitsev. arXiv Dec 24 2019. https://arxiv.org/abs/1912.11211

Abstract: The role that YouTube and its behind-the-scenes recommendation algorithm plays in encouraging online radicalization has been suggested by both journalists and academics alike. This study directly quantifies these claims by examining the role that YouTube's algorithm plays in suggesting radicalized content. After categorizing nearly 800 political channels, we were able to differentiate between political schemas in order to analyze the algorithm traffic flows out and between each group. After conducting a detailed analysis of recommendations received by each channel type, we refute the popular radicalization claims. To the contrary, these data suggest that YouTube's recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels with slant towards left-leaning or politically neutral channels. Our study thus suggests that YouTube's recommendation algorithm fails to promote inflammatory or radicalized content, as previously claimed by several outlets.

V. LIMITATIONS AND CONCLUSIONS

There are several limitations to our study that must be considered. First, the main limitation is the anonymity of the data set and the recommendations. The recommendations the algorithm provided were not based on videos watched over extensive periods. We expect, and have anecdotally observed, that the recommendation algorithm becomes more fine-tuned and context-specific after each video that is watched. We currently have no way of collecting such information from individual user accounts, but our study shows that the anonymous user is generally directed towards more mainstream content than extreme content. Similarly, anecdotal evidence from a personal account shows that YouTube suggests content very similar to previously watched videos while also directing traffic towards more mainstream channels. That is, contrary to prior claims, the algorithm does not appear to stray into suggesting videos several degrees away from a user's normal viewing habits. Second, the video categorization in our study is partially subjective. Although we have taken several measures to bring objectivity into the classification and analyzed agreement between labelers by calculating intraclass correlation coefficients, there is no way to eliminate bias entirely. There is always the possibility of disagreement and ambiguity in categorizations of political content. We therefore welcome suggestions to help us improve our classification. In conclusion, our study shows that one cannot proclaim that YouTube's algorithm, in its current state, is leading users towards more radical content. There is clearly plenty of content on YouTube that one might view as radicalizing or inflammatory. However, the responsibility for that content lies with the content creators and the consumers themselves. Shifting the responsibility for radicalization from users and content creators to YouTube is not supported by our data.
The data show that YouTube does the exact opposite of what the radicalization claims suggest. YouTube engineers have said that 70 percent of all views are based on recommendations [38]. When this remark is combined with the fact that the algorithm clearly favors mainstream media channels, we believe it would be fair to state that the majority of views are directed towards left-leaning mainstream content. We agree with Munger and Phillips (2019) that scrutiny for radicalization should be directed at the content creators and the demand and supply for radical content, not the YouTube algorithm. If anything, the current iteration of the recommendation algorithm is working against the extremists. Nevertheless, YouTube has conducted several deletion sweeps targeting extremist content [29]. These actions might be ill-advised. Deleting extremist channels from YouTube does not reduce the supply of the content [50]. These banned content creators migrate to other, more permissive video-hosting sites. For example, a few channels that were initially included in the Alt-right category of the Ribeiro et al. (2019) paper are now gone from YouTube but still exist on alternative platforms such as BitChute. The danger we see here is that there are no algorithms directing viewers from extremist content towards more centrist materials on these alternative platforms or the Dark Web, making deradicalization efforts more difficult [51]. We believe that YouTube has the potential to act as a deradicalization force. However, it seems that the company will have to decide first whether the platform is meant for independent YouTubers or is just another outlet for mainstream media.
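The inter-rater agreement check the authors mention (intraclass correlation between labelers) can be sketched as follows. This is a minimal illustrative computation of ICC(2,1), not the authors' actual pipeline, and the ratings matrix and 1–5 "slant" scale are invented for this example; their real data and labels are on their GitHub.

```python
import numpy as np

def icc2_1(X):
    """ICC(2,1): two-way random-effects, single-rater agreement
    for an n_subjects x n_raters ratings matrix."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    # Between-subjects, between-raters, and residual mean squares
    ss_rows = k * np.sum((X.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((X.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((X - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical example: 4 channels scored on a 1-5 slant scale
# by 3 labelers who mostly agree.
ratings = [[1, 1, 2],
           [2, 2, 2],
           [3, 3, 3],
           [4, 4, 5]]
print(round(icc2_1(ratings), 3))  # high agreement, close to 1
```

Values near 1 indicate that the labelers' channel categorizations are nearly interchangeable; in practice a library routine such as pingouin's `intraclass_corr` would also report confidence intervals.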


A. The Visualization and Other Resources

Our data, channel categorization, and data analysis used in this study are all available on GitHub for anyone to see. Please visit the GitHub page for links to the data and the data visualization. We welcome comments, feedback, and critique on the channel categorization as well as the other methods applied in this study.

B. Publication Plan

This paper has been submitted for consideration at First Monday.



Response to critique on our paper “Algorithmic Extremism: Examining YouTube’s Rabbit Hole of Radicalization” https://medium.com/@anna.zaitsev/response-to-critique-on-our-paper-algorithmic-extremism-examining-youtubes-rabbit-hole-of-8b53611ce903

There is less support for redistribution & race-targeted aid among blacks in the U.S. today than in the 70s; anti-black stereotypes have had consequences for political attitudes for whites, but for blacks too

Inequality, Stereotypes and Black Public Opinion: The Role of Distancing. Emily M. Wager. http://www.emilymwager.com/uploads/1/2/7/2/127261100/inequality_stereotypes_and_black_public_opinion.pdf

Abstract: There is less support for redistribution and race-targeted aid among blacks in the U.S. today than in the 1970s, despite persistent and enduring racial and economic disparities. Why? I argue that anti-black stereotypes suggesting blacks are lazy and reliant on government assistance have not only had consequences for political attitudes of whites but blacks as well. I note that as stigmas persist, they can have durable effects on the groups they directly stigmatize. To combat being personally stereotyped, some members of stigmatized groups will practice “defensive othering,” where one accepts a negative stereotype of one’s own group and simultaneously distances oneself from that stereotype. I illustrate the ways in which defensive othering plays a role in black attitudes toward redistribution using individual and aggregate level survey data, as well as qualitative interviews.


6 Conclusion
When talking to ordinary people, I observed how Americans, including blacks, expressed disapproval of the high level of access citizens have to government assistance, recited scripts about meritocracy, and brought up others they knew that had “abused the system.” However, these opinions must be placed in a broader historical context, one in which for decades whites have restricted blacks’ access to distributive and redistributive programs, reinforced racialized images of government assistance recipients and justified racial inequalities through claims of meritocracy. These racist messages are part of the “smog in the air” we all breathe.

This study makes the argument that the stigmatization of blacks as lazy recipients of government assistance has the potential to shape blacks’ own reported attitudes about the role of government in addressing inequalities. I argue this should be seen as a condition of internalized racial oppression, specifically “defensive othering,” which involves the acceptance of negative group stereotypes while simultaneously distancing oneself from that stereotype. Internalized oppression is not a reflection of weakness, ignorance or inferiority on the part of the subordinate group. Instead, as Pyke (2010) succinctly states, “all systems of inequality are maintained and reproduced, in part, through their internalization by the oppressed” (p. 552). This study shows that belief in blacks’ unwillingness to work is not an uncommon feature in black public opinion today. Consequently, acceptance of this stereotype among black individuals leads them to be significantly less demanding of race and nonrace based government aid. This study also suggests a media environment that disproportionately characterized the poor as lazy and black may help to explain the rise in blacks’ acceptance of in-group stereotypes.

There are several avenues for future research. First, scholars might consider what individual and contextual factors contribute to the acceptance of negative in-group stereotypes and the consequences for political attitudes. Variation in racial and socioeconomic contexts, such as neighborhoods, could very well lead to public opinion change (Cohen and Dawson, 1993; Gay, 2004). Future research would also benefit from examining more robust measures of media coverage by extending the time frame and examining the content of mainstream media as well as black media. Finally, I identify stereotypes as one possible reason for the shifts we see in black public opinion. Scholars might also consider how actual experiences with social services shape political attitudes. For example, qualitative researchers have found that social service providers can purposefully lead recipients to adopt more neoliberal attitudes (Woolford and Nelund, 2013).

This study serves several purposes. First, it aims to build a deeper understanding of racial minorities’ redistributive policy preferences in a literature where they are often ignored. This disregard for black public opinion is part of a larger failure to recognize blacks as more than the object of whites’ resentment in the study of race in political science (Harris-Lacewell, 2003). Simply put, the public’s relatively weak demand for redistribution despite extreme inequality may not be explicable with only one theory. My explanation suggests that when blacks and whites are asked in surveys about redistributive policies, they are often not drawing on the same considerations. Given that blacks have been most negatively depicted in relation to these policies, they have more motivation than whites to distance themselves from freeloading stereotypes. Second, while American politics scholars have paid much attention to the role of racial biases in whites’ political attitudes, this study explores how racism can impact the attitudes of the marginally situated people racism targets. This fits in with a large literature that identifies the negative psychological and physiological consequences of stereotypes for members of stigmatized groups (some notable studies include Steele (2016); Blascovich et al. (2001); Burgess et al. (2010); Cohen and Garcia (2005); Lewis Jr and Sekaquaptewa (2016)), but is rare in the study of political attitudes and behavior. Finally, in this paper, I rely on an interpretive perspective to study public opinion, which encourages researchers not to analyze opinion and behavior as divorced from the historical and social context in which they take place.

From 2017... Under what conditions factual misperceptions may be effectively corrected: Ingroup members, specifically co-partisans and peers, are perceived to be more credible & more effective correctors

Correcting Factual Misperceptions: How Source Cues Matter. Emily Wager. Master's Thesis, University of North Carolina at Chapel Hill. 2017. https://pdfs.semanticscholar.org/46da/d633c4d636be01e5cacb1d8c6798534a1303.pdf

Abstract: From the birther movement to the push of “alternative facts” from the White House, recent events have highlighted the prominence of misinformation in the U.S. This study seeks to broaden our understanding of under what conditions factual misperceptions may be effectively corrected. Specifically, I use Social Identity Theory to argue that ingroup members, specifically co-partisans and peers, are perceived to be more credible, and in turn are more effective correctors, than outgroup members (out-partisans and elites), contingent on identity strength. I also argue that peers should be effective correctors among those with low levels of institutional trust. To test my expectations, this study employs a 2 x 2 experimental design with a control group to determine how successful various source cues are at changing factual beliefs about a hotly debated topic in the U.S.— immigration. Overall I find preliminary support for my expectations.

Discussion

The experiments in this paper help us understand how factual beliefs about politics can be changed by manipulating the source of the corrections. I find that responses to corrections from peers and co-partisans differ significantly according to subjects' group identity and trust in institutions. As a result, the corrections about immigration are most successful among those who strongly identify with the group the source is a member of, whether a co-partisan or a peer.

My findings contribute to the literature on correcting misperceptions in several respects. First, while prior corrections research has exclusively used between-subjects designs to circumvent the possibility that people feel anchored to the responses they gave before receiving a correction (Nyhan & Reifler, 2010), this study employed a within-subjects design in order to establish how subjects actually change their factual beliefs. Indeed, my findings demonstrate that substantial movement in reported factual beliefs does happen following corrections and that these beliefs typically hold over time.

Second, the results from this study corroborate previous work demonstrating that the impact of source cues on opinion varies systematically with individual identification with the source (Hartman and Weber 2009). Among those who strongly identify with their party, in-party corrections succeed in moving factual beliefs in the accurate direction, while out-party corrections lead to a "backfire" effect, pushing individuals in the opposite direction. Weak partisans, on the other hand, are receptive to corrective information from out-party members. Therefore, while the literature on the impact of ideological source cues on correcting misperceptions has offered inconsistent findings (Nyhan and Reifler 2013; Berinsky 2015), the results in this paper demonstrate that not all Republicans and Democrats respond uniformly to corrections: strength of group identification matters. While I theorize that the perceived credibility of ingroup and outgroup members is the causal mechanism at work, it would be valuable to directly study how individuals evaluate various sources on key characteristics (trustworthiness, knowledge, etc.). This would allow social scientists to gain a better understanding of precisely why certain sources are successful and others are not.

While research on source cues in political science has largely focused on partisan cues, these findings also contribute to our understanding of how voters respond to information from elites and peers. In accordance with my expectations, I find some evidence that peers are more successful correctors than elites, especially among those who strongly identify with their peer group and among those who have weak trust in institutions. These findings add to existing work (Attwell and Freeman 2015) seeking to understand how social groups can promote accuracy in cases where experts or elites appear ineffective. Future work should further explore how voters evaluate the credibility of elites and peers differently, and what other individual-level factors might explain why certain people are more receptive to political information from peers than from elites. It would also be valuable to replicate these findings with peer groups other than university students, or to use real peers rather than fabricated ones. Lastly, while peer cues in this experiment did not have the substantial impact on factual beliefs that was expected across all four statements, they should not be dismissed as irrelevant. Walsh (2004) illustrates the importance of face-to-face interactions among small peer groups in political thinking. It is possible that factual corrections in the context of these sorts of peer interactions are effective, and scholars should aim to understand the significance of such interactions in the real world.

Of course, I am mindful of the inherent limitations of the evidence presented here. The most serious limitation of this experiment was the scale used to measure the dependent variable, which confounds confidence with acceptance/rejection of false statements. For this reason it is difficult to untangle changes in acceptance or rejection of beliefs from changes in actual confidence in beliefs. For example, it is possible that a peer correction merely makes subjects more confident in their already correct beliefs, rather than encouraging a switch from acceptance to rejection of a false statement. Subsequent studies should unpack these distinctions.

There is also the question of whether individuals' reported factual beliefs are in fact sincere and not just expressive partisan cheerleading (Bullock, Gerber and Seth 2015). I argue that sincerity is largely inconsequential here. If subjects are willing to report strong confidence in falsehoods in a survey, this cannot be completely irrelevant to the way they perceive the political world. Lastly, even if misperceptions are successfully corrected, there is no guarantee that subjects' respective political attitudes actually move in a certain direction. As Gaines, Kuklinski, Quirk, Peyton and Verkuilen (2007) note, that people "update their beliefs accordingly need not imply they update their opinions accordingly [emphasis added]" (p. 971). However, because factual beliefs and attitudes are so conflated, measuring the two together in a questionnaire might have discouraged individuals from accurately updating, because they are reminded that certain beliefs are inconsistent with their worldview. The purpose of this study is only to examine the conditions under which strong beliefs in falsehoods (not issue positions) can be effectively challenged, but future work should explore how relevant factual beliefs shape opinions on immigration. While prior work has found no evidence that correcting factual beliefs about immigrant population sizes leads to attitude change (Lawrence and Sides 2014; Hopkins, Sides and Citrin 2016), these are likely not the only factual beliefs that inform voters' respective attitudes.

Love the Science, Hate the Scientists: Conservative Identity Protects Belief in Science and Undermines Trust in Scientists

Love the Science, Hate the Scientists: Conservative Identity Protects Belief in Science and Undermines Trust in Scientists. Marcus Mann, Cyrus Schleifer. Social Forces, soz156, December 23 2019. https://doi.org/10.1093/sf/soz156

Abstract: The decline in trust in the scientific community in the United States among political conservatives has been well established. But this observation is complicated by remarkably positive and stable attitudes toward scientific research itself. What explains the persistence of positive belief in science in the midst of such dramatic change? By leveraging research on the performativity of conservative identity, we argue that conservative scientific institutions have manufactured a scientific cultural repertoire that enables participation in this highly valued epistemological space while undermining scientific authority perceived as politically biased. We test our hypothesized link between conservative identity and scientific perceptions using panel data from the General Social Survey. We find that those with stable conservative identities hold more positive attitudes toward scientific research while simultaneously holding more negative attitudes towards the scientific community compared to those who switch to and from conservative political identities. These findings support a theory of a conservative scientific repertoire that is learned over time and that helps orient political conservatives in scientific debates that have political repercussions. Implications of these findings are discussed for researchers interested in the cultural differentiation of scientific authority and for stakeholders in scientific communication and its public policy.


Discussion and Conclusion

Confidence in the scientific community has declined among political conservatives in recent years but attitudes toward scientific research as a benefit to society have remained stable. Meanwhile, conservative social movements have established their own conservatively oriented scientific institutions (e.g., see Dunlap, Riley and McCright 2016; Dunlap and Jacques, 2013; Jacques, Dunlap, and Freeman, 2008; McCright & Dunlap, 2000, 2003, 2010; Gross et al., 2011) and the dawn of the Internet and social media has made it easier than ever for conservative audiences to access conservative knowledge. The preceding analysis aimed to show how these developments intersect by demonstrating that stable conservative partisans are more likely than their switching counterparts to distrust the scientific community and to believe that scientific research is a benefit to society. These findings support arguments that conservative efforts to communicate alternative scientific knowledge have been successful insofar as stable conservatives maintain trust in science while rejecting the authority of mainstream scientists. The implications of these developments are numerous.

First, this study replicates the findings of Gauchat (2012) and helps confirm one of the most dramatic trends in scientific perceptions in the last fifty years. Second, we build on previous work (O’Brien and Noy 2015; Roos 2017) that shows how rejections of mainstream scientific knowledge often signal specific cultural perceptions as opposed to deficits in scientific knowledge itself (although see Allum, Sturgis, Tabourazi, & Brunton-Smith, 2008; Sturgis & Allum, 2004). We contribute to this work by studying political identity and scientific attitudes and finding that rejections of scientists need not be driven by a broader rejection of scientific research itself. This is further evidence that cultural communities viewed as being anti-science maintain a complex arrangement of scientific perceptions that can include high levels of scientific knowledge and positive views of scientific research. Furthermore, consistent identification in such a community can be indicative of positive scientific attitudes.

We are not the first to examine how membership in a cultural community affects perceptions of science. Moscovici (1961/2008) coined the concept of “social representations” by studying how the advent of psychoanalysis was received and communicated among three different moral communities in France—urban-liberals, Catholics, and Communists—and observing how new scientific ideas were refracted through the organizational and cultural lenses of these social milieus. This study extends this long line of research on cultural membership and scientific perceptions by examining the issue of consistency in political identity and attitudes toward scientists and scientific research, as opposed to interpretations of a distinct scientific discipline or the relationship between scientific knowledge and attitudes.

More specifically, this research applies Perrin et al.’s (2014) performative theory of conservative identity and extends their work by examining it in the context of identity stability. Identity stability is important for a performative theory of political identity because it reflects enduring familiarity with and acceptance of elite characterizations of political identity. In other words, if conservatives learn to be conservative (or if any partisan learns to be partisan), identity stability is a direct reflection of a period in which this learning can occur and the resilience of this identity through national political change. We find that consistent identification predicts having learned that it is scientists, and not science itself, that produce findings counter to conservative political goals. Furthermore, learning implies teaching and we have also argued that the pattern of attitudes shown here is indicative of successful social movement efforts to establish alternative and conservatively oriented institutions of knowledge (Gross et al. 2011). In this respect, we join other scholars in identifying the construction of politically partisan knowledge institutions as an important social movement outcome that has been under-studied among social movement scholars (Frickel and Gross 2005; Gross et al. 2011) and especially by those interested in framing processes (Benford and Snow 2000; Snow et al. 1986).

Several limitations to our empirical analysis warrant discussion. Most importantly, these data were not ideal for examining the mechanisms of engagement with conservative science explicitly. Computational researchers are well positioned to more accurately measure exposure to, and consumption of, conservative scientific information online. This type of work is well underway in the context of political news media (Barberá et al. 2015; Conover, Ratkiewicz, and Francisco 2011; Etling, Roberts, and Faris 2014; Faris et al. 2017; Guess, Nyhan, and Reifler 2018), but very little explores the impact of conservative scientific institutions. Think tanks like the Heritage Foundation and Discovery Institute, while unique in their missions and ideologies, offer their audiences politically conservative and religiously fundamentalist scientific resources, respectively, while partisan content creators like “Prager University” provide conservative information to subscribers with the veneer of an academic approach. But the effect of increased exposure to these kinds of partisan scientific resources—whose main point of public contact is through the Internet and social media—remains unclear.

In this article, we were not able to directly measure the consumption of conservative scientific information on the Internet, but we can offer some suggestive evidence that getting scientific information from the Internet makes a difference for stable and unstable conservative attitudes. Using a question that asks, “Where do you get most of your information about science and technology?,” we can examine how using the Internet to consume scientific information affects differences between stable and unstable conservatives on our two dependent variables over time. Figure 2 shows these descriptive trends from 2006 to 2014 using fractional polynomial best fit trend lines with 95% confidence intervals. It is most important to note that stable conservatives who get their scientific information from the Internet are among the least likely to trust scientists over this timespan and the most likely, by a good margin, to see scientific research as a benefit. They are the group with the largest gap between their trust in scientists and belief in the benefits of scientific research.

[Figure 2. Descriptive trends in attitudes towards science among stable and unstable conservatives by science media outlet. Fractional polynomial best fit trend lines with 95% confidence interval in gray shading (Source: General Social Survey Panel, 2006–2014).]
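The trend lines in the figure are fractional polynomial fits. As a rough sketch of how a degree-1 fractional polynomial trend can be estimated (searching a small family of candidate powers for the one with the lowest residual error), the code below is illustrative only; the function names and data are invented and make no claim about the authors' actual estimation procedure:

```python
import numpy as np

# Candidate powers from the conventional degree-1 fractional polynomial
# family (Royston & Altman); p == 0 is taken to mean log(x).
POWERS = (-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0, 3.0)

def _transform(x, p):
    return np.log(x) if p == 0 else x ** p

def fit_fp1(x, y):
    """Fit y ~ b0 + b1 * x**p for each candidate power p and return
    (best_power, coefficients) for the fit with the lowest residual
    sum of squares."""
    best_rss, best_p, best_beta = np.inf, None, None
    for p in POWERS:
        t = _transform(x, p)
        X = np.column_stack([np.ones_like(t), t])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((X @ beta - y) ** 2))
        if rss < best_rss:
            best_rss, best_p, best_beta = rss, p, beta
    return best_p, best_beta

def predict_fp1(x, p, beta):
    """Evaluate the fitted trend line at x."""
    return beta[0] + beta[1] * _transform(x, p)
```

A confidence band like the gray shading in the figure would then be computed from the standard errors of the chosen fit; that step is omitted here for brevity.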


This aligns with our overall analyses—in that ostensibly greater access to partisan scientific authority exaggerates this gap for conservatives—but it remains a suggestive finding for future research to adjudicate more thoroughly. For instance, are these patterns really the result of better access to partisan science, or is there something qualitatively different about online scientific content that exaggerates perceptions of scientists as over-stepping their authority (Evans 2018)? And in what ways are the populations getting their scientific information online different from others? Work in this vein could help answer important descriptive questions about conservative scientific sources, including how pervasive and heterogeneous they are, and what associations exist between the sources themselves in terms of shared staff, audiences, and even content. A comprehensive study of public-facing scientific sources online could help map the cultural heterogeneity of scientific communication itself beyond the politically binary analysis provided here, and provide a welcome point of comparison by suggesting other cultural scientific repertoires that orient and enable participation in scientific debates.

Future research should also include qualitative examinations of the conservative scientific repertoire. Differences and similarities in how stable liberals and conservatives, both groups that report high levels of belief in scientific research as a benefit to society, talk about and understand scientific issues are not well understood. Just as Swidler (2001) examined how people brought the universally valued concept of love to bear on their particular circumstances, future researchers can examine how political partisans selectively deploy “science” and its related concepts in their daily lives. This includes further examination into how attitudes toward scientists and scientific research are partitioned and how this disassociation is expressed or reconciled in the context of in-depth interviews. Scholars of religion and science (see e.g., Ecklund 2012; Ecklund and Scheitle 2017; Evans 2018) have been hard at work on questions like these and have set the stage for similar work on political partisans, including in non-US contexts.

These findings also raise questions about how cultural groups navigate moments of institutional trust and their relationships with other communities that may not support their worldview. The title of this article is a play on the (conservative) Christian saying, “Love the Sinner, Hate the Sin”—a call to separate the actor (the sinner who might accept God’s forgiveness) and the action (the sin, which is against God’s will) in terms of one’s attitude toward a social performance (the sinner committing the sin). For our case, the process is inverted, with the political conservative showing low approval for the actor (the scientist) while maintaining a high approval for the action/process (the method of science). In both cases, individuals have the cognitive ability to separate actor and action in their evaluations, an ostensibly counter-intuitive process, hence the need for a snappy turn of phrase. Testing when and under what conditions people make striking actor/action distinctions in their evaluations is beyond the scope of this article. However, we demonstrate the integral role of identity and cultural membership in these processes, suggesting future research that might examine variation in actor/action evaluations among different cultural groups.

For example, we show how attitudes toward individual elites (scientists) are hurt, while attitudes toward the institutional practice (scientific research) are protected for stable political conservatives. But how this distinction between actors and action extends to other cultural groups and institutions depends on a variety of factors. Parallels might be found in how stable political liberals view capitalist institutions, where economic elites might be viewed unfavorably while belief in capitalism itself as an overall benefit to society remains stable. Other movements distrust elites and seek the abolition of entire institutions (e.g., anti-religious atheists), while still others distrust institutions while preserving positive attitudes toward individuals within them, as when political reactionary movements like the Tea Party or the Democratic Socialists successfully place leaders in elected political roles. This line of thinking suggests that actor/action distinctions are not indicative of conservatism itself or any kind of specifically conservative mentality (Mannheim 1993). We argue that one mechanism guiding the organization of these attitudes is whether an institution is politically useful (i.e., whether scientific appeals might help conservatives make political arguments) but further comparative studies can elucidate how different contexts shape attitudes toward individual elites and the institutions of which they are a part.

Finally, these results carry implications for science communication policy experts and strategists. Those conservatives most skeptical of man-made climate change and the scientists promoting it are also the most likely to believe that scientific research is a general benefit to society. Therefore, policy that promotes the idea of science as a valid epistemology in order to increase belief in anthropogenic climate change seems misguided. Rather, outreach efforts might be more effective if geared toward humanizing the scientific community and correcting misperceptions of scientists themselves. By improving public agreement on where legitimate and trustworthy science is being accomplished, future debates at the intersections of science and politics can begin to focus more on what problems to prioritize instead of what the problems are.

Women who quit Instagram, no longer exposed to direct evaluative feedback about their images, reported significantly higher levels of life satisfaction and positive affect

Taking a Short Break from Instagram: The Effects on Subjective Well-Being. Giulia Fioravanti, Alfonso Prostamo, and Silvia Casale. Cyberpsychology, Behavior, and Social Networking, Dec 17 2019. https://doi.org/10.1089/cyber.2019.0400

Abstract: This study investigated whether abstaining from Instagram (Ig) affects subjective well-being among young men and women. By comparing an intervention group (40 participants who took a break from Ig for a week) with a control group (40 participants who kept using Ig), we found that women who quit Ig reported significantly higher levels of life satisfaction and positive affect than women who kept using it. Whereas the increase in positive affect depended on social appearance comparison, life satisfaction rose independent of the tendency to compare one's own appearance with others. It is possible that users who are no longer exposed to direct evaluative feedback about their images on Ig—be it related to their appearance, habits, or opinions—can witness an increase in their global satisfaction levels. No significant effects were found among men.

Participants rated the neuroscience abstract as having stronger findings and as being more valid and reliable than the parapsychology abstract, despite the fact that the two abstracts were identical

Bias in the Evaluation of Psychology Studies: A Comparison of Parapsychology Versus Neuroscience. Bethany Butzer. EXPLORE, December 28 2019. https://doi.org/10.1016/j.explore.2019.12.010

Abstract: Research suggests that scientists display confirmation biases with regard to the evaluation of research studies, in that they evaluate results as being stronger when a study confirms their prior expectations. These biases may influence the peer review process, particularly for studies that present controversial findings. The purpose of the current study was to compare the evaluation of a parapsychology study versus a neuroscience study. One hundred participants with a background in psychology were randomly assigned to read and evaluate one of two virtually identical study abstracts (50 participants per group). One of the abstracts described the findings as if they were from a parapsychology study, whereas the other abstract described the findings as if they were from a neuroscience study. The results revealed that participants rated the neuroscience abstract as having stronger findings and as being more valid and reliable than the parapsychology abstract, despite the fact that the two abstracts were identical. Participants also displayed confirmation bias in their ratings of the parapsychology abstract, in that their ratings were correlated with their scores on transcendentalism (a measure of beliefs and experiences related to parapsychology, consciousness and reality). Specifically, higher transcendentalism was associated with more favorable ratings of the parapsychology abstract, whereas lower transcendentalism was associated with less favorable ratings. The findings suggest that psychologists need to be vigilant about potential biases that could impact their evaluations of parapsychology research during the peer review process.

Keywords: Bias; Research; Psychology; Confirmation bias; Parapsychology; Psi; Neuroscience

No strong evidence for a causal role of testosterone in promoting human aggression; correlations are positive but weak

Is testosterone linked to human aggression? A meta-analytic examination of the relationship between baseline, dynamic, and manipulated testosterone on human aggression. S. N. Geniole et al. Hormones and Behavior, December 28 2019, 104644. https://doi.org/10.1016/j.yhbeh.2019.104644

Highlights
• Baseline testosterone is positively (but weakly) correlated with human aggression. The relationship between baseline testosterone and aggression is significantly stronger in male vs. female samples.
• Context-dependent changes in testosterone are positively (but weakly) correlated with human aggression. The relationship between changes in testosterone and aggression is significantly stronger in male vs. female samples.
• No strong evidence for a causal role of testosterone in promoting human aggression

Abstract: Testosterone is often considered a critical regulator of aggressive behaviour. There is castration/replacement evidence that testosterone indeed drives aggression in some species, but causal evidence in humans is generally lacking and/or—for the few studies that have pharmacologically manipulated testosterone concentrations—inconsistent. More often researchers have examined differences in baseline testosterone concentrations between groups known to differ in aggressiveness (e.g., violent vs non-violent criminals) or within a given sample using a correlational approach. Nevertheless, testosterone is not static but instead fluctuates in response to cues of challenge in the environment, and these challenge-induced fluctuations may more strongly regulate situation-specific aggressive behaviour. Here, we quantitatively summarize literature from all three approaches (baseline, change, and manipulation), providing the most comprehensive meta-analysis of these testosterone-aggression associations/effects in humans to date. Baseline testosterone shared a weak but significant association with aggression (r = 0.054, 95% CIs [0.028, 0.080]), an effect that was stronger and significant in men (r = 0.071, 95% CIs [0.041, 0.101]), but not women (r = 0.002, 95% CIs [−0.041, 0.044]). Changes in T were positively correlated with aggression (r = 0.108, 95% CIs [0.041, 0.174]), an effect that was also stronger and significant in men (r = 0.162, 95% CIs [0.076, 0.246]), but not women (r = 0.010, 95% CIs [−0.090, 0.109]). The causal effects of testosterone on human aggression were weaker yet, and not statistically significant (r = 0.046, 95% CIs [−0.015, 0.108]). We discuss the multiple moderators identified here (e.g., offender status of samples, sex) and elsewhere that may explain these generally weak effects. We also offer suggestions regarding methodology and sample sizes to best capture these associations in future work.
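For readers unfamiliar with how per-study correlations like those above are combined, a standard approach is Fisher r-to-z meta-analytic pooling. The sketch below uses invented study values and a simple fixed-effect weighting purely for illustration; it is not the paper's data, nor necessarily its exact (possibly random-effects) model:

```python
import math

def pooled_correlation(studies, z_crit=1.96):
    """Fixed-effect meta-analysis of correlations via Fisher's r-to-z.
    `studies` is a list of (r, n) pairs; each study is weighted by
    n - 3, the inverse variance of its z-transformed correlation.
    Returns (pooled_r, ci_low, ci_high) back-transformed to the r scale."""
    num = den = 0.0
    for r, n in studies:
        w = n - 3
        num += w * math.atanh(r)  # Fisher z-transform of r
        den += w
    z = num / den
    se = 1.0 / math.sqrt(den)
    return (math.tanh(z),
            math.tanh(z - z_crit * se),
            math.tanh(z + z_crit * se))

# Hypothetical studies: (observed r, sample size)
studies = [(0.05, 200), (0.07, 500), (0.10, 300)]
pooled_r, ci_low, ci_high = pooled_correlation(studies)
```

With weak effects of this size, the pooled estimate stays small and the confidence interval narrows with total sample size, which mirrors why the meta-analysis above can report a significant yet tiny r.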

Keywords: Challenge hypothesis; Androgens; Sex differences; Pharmacological challenge

Check also Hormones in speed-dating: The role of testosterone and cortisol in attraction. Leander van der Meij et al. Hormones and Behavior, Volume 116, November 2019, 104555. https://www.bipartisanalliance.com/2019/11/hormones-in-speed-dating-role-of.html

Having too many men – although sex ratio skew might increase competition and violence among some members of the population, overall levels of those same behaviors might decline

Ecological Sex Ratios and Human Mating. Jon K. Maner, Joshua M. Ackerman. Trends in Cognitive Sciences, December 28 2019. https://doi.org/10.1016/j.tics.2019.11.008

Abstract: The ratio of men to women in a given ecology can have profound influences on a range of interpersonal processes, from marriage and divorce rates to risk-taking and violent crime. Here, we organize such processes into two categories – intersexual choice and intrasexual competition – representing focal effects of imbalanced sex ratios.

Keywords: evolution; psychology; relationships; cognition; sexuality; competition

SR = sex ratio

Conclusion
Although evolutionary psychology is sometimes viewed as focusing exclusively on phenomena assumed to be invariant across time, people, and cultures (psychological universals), several lines of research demonstrate the important role of ecological contingencies [12]. Humans display enormous flexibility, calibrating their behavior in a facultative manner to variables in the local environment [13]. Ecological SRs reflect a key variable to which men and women adjust their mating behavior. Those adjustments are highly strategic and are aimed at enhancing reproductive success given features of the mating environment and of the individual person. This last insight helps answer our opening question about the consequences of having too many men – although SR skew might increase competition and violence among some members of the population, overall levels of those same behaviors might decline. Identifying individual differences and situational factors that moderate SR effects, as well as proximate cognitive mechanisms that underlie those effects, provides a fertile ground for future research. Future work would also benefit from delineating more clearly the specific social cues and population boundaries that people use to assess SRs (Box 1).

Box 1. Unanswered Questions about Ecological SRs
• What specific cues and population boundaries do people use to assess SRs?
o Do people base their assessments on immediate interaction partners, their local communities, or broader social/ecological borders?
o How should researchers navigate difficulties associated with analysis of data aggregated at population levels (e.g., problems can arise when inferring individual processes from regionally aggregated data)? [7]

• What degree of SR skew is required to affect behavioral outcomes?
o Few systematic analyses of this question exist in humans. Do minor imbalances in SRs affect behavior, or are larger and more obvious imbalances required?

• When should various types of SRs (e.g., adult SR vs. operational SR) be distinguished theoretically and empirically?
o The ecological literature focuses primarily on adult SRs, but these sometimes include nonreproducing individuals less relevant for mating dynamics (e.g., elderly, sexual minorities).

• On what key ecological, cultural, and individual difference factors are SR effects contingent?
o Relatively little work has been done to address this question, but preliminary evidence supports certain factors (e.g., mate value, social status, conflict levels) and not others (e.g., life expectancy, wealth).

• When do SRs have effects beyond those immediately predicated on mating dynamics?
o What other downstream behaviors and cognitions are influenced by SR skew? Some evidence suggests, for example, that SRs affect distal outcomes including investment behavior, consumer spending, career choices, and health decisions.

The avoidance of obese people is well documented, but its psychological basis is poorly understood; obesity appears to trigger emotional and avoidance-based responses equivalent to those elicited by a contagious disease

Is obesity treated like a contagious disease? Caley Tapp, Megan Oaten, Richard J. Stevenson, Stefano Occhipinti, Ravjinder Thandi. Journal of Applied Social Psychology, December 27 2019. https://doi.org/10.1111/jasp.12650

Abstract: The behavioral avoidance of people with obesity is well documented, but its psychological basis is poorly understood. Based upon a disease avoidance account of stigmatization, we tested whether a person with obesity triggers equivalent self‐reported emotional and avoidant‐based responses as a contagious disease (i.e., influenza). Two hundred and sixty‐four participants rated images depicting real disease signs (i.e., person with influenza), false alarms (i.e., person with obesity), person with facial bruising (i.e., negative control), and a healthy control for induced emotion and willingness for contact along increasing levels of physical proximity. Consistent with our prediction, as the prospect for contact became more intimate, self‐reported avoidance was equivalent in the influenza and obese target conditions, with both significantly exceeding reactions to the negative and healthy controls. In addition, participants reported greatest levels of disgust toward the obese and influenza target conditions. These results are consistent with an evolved predisposition to avoid individuals with disease signs. Implicit avoidance occurs even when participants know explicitly that such signs—here, obese body form—result from a noncontagious condition. Our findings provide important evidence for a disease avoidance explanation of the stigmatization of people with obesity.


4 | DISCUSSION
We predicted that participant desire for avoidance of a person with obesity and a person with influenza would significantly exceed avoidant-based responses toward healthy and negative controls, that this avoidance desire would increase as the prospect for contact became more intimate, and that this effect would be more pronounced for the obesity and influenza targets. Consistent with our prediction, when the prospect for contact was intimate (i.e., kissing, sexual activity), self-reported avoidance was equivalent for the influenza and obesity targets, with both significantly exceeding reactions to the negative and healthy controls. By contrast, participants were significantly more willing to have more intimate levels of contact with the bruise or healthy target. As the prospect for contact became sexualized (i.e., kiss on the mouth and sex), both male and female participants reported the greatest, and equivalent, avoidance toward the obesity and influenza targets, relative to the negative and healthy controls. When the contact involved real physical intimacy, participants reacted toward the obesity target as if they were a contagious disease carrier. Consistent with previous research examining false disease signs (e.g., Kouznetsova et al., 2012; Ryan et al., 2012), participants correctly indicated that obesity is not a contagious condition and that influenza is a contagious condition.

In support of a disease avoidance explanation, our results also show that participants felt higher levels of disgust when exposed to both a person with obesity and a person with influenza, compared to the healthy and bruise targets. Although previous research has found gender differences in trait disgust predicting responses toward people with obesity (Fisher et al., 2013; Lieberman et al., 2012), no differences in felt disgust between male and female participants emerged in our study. Gender differences did emerge for ratings of fear and anger, and this was by and large driven by the bruise target: female participants felt more fear toward the bruise target, whereas male participants felt more anger in response to it. We suggest that this is due to the differential subjective meaning of a facial bruise for men and women, in that a man with a bruised face implies that he has been involved in a fight, whereas a woman with a bruise is more likely to be viewed as a victim of violence. While men and women differed in their anger and fear responses toward the bruise target, they did not significantly differ in their disgust or avoidance responses toward it; thus, the differences in emotion expressed toward the bruise target are unlikely to be the driver of participant avoidance responses.

As this study obtained willingness for physical contact via self-reports, future research should examine whether the self-reported desire to avoid intimate contact with people with obesity demonstrated in the present study is expressed behaviorally. Although it is clearly not possible to examine intimate levels of physical contact in an experiment, there are other methods of assessing whether disgust-driven avoidance behavior occurs. A study conducted by Ryan et al. (2012) utilized a behavioral outcome measure to compare responses to a person with a facial birthmark and a person with influenza, but this type of method has yet to be extended to other nonnormative body features, such as obesity.

A limitation of the present research was that we did not gather information about participants' own weight, which meant that we were unable to examine the effect of participant weight status on the effects of interest. However, there is a growing body of evidence which suggests that people with obesity themselves hold negative stereotypes about people with obesity (e.g., Papadopoulos & Brennan, 2015; Wang, Brownell, & Wadden, 2004). Thus, it is unlikely that differential effects across levels of weight would exist with regard to the desire to avoid physical contact with a person with obesity. Future research should include appropriate measures of participant weight in order to provide further empirical evidence regarding the effect of participant weight on stigmatization of people with obesity, with a particular focus on intimate levels of physical contact.

Future research should also consider the role of relevant individual differences and make use of designs that allow this to be examined. Differences in levels of perceived vulnerability to disease or trait levels of disgust may moderate the findings of the present research. It is likely that people with higher levels of perceived vulnerability to disease or higher levels of trait disgust would display even more of a desire for avoidance, and this effect should exist for both a person with influenza and a person with obesity. In addition, it would be valuable for future research to incorporate a measure of participant disgust at the prospect of each level of physical contact, rather than just overall levels of disgust felt toward the target. This would allow for a more fine-grained exploration of the role of disgust in the avoidance processes demonstrated.

In conclusion, the finding of greatest desire to avoid intimate physical contact with the obesity and influenza targets, in combination with the finding that both targets also generated the greatest self-reported disgust, suggests the activation of a disease avoidance system. The display of some superficial form of physical nonnormality leads observers to respond to a person as though they are a contagious disease carrier (Kouznetsova et al., 2012; Schaller & Duncan, 2007). Our results show that a person with obesity appears to be treated as though they possess a disease cue—a false alarm in this case. A likely explanation is that obese body form was heuristically perceived as a sign of disease that triggered a disgust and avoidance response as the prospect for disease transmission increased (i.e., as intimacy of physical contact increased). These findings make an important contribution to our understanding of the psychological basis underlying the stigmatization of people with obesity. It would be useful for interventions aimed at reducing stigma toward people with obesity to take a disease avoidance explanation into account, particularly with regard to the role of disgust.

From 2013... Of the 7,000 languages spoken today, some 2,500 are generally considered endangered; this vastly underestimates the danger: less than 5% of all languages can still ascend to the digital realm

Kornai A (2013) Digital Language Death. PLoS ONE 8(10): e77056. https://doi.org/10.1371/journal.pone.0077056

Abstract: Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide.

Conclusions
We have machine classified the world’s languages as digitally ascending (including all vital, thriving, and borderline cases) or not, and concluded, optimistically, that the former class is at best 5% of the latter. Broken down to individual languages and language groups the situation is quite complex and does not lend itself to a straightforward summary. In our subjective estimate, no more than a third of the incubator languages will make the transition to the digital age. As the example of the erstwhile Klingon wikipedia (now hosted on Wikia) shows, a group of enthusiasts can do wonders, but it cannot create a genuine community. The wikipedia language policy, https://meta.wikimedia.org/wiki/Language_proposal_policy, demanding that “at least five active users must edit that language regularly before a test project will be considered successful” can hardly be more lenient, but the actual bar is much higher. Wikipedia is a good place for digitally-minded speakers to congregate, but the natural outcome of these efforts is a heritage project, not a live community.
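The shape of such a machine classification can be sketched as a simple rule over digital-presence signals. Kornai's actual classifier combined multiple indicators (wikipedia activity, web crawl sizes, and so on); the feature names and thresholds below are invented for illustration and are not his model:

```python
def digital_status(wiki_active_editors, crawl_words):
    """Toy classifier assigning a digital-vitality label from two
    hypothetical signals: active wikipedia editors and words of
    online text found by a web crawl. Thresholds are invented."""
    if wiki_active_editors >= 100 and crawl_words >= 1_000_000:
        return "thriving"
    if wiki_active_editors >= 5 and crawl_words >= 100_000:
        return "vital"
    if wiki_active_editors >= 5:
        return "borderline"
    return "still"  # not digitally ascending
```

Note how the "at least five active users" wikipedia policy quoted above appears here as the lowest rung: a handful of editors with little web text yields only "borderline", matching the paper's point that the effective bar for true digital ascent is much higher.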

A community of wikipedia editors that work together to anchor to the web the culture carried by the language is a necessary but insufficient condition of true survival. By definition, digital ascent requires use in a broad variety of digital contexts. This is not to deny the value of heritage preservation, for the importance of such projects can hardly be overstated, but language survival in the digital age is essentially closed off to local language varieties whose speakers have at the time of the Industrial Revolution already ceded both prestige and core areas of functionality to the leading standard koinés, the varieties we call, without qualification, French, German, and Italian today.

A typical example is Piedmontese, still spoken by some 2–3 m people in the Torino region, and even recognized as having official status by the regional administration of Piedmont, but without any significant digital presence. More closed communities perhaps have a better chance: Faroese, with less than 50 k speakers, but with a high quality wikipedia, could be an example. There are glimmers of hope, for example [2] reported 40,000 downloads for a smartphone app to learn West Flemish dialect words and expressions, but on the whole, the chances of digital survival for those languages that participate in widespread bilingualism with a thriving alternative, in particular the chances of any minority language of the British Isles, are rather slim.

In rare cases, such as that of Kurdish, we may see the emergence of a digital koiné in a situation where today separate Northern (Kurmanji), Central (Sorani), and Southern (Kermanshahi) versions are maintained (the latter as an incubator). But there is no royal road to the digital age. While our study is synchronic only, the diachronic path to literacy and digital literacy is well understood: it takes a Caxton, or at any rate a significant publishing infrastructure, to enforce a standard, and it takes many years of formal education and a concentrated effort on the part of the community to train computational linguists who can develop the necessary tools, from transliterators (such as the one already powering the Chinese wikipedia) to spellcheckers and machine translation for their language. Perhaps the most remarkable example of this is Basque, which enjoys the benefits of a far-sighted EU language policy, but such success stories are hardly, if at all, relevant to economically more blighted regions with greater language diversity.

The machine translation services offered by Google are an increasingly important driver of cross-language communication. As expected, the first several releases stayed entirely in the thriving zone, and to this day all language pairs are across vital and thriving languages, with the exception of French – Haitian Creole. Were it not for the special attention DARPA, one of the main sponsors of machine translation, devoted to Haitian Creole, it is doubtful we would have any MT aimed at this language. There is no reason whatsoever to suppose the Haitian government would have, or even could have, sponsored a similar effort [32]. Be that as it may, Google Translate for any language pair currently likes to have gigaword corpora in the source and target languages and about a million words of parallel text. For vital languages this is not a hard barrier to cross. We can generally put together a gigaword corpus just by crawling the web, and the standardly translated texts form a solid basis for putting together a parallel corpus [33]. But for borderline languages this is a real problem, because online material is so thinly spread over the web that we need techniques specifically designed to find it [16], and even these techniques yield only a drop in the bucket: instead of the gigaword monolingual corpora that we would need, the average language has only a few thousand words in the Crúbadán crawl. To make matters worse, the results of this crawl are not available to the public for fear of copyright infringement, yet in the digital age what cannot be downloaded does not exist.
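As a back-of-the-envelope illustration (not Google's actual admission criteria), the data thresholds mentioned above can be sketched as a simple feasibility check; all corpus figures below are invented for illustration:

```python
# Illustrative resource check for a candidate MT language pair, using the
# rough thresholds cited in the text: ~1e9 words of monolingual text on each
# side and ~1e6 words of parallel text. The figures passed in are made up.

GIGAWORD = 1_000_000_000   # monolingual corpus target per language (words)
PARALLEL = 1_000_000       # parallel corpus target for the pair (words)

def mt_feasible(src_words: int, tgt_words: int, parallel_words: int) -> bool:
    """True if the pair clears the (assumed) data thresholds for MT."""
    return (src_words >= GIGAWORD and
            tgt_words >= GIGAWORD and
            parallel_words >= PARALLEL)

# A vital language crawled from the web easily clears the bar ...
print(mt_feasible(3_200_000_000, 5_100_000_000, 2_400_000))  # True

# ... while a borderline language with a few thousand crawled words does not.
print(mt_feasible(3_200_000_000, 4_800, 0))                  # False
```

The point of the sketch is the orders of magnitude: three decimal orders separate the monolingual and parallel requirements, and a Crúbadán-sized crawl of a few thousand words falls six orders short of the monolingual target.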

The digital situation is far worse than the consensus figure of 2,500 to 3,000 endangered languages would suggest. Even the most pessimistic survey [34] assumed that as many as 600 languages, 10% of the total, were safe, but reports from the field increasingly contradict this. For British Columbia, [35] writes:

Here in BC, for example, the prospect of the survival of the native languages is nil for all of the languages other than Slave and Cree, which are somewhat more viable because they are still being learned by children in a few remote communities outside of BC. The native-language-as-second-language programs are so bad that I have NEVER encountered a child who has acquired any sort of functional command (and I don’t mean fluency - I mean even simple conversational ability or the ability to read and understand a fairly simple paragraph or non-ritual bit of conversation) through such a program. I have said this publicly on several occasions, at meetings of native language teachers and so forth, and have never been contradicted. Even if these programs were greatly improved, we know, from e.g. the results of French instruction, to which oodles of resources are devoted, that we could not expect to produce speakers sufficiently fluent to marry each other, make babies, and bring them up speaking the languages. It is perfectly clear that the only hope of revitalizing these languages is true immersion, but there are only two such programs in the province and there is little prospect of any more. The upshot is that the only reasonable policy is: (a) to document the languages thoroughly, both for scientific purposes and in the hope that perhaps, at some future time, conditions will have changed and if the communities are still interested, they can perhaps be revived then; (b) to focus school programs on the written language as vehicle of culture, like Latin, Hebrew, Sanskrit, etc. and on language appreciation. Nonetheless, there is no systematic program of documentation and instructional efforts are aimed almost entirely at conversation.

Cree, with a population of 117,400 (2006), actually has a wikipedia at http://cr.wikipedia.org but the real ratio is only 0.02, suggestive of a hobbyist project rather than a true community, an impression further supported by the fact that the Cree wikipedia has gathered fewer than 60 articles in the past six years. Slave (3,500 speakers in 2006) is not even in the incubator stage. This is to be compared to the over 30 languages listed by the Summer Institute of Linguistics for BC. In reality, there are currently fewer than 250 digitally ascending languages worldwide, and about half of the borderline cases are like Moroccan Arabic (ary), low-prestige spoken dialects of major languages whose signs of vitality really originate with the high-prestige acrolect. This suggests that in the long run no more than a third of the borderline cases will become vital. One group of languages that is particularly hard hit is the 120+ signed languages currently in use. Aside from American Sign Language, which is slowly but steadily acquiring digital dictionary data and search algorithms [36], it is perhaps the emerging International Sign [37] that has the best chances of survival.
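A minimal sketch of the "real ratio" figure used above, on the assumption that it is the fraction of content-bearing ("real") articles among all pages of a wikipedia; the page counts below are illustrative, not actual Cree statistics:

```python
# Hypothetical computation of a wikipedia's 'real ratio', assumed here to be
# the share of content-bearing articles among all pages (stubs, templates,
# talk pages, etc. included in the denominator). Counts are invented.

def real_ratio(real_articles: int, total_pages: int) -> float:
    """Fraction of a wikipedia's pages that carry real content."""
    if total_pages == 0:
        return 0.0
    return real_articles / total_pages

# A hobbyist project: a handful of real articles amid stubs and boilerplate.
print(round(real_ratio(58, 2900), 2))        # 0.02

# A live editor community would sit far higher.
print(round(real_ratio(45_000, 60_000), 2))  # 0.75
```

On this reading, a ratio of 0.02 means that for every content-bearing article there are roughly fifty pages of administrative scaffolding, which is what distinguishes a hobbyist shell from a genuine community.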

There could be another 20 spoken languages, still at or even before the wikipedia incubator stage, that may make it, but every one of these faces an uphill struggle. Of the 7,000 languages still alive, perhaps 2,500 will survive, in the classical sense, for another century. With only 250 digital survivors, all the others must inevitably drift towards digital heritage status (Nynorsk) or digital extinction (Mandinka). This makes language preservation projects such as http://www.endangeredlanguages.com all the more important. To quote from [6]:

Each language reflects a unique world-view and culture complex, mirroring the manner in which a speech community has resolved its problems in dealing with the world, and has formulated its thinking, its system of philosophy and understanding of the world around it. In this, each language is the means of expression of the intangible cultural heritage of people, and it remains a reflection of this culture for some time even after the culture which underlies it decays and crumbles, often under the impact of an intrusive, powerful, usually metropolitan, different culture. However, with the death and disappearance of such a language, an irreplaceable unit in our knowledge and understanding of human thought and world-view is lost forever.

Unfortunately, at a practical level heritage projects (including wikipedia incubators) are haphazard, with no systematic programs of documentation. Resources are often squandered, both in the EU and outside, on feel-good revitalization efforts that make no sense in light of the preexisting functional loss and economic incentives that work against language diversity [38].

Evidently, what we are witnessing is not just a massive die-off of the world’s languages, it is the final act of the Neolithic Revolution, with the urban agriculturalists moving on to a different, digital plane of existence, leaving the hunter-gatherers and nomad pastoralists behind. As an example, consider Komi, with two wikipedias corresponding to the two main varieties (Permyak, 94,000 speakers, and Zyrian, 293,000 speakers), both with alarmingly low real ratios. Given that both varieties have several dialects, some already extinct and some clearly still alive, the best hope is for a koiné to emerge around the dialect of the main city, Syktyvkar. Once the orthography is standardized, the university (where the main language of education is Russian) can in principle turn out computational linguists ready to create a spellchecker, an essential first step toward digital literacy [39]. But the results will benefit the koiné speakers, and the low-prestige rural Zyrian dialects are likely to be left behind.

What must be kept in mind is that the scenario described for Komi is optimistic. There are several hundred thousand speakers, still amounting to about a quarter of the local population. There is a university. There are strong economic incentives (oil, timber) to develop the region further. But for the 95% of the world’s languages where one or more of these drivers are missing, there is very little hope of crossing the digital divide.


Are errors detected before they occur? Early error sensations revealed by metacognitive judgments on the timing of error awareness. Francesco Di Gregorio, Martin E. Maier, Marco Steinhauser. Consciousness and Cognition. Volume 77, January 2020, 102857. https://doi.org/10.1016/j.concog.2019.102857

Highlights
• Humans frequently report that they detected errors already before executing the error response.
• Early error sensations occur consistently across tasks and metacognitive measures.
• Early error sensations are not caused by an expectation bias.

Abstract: Errors in choice tasks are not only detected fast and reliably; participants often report that they knew that an error occurred already before a response was produced. These early error sensations stand in contrast with evidence suggesting that the earliest neural correlates of error awareness emerge around 300 ms after erroneous responses. The present study aimed to investigate whether anecdotal evidence for early error sensations can be corroborated in a controlled study in which participants provide metacognitive judgments on the subjective timing of error awareness. In Experiment 1, participants had to report whether they became aware of their errors before or after the response. In Experiment 2, we measured confidence in these metacognitive judgments. Our data show that participants report early error sensations with high confidence in the majority of error trials across paradigms and experiments. These results provide first evidence for early error sensations, informing theories of error awareness.

Keywords: Error awareness; Error detection; Metacognition


4. General discussion

Participants in experiments on error detection frequently report that they already knew an error had occurred before the response was executed, a phenomenon we term early error sensation. The goal of the present study was to investigate whether these anecdotally reported early error sensations exist and whether they can be reliably reported. In four experiments using two experimental approaches, we provided evidence that early error sensations indeed exist, and that they occur on the majority of error trials. When participants were asked to classify responses in a flanker task as correct, as early detected errors, or as late detected errors in Experiment 1a, they reported early errors on 73.7% of error trials. When an additional category for detected errors with unclear timing was introduced in Experiment 1b, early errors were reported on 59.1% of trials. When participants had to wager on the feeling of early error detection, they placed high bets on 62.4% (Exp. 2a) and 70.9% (Exp. 2b) of error trials. These data demonstrate that early error sensations are reported very consistently across different primary tasks (flanker task vs. number/letter discrimination) and secondary tasks (error classification vs. post-decision wagering).
Crucial, however, is the question whether these introspective reports indeed reflect that errors were detected before the response, or whether participants were unable to discriminate between early and late errors and simply guessed that early errors must occasionally occur. A challenging problem for measuring early error sensations is that we cannot objectively determine whether a given error was detected early or late. To deal with this problem, we introduced a reference for the metacognitive reports of early error sensations. In Experiment 2, we used a Visual Awareness task in which participants had to wager on the accuracy of their responses. In the subsequent Error Awareness task, we instructed participants to place high bets on early error sensations only if they were as confident as they had been for the high bets in the Visual Awareness task. We argued that this induces a common metric for judging confidence in the two tasks, which allowed us to interpret the metacognitive reports of early error detection with respect to the metacognitive judgments of visual awareness. This reasoning receives support from previous findings showing that humans represent confidence in a task-unspecific format, which allows them to compare confidence across tasks with a precision similar to that of comparisons within tasks (de Gardelle & Mamassian, 2014). Moreover, it has recently been suggested that integrating information from different sources into a common metric might even be the major purpose of metacognition (Shea & Frith, 2019). In Experiment 2a, the frequencies of high bets were coincidentally similar in both tasks. We can thus infer that the average confidence with which participants reported early error sensations in this experiment corresponded to the average confidence with which they were aware of the visual stimuli in the Visual Awareness task. This confidence level ought to be rather high, given that objective performance in the Visual Awareness task was far above chance level.
We found no evidence that metacognitive reports of early error sensations were subject to an expectation bias. If participants simply guessed that early error sensations must occasionally occur, these guesses should be influenced by expectations about the frequency of early error sensations. To investigate whether such an expectation bias exists, we manipulated the difficulty of the Visual Awareness task, and thus the frequency of high bets in this task. However, whereas the frequency of high bets in the Visual Awareness task varied between Experiments 2a and 2b, the frequency of high bets in the Error Awareness task remained constant across the two experiments. This suggests that metacognitive judgments about early error sensations are not influenced by a specific expectation bias induced by the frequency of high bets in the Visual Awareness task. While we cannot fully exclude a general bias towards instruction-driven expectations about early error sensations, our results strongly suggest that metacognitive judgments on early error sensations are very consistent and reliable across experimental procedures.
We found no evidence that early and late detected errors differ with respect to any objective features. It has been reported that uncertainty or conflict during response selection can influence post-response decision processes and metacognitive judgments about errors (Steinhauser et al., 2008; Yeung & Summerfield, 2012). As a consequence, variables like stimulus congruency or RT could potentially influence subjective judgments about early error sensations. However, we found no robust evidence that this was the case in the present study. Participants reported early error sensations in a similar proportion for congruent and incongruent errors in Experiment 1. Moreover, RTs were similar across all error types. A small RT difference between early and late detected errors in Experiment 1a disappeared when we controlled for errors with unclear timing in Experiment 1b. This suggests that the emergence of early error sensations is not related to specific features of task processing like stimulus congruency or RTs. Thus, our data provide little evidence that early error sensations reflect the objective latency of error detection, which has been found to correlate with RT when response speed was directly manipulated (Steinhauser et al., 2008).
An important question is why early error sensations occurred on the majority of trials whereas the neural correlates of error awareness do not emerge until around 300 ms after an error (e.g., Steinhauser & Yeung, 2010). There are at least two possible explanations. A first explanation is that conclusions about the timing of error awareness from EEG measures like the Pe are incorrect. The Pe is often considered the earliest neural correlate of error awareness, and the role of the Pe in the emergence of error awareness has been described within an evidence accumulation account (Steinhauser & Yeung, 2010; Ullsperger et al., 2010). It is assumed that the Pe reflects the accumulated evidence that an error has occurred, and that error awareness emerges when this evidence exceeds a threshold. The evidence is provided by cognitive, autonomous, motor and sensory processing (Bode & Stahl, 2014; Wessel et al., 2011; Wessel et al., 2012), but does not necessarily rely on early error processing represented by the Ne/ERN (Di Gregorio et al., 2018). One possibility is that the feeling of error awareness emerges already before the Pe, for instance, at the time point of the Ne/ERN or even earlier (Bode & Stahl, 2014). The Pe could represent a later stage of metacognitive processing, perhaps related to the emergence of confidence about response accuracy (Boldt & Yeung, 2015).
A second explanation is that early error sensations are a metacognitive illusion. Error awareness could emerge at the time of the Pe, but the illusion is created that the error had been detected already before the response. This mechanism could serve to subjectively synchronize error awareness with the timing of the objective error, in the same way as visual awareness is subjectively aligned with the onset of a visual stimulus. In the context of visual awareness, expectations and other top-down variables can influence the accumulation of sensory evidence and consequently metacognitive judgments about stimulus awareness (de Lange et al., 2010; Kouider et al., 2010). Moreover, a backward referral process has been assumed to synchronize the subjective time point of visual awareness with the objective stimulus to create a coherent perception in the stream of consciousness (Libet et al., 1979; Libet et al., 1983). A similar process could align the subjective time point of error awareness with the emergence of the objective error. This temporal alignment of actions (i.e., a response) and their effects (i.e., the feeling of being incorrect) could further serve to evoke a sense of agency, i.e., the feeling of having caused an effect. Indeed, previous studies have shown that action-effect contingencies are influenced by their temporal contiguity and vice versa. Humans tend to perceive two events as more causally related the closer they occur in time (Greville & Buehner, 2010), and causality judgments correlate with the perceived temporal contiguity between actions and their sensory effects (Haering & Kiesel, 2016). In other words, these metacognitive illusions of early error sensations could serve to reconstruct temporal contiguity between perception, action and metacognitive contents (Kouider et al., 2010).
While we obtained clear and robust results across several experiments, the present method also has some limitations. A first limitation is that using a categorical measure for the timing of error detection implies a loss of information, as time is a continuous phenomenon. However, differentiating only between errors detected before and after the response has the advantage of imposing a considerably lower cognitive load than a continuous measure. For instance, in the classical Libet studies (Libet et al., 1983), participants had to indicate the time of voluntary action initiation on a visual clock. However, in addition to considerable methodological weaknesses (Trevena & Miller, 2002), monitoring a clock represents a difficult secondary task that presumably interferes with both the primary task and the task of detecting errors. In contrast, our categorical measure uses the response as a reference rather than a continuous timer. As error detection already involves response monitoring (Steinhauser et al., 2008), only minimal additional load should be imposed.
As already discussed, a second limitation is that we have no objective measure that verifies the existence of early error sensations. Future studies could solve this problem by measuring neural correlates of early error sensations. Strong evidence for the existence of early error sensations would be provided if not only the Pe but also the earlier Ne/ERN correlated with early error sensations. If only the Pe differed between early and late detected errors, this would suggest that early error sensations emerge during the later stage of conscious error processing. However, if such a difference was found also for the Ne/ERN, this would point to early error signals such as response conflict (Yeung et al., 2004) or prediction errors (Holroyd & Coles, 2002) as the origin of early error sensations. It is even possible that brain activity preceding the response can affect metacognitive judgments on early error sensations. ERP differences between errors and correct responses have been found prior to the response (Bode & Stahl, 2014) or even on the previous trial of simple tasks (Hajcak et al., 2005; Hoonakker et al., 2016; Ridderinkhof et al., 2003), as well as in tasks involving complex sequences of motor programs such as piano playing (Maidhof, Rieger, Prinz, & Kloesh, 2009). In a similar vein, a study using self-report measures has revealed that internal error prediction occurs before responses in skilled typing (Rieger & Bart, 2016). Here, the question arises whether this activity serves as a cue for metacognitive judgments, or whether metacognition relies on direct access to the timing of these neural events.
A further question is whether early error sensations are related to early incorrect response activation. On correct trials, early incorrect response activation leads to a phenomenon called partial errors (Burle et al., 2002; Coles et al., 1995; Endrass et al., 2008), which can be consciously reported by participants (Rochet, Spieser, Casini, Hasbroucq, & Burle, 2014). Future studies could investigate whether such early incorrect response activation on error trials is responsible for early error sensations. Indeed, lower response force for errors than for correct responses has been shown in skilled typing (Rabbitt, 1978). As this phenomenon has been interpreted as resulting from inhibition of the error response before actual response execution, it could be taken as indirect evidence for early error sensations. Future studies could examine whether errors accompanied by early error sensations are executed with lower response force than late errors.
The present study provides first evidence that participants have the subjective feeling of detecting errors already before they occur. We show that these early error sensations can be robustly measured across different tasks and metacognitive judgments. Our results add to the broad body of evidence that humans have metacognitive access to a multitude of performance parameters. Previous studies have shown that participants are able to report whether an error has occurred or not (Rabbitt, 1968; Rabbitt, 2002), to provide graded confidence judgments on the accuracy of their response (Boldt & Yeung, 2015), to classify the type of error they committed (i.e., to which distractor stimulus they responded; Di Gregorio et al., 2016), and to estimate their RTs in choice tasks (Bryce & Bratzke, 2014). These metacognitive contents are used for optimizing decision processes (Desender et al., 2014; Desender et al., 2018). Metacognitive representations of the timing of error detection could form another piece of information to support this optimization.