Sunday, July 31, 2022

Violence against all, pregnant women & children included: Organized violence from the rulers

This is human nature... From Maribel Fierro's Violence against women in Andalusi historical sources (third/ninth-seventh/thirteenth centuries). In: Violence in Islamic Thought from the Qur'an to the Mongols, Robert Gleave, Istvan Kristo-Nagy, Eds. 2015. Cleaned of references:

The situation in Cordoba during the fitna barbariyya – so-called because the Cordobans rejected and fought against those caliphs who were supported by the Berbers – is described along the same lines: depravity reigned, wine was drunk publicly and adultery and sodomy were allowed. The Cordobans who showed a preference for Sulaymān al-Mustaʿīn – known as the caliph of the Berbers – were killed, together with some of the women who were with them; and other women were eventually sold as if they were prisoners of war. [...]. The caliph Muḥammad b. Hishām b. ʿAbd al-Jabbār al-Mahdī ordered the houses of the Cordoban Berbers to be pillaged and allowed their harems to be violated: women were made captive and sold in the dār al-banāt, and pregnant women were killed.

After al-Mahdī had escaped from Cordoba and was trying to recover his authority, his ally, the general Wāḍiḥ, made a pact with the Christians, according to which, among other things, the Christians were allowed to take the wives of the Berbers they defeated. When al-Mahdi returned to power, in spite of the fact that the Berbers had left Cordoba, he ordered that anybody resembling a Berber be killed, including children and pregnant women.

[...] 

Every man was killed, the harems were dishonoured and the virgins raped: blood fell down to their feet, and they were left naked and crying. The blacks and the lowest soldiers of the Zirid troops took possession of the women, so that their tents became full with them, until the Zirid king Badis took pity on them after three days. They were then left alone, naked and barefoot, and made their way to other villages and fortresses.

[...]

Captivity and enslavement were bad enough, but there was also no lack of cruelty, which is often represented when dealing with the treatment of virgins. The military leader of the Christians [...] included among the captives that were his part of the booty virgins who were eight and ten years old. The conquerors took possession of the houses with their inhabitants and all their belongings: women were raped in front of their relatives, those who were married in front of their husbands, and virgins in front of their fathers, who were powerless, because they were held in chains; Muslim women so abused were eventually passed to slaves, so that they could then take pleasure with them.


It is very difficult to know how much of this is history rewritten to make the previous ruler look bad, how much is propaganda against religious enemies, and so on. But even so, some of these things did happen, probably in smaller numbers than the sources report.

Individuals tend to conform to the group's moral judgments even without the presence of the group's members, but people with utilitarian inclinations conform to a greater extent and more frequently than people with deontological inclinations

The Effects of Individual Moral Inclinations on Group Moral Conformity. I.Z. Marton-Alper, A. Sobeh, S.G. Shamay-Tsoory. Current Research in Behavioral Sciences, July 30 2022, 100078. https://doi.org/10.1016/j.crbeha.2022.100078

Highlights

• Individuals tend to conform to the group's moral judgments even without the presence of the group's members.

• Individuals' moral inclinations affect their conformity tendency.

• People with utilitarian inclinations conform to a greater extent and more frequently than people with deontological inclinations.

Abstract: Conformity has been shown to affect behaviors ranging from attitudes to moral decisions. The current research examined how individual moral inclination (i.e., utilitarian vs. deontological) affects moral conformity in online settings. To this end we designed a trolley-like moral dilemma paradigm in which participants rated moral decisions both individually and after being exposed to other people's ratings. We validated the task with 363 participants, demonstrating that in online settings individuals tend to conform to the group's moral judgments. Using an additional 346 participants, we showed that individual differences influence the conformity tendency, such that people with utilitarian inclinations conform to a greater extent and more frequently than people with deontological inclinations. We conclude that people with prior utilitarian inclinations are more disposed to moral conformity.

Keywords: Conformity; Morality; Utilitarian; Deontological; Online
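The abstract does not spell out how conformity was scored in the rate-then-re-rate paradigm. Purely as an illustration (hypothetical variable names, not the authors' measure), one common way to quantify it is the fraction of a participant's initial disagreement with the group that their second rating closes:

```python
import numpy as np

def conformity_scores(pre, post, group):
    """Hypothetical conformity index: how far the second (post-exposure) rating
    moved toward the group rating, as a fraction of the initial gap.
    pre, post, group: ratings on the same scale, one entry per dilemma."""
    pre, post, group = (np.asarray(x, dtype=float) for x in (pre, post, group))
    gap = group - pre        # initial disagreement with the group
    shift = post - pre       # observed change after exposure to the group's ratings
    with np.errstate(divide="ignore", invalid="ignore"):
        score = np.where(gap != 0, shift / gap, np.nan)  # 1 = full conformity, 0 = none
    return score

# Example: a participant who first rated 3, saw a group rating of 6, then re-rated 5
print(conformity_scores([3], [5], [6]))  # -> [0.666...]
```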


Adolescent and young adult daily mobility patterns were moderately to highly heritable

Individual differences in adolescent and young adult daily mobility patterns and their relationships to big five personality traits: a behavioral genetic analysis. Jordan D. Alexander et al. Journal of Research in Personality, July 29 2022, 104277. https://doi.org/10.1016/j.jrp.2022.104277

Abstract: Youth behavior changes and their relationships to personality have generally been investigated using self-report studies, which are subject to reporting biases and confounding variables. Supplementing these with objective measures, like GPS location data, and twin-based research designs, which help control for confounding genetic and environmental influences, may allow for more rigorous, causally informative research on adolescent behavior patterns. To investigate this possibility, this study aimed to (1) investigate whether behavior changes during the transition from adolescence to emerging adulthood are evident in changing mobility patterns, (2) estimate the influence of adolescent personality on mobility patterns, and (3) estimate genetic and environmental influences on mobility, personality, and the relationship between them. Twins aged 14 to 22 (N=709, 55% female) provided a baseline personality measure, the Big Five Inventory, and multiple years of smartphone GPS data from June 2016 to December 2019. Mobility, as measured by daily locations visited and distance travelled, was found via mixed effects models to increase during adolescence before declining slightly in emerging adulthood. Mobility was positively associated with Extraversion and Conscientiousness (r = 0.17 to 0.25 and r = 0.10 to 0.16, respectively) and negatively with Openness (r = -0.11 to -0.13). ACE models found large genetic (A = 0.56 - 0.81) and small-moderate environmental (C of 0.12 - 0.28, E of 0.07 - 0.15) influences on mobility. A and E influences were highly shared across mobility measures (rg = 0.70, re = 0.58). Associations between mobility and personality were partially explained by mutual genetic influences (rg of -0.27 - 0.53). Results show that as autonomy increases during adolescence and emerging adulthood, we see corresponding increases in youth mobility. Furthermore, the heritability of mobility patterns and their relationship to personality demonstrate that mobility patterns are informative, psychologically meaningful behaviors worthy of continued interest in psychology.
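For readers unfamiliar with the ACE notation: estimates like these typically come from structural equation models fitted to MZ and DZ twin covariances, but the logic can be sketched with Falconer's classic approximation. The numbers below are made up (not the paper's data) and merely happen to land in the reported ranges.

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's approximation to the ACE decomposition from twin correlations:
    A (additive genetic), C (shared environment), E (non-shared environment + error)."""
    A = 2 * (r_mz - r_dz)   # MZ twins share ~100% of segregating genes, DZ ~50%
    C = r_mz - A            # shared environment: MZ similarity not explained by A
    E = 1 - r_mz            # whatever makes MZ twins differ at all
    return A, C, E

# Illustrative (made-up) twin correlations for a mobility measure:
print(falconer_ace(r_mz=0.85, r_dz=0.50))  # -> (0.70, 0.15, 0.15)
```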


Introduction

In many cultures, late adolescence is the first period of substantial autonomy during the lifespan. Adolescents spend less time with their parents and more time with their peers and exert far greater control over their daily lives and activities than in childhood (Steinberg & Morris, 2001). In the United States and other western countries, developmental milestones like learning to drive, beginning to work, attending college, and leaving home all take place during late adolescence and further contribute to this expansion of autonomy (Remschmidt, 1994). As adolescents grow increasingly autonomous, adolescent personality plays a greater role in their daily experiences, behavior patterns, and life experiences (Johnson et al., 2013, McAdams et al., 2013). For example, adolescent personality is predictive of engagement in social activities, academic or career aspirations, artistic expression, and interest in recreational drug use (DeYoung et al., 2008; Wrzus et al., 2013). Additionally, life events under some degree of an adolescent’s control, like school suspensions, breaking up with a romantic partner, and starting or losing a job are also significantly associated with adolescent personality (Billig et al., 1996).


As behavior patterns which emerge during adolescence, such as eating habits, exercise, substance use, and sexual decision making, are highly predictive of important health outcomes, understanding how factors like personality contribute to their development carries significant scientific and public health implications (Alberga et al., 2012, Chambers et al., 2003, Sawyer et al., 2018). Understanding how adolescents move through and engage with their environments can help scientists, clinicians, and policy makers understand risk trajectories, identify at risk individuals, and design interventions to reduce the incidence of health problems like obesity or substance use.


Psychologists have historically relied on observational, self-report-based studies to understand developmental changes in adolescent behavior patterns. Self-report surveys are efficient to administer and adaptable to a wide variety of psychological constructs; they have helped us glean important insights into how adolescents’ daily activities change and how they are influenced by factors like personality (Csikszentmihalyi et al., 2014, Wrzus et al., 2013). However, while self-report based observational studies have proven useful, they come with methodological limitations that limit our ability to draw generalizable conclusions. For instance, they do not directly measure behavior, are subject to response biases, are limited by participant self-knowledge, and are often burdensome for participants to complete (Paulus & Vazire, 2007). Additionally, observational research is prone to confounding variables which can produce spurious correlations and render interpretation particularly difficult (Grimes & Schulz, 2002).


The limitations of self-report data can in part be mitigated through additional measures which are less prone to the biases associated with self-report. Smartphone GPS data, for example, can be used to unobtrusively observe and quantify aspects of participants’ daily activities (Harari et al., 2016; Miller, 2012). Smartphone data offer standardized, objective measures of participants’ locations and movement patterns which may be useful in corroborating the findings of existing research on adolescents’ daily lives. Previous research has demonstrated that human mobility patterns can be reliably measured using GPS data (Andrade et al., 2019) and that such patterns are meaningfully related to personality and daily activities in adolescence and young adulthood. Several studies have reported relationships between daily mobility patterns and personality traits in adolescence and young adulthood (Ai et al., 2019, Alessandretti et al., 2018, Stachl et al., 2020). Additionally, mobility based measures have been used to predict adolescent psychological and health outcomes like alcohol use, affect, anxiety and depression symptoms, and sleep patterns in adolescent and college aged samples (Jacobson and Bhattacharya, 2022, Ren et al., 2022, Santani et al., 2018, Sathyanarayana et al., 2016).


However, existing research has been conducted over short time spans in relatively small samples of adolescents, and research observing mobility patterns over the course of adolescence has yet to be conducted. Hence it remains an open question how mobility patterns change during this period of growing autonomy. Such information can help inform claims about how daily life changes during adolescence and help provide further information about whether daily mobility patterns contain useful information about human behavior over longer time spans.


Such research can be further improved by using twin data, which can help us understand where individual differences in adolescents’ daily mobility patterns come from and how they are related to potential explanatory variables like personality. Twin data allows researchers to measure the extent of genetic and environmental contributions to variation in a trait or behavior. Additionally, multivariate behavioral genetic models using twin data can assess whether associations between traits result from mutual genetic or environmental influences. Hence, twin studies can help alleviate the problem of confounding variables in observational research by providing additional understanding of the nature and origins of correlational patterns: helping to parse the extent to which associations between variables are explained by genetic, shared environmental, or non-shared environmental factors (McGue et al., 2010). Twin-based analyses can thereby offer evidence for whether adolescent mobility patterns stem more from heritable traits, such as their preferences for particular activities, or from aspects of their environment, such as how many kilometers away from school they live. Furthermore, measuring the degree of overlapping genetic influences on mobility and personality can offer further insight into why mobility might be heritable, perhaps partly due to the influence of other heritable behavioral traits, like personality.


The present study thus had three primary aims. First, to assess whether changes in autonomy and daily activities which occur during adolescence and emerging adulthood are reflected in adolescent mobility patterns. Second, to investigate how changes in mobility are related to adolescent personality. Third and finally, to estimate how mobility and its relationship to personality are influenced by genetic, shared environmental, and non-shared environmental factors.


Saturday, July 30, 2022

From 2012... Party Evolutions in Moral Intuitions: A Text-Analysis of US Political Party Platforms from 1856-2008

Motyl, Matt, Party Evolutions in Moral Intuitions: A Text-Analysis of US Political Party Platforms from 1856-2008 (October 8, 2012). SSRN: http://dx.doi.org/10.2139/ssrn.2158893

Abstract: The theory of political realignments has been debated since its conception. The prevailing perspective is that critical political realignments generally do not occur in the way they were initially described. Rather, it seems more likely that if political realignments occur, they tend to be more gradual, secular realignments akin to issue evolution where parties slowly change their positions on issues over time. The focus of this analysis is on examining how each party’s moral intuitions change, as determined by a textual analysis of the Democratic and Republican parties’ platforms from 1856-2008. The data suggest that, in general, the usage of words related to each of 5 moral intuitions moves together over time for both parties. However, in 1896 and 1932, the Democratic and Republican parties diverge in their emphasis on harm, fairness, and authority-related moral intuitions. Furthermore, it appears that the clearest instance of party evolution in their moral intuitions occurred between 1896 and 1932, suggesting a fundamental shift in both parties’ views of the federal government’s role in promoting individual welfare.

Keywords: morality, moral foundations, moral intuitions, polarization, realignment, party platforms, Democrat, Republican
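Analyses of this kind are usually dictionary-based: count how often each platform uses words from a moral-foundations word list and track the rates over time. Below is a minimal sketch with a toy dictionary; the real Moral Foundations Dictionary is far larger, and this is not the paper's code.

```python
import re
from collections import Counter

# Toy stand-in for a moral-foundations dictionary (illustrative words only).
TOY_MFD = {
    "harm":      {"harm", "suffer", "protect", "cruel"},
    "fairness":  {"fair", "equal", "justice", "rights"},
    "authority": {"law", "order", "duty", "obey"},
}

def foundation_rates(text):
    """Per-1000-word usage rate of each foundation's words in a platform text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {f: 1000 * sum(counts[w] for w in words) / total
            for f, words in TOY_MFD.items()}

print(foundation_rates("We demand equal rights under the law and protection from harm."))
```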





Women tend to consume luxury goods as a way to signal their mating standards to men and thereby deter undesirable pursuers

The Mate Screening Motive: How Women Use Luxury Consumption to Signal to Men. Qihui Chen, Yajin Wang, Nailya Ordabayeva. Journal of Consumer Research, ucac034, July 26 2022. https://doi.org/10.1093/jcr/ucac034

Abstract: Previous research has found that for men, activating a mating motive increases luxury consumption as a way to attract a romantic partner. However, little is known about the role of luxury consumption in women’s romantic endeavors. The present research conceptualizes a mate screening motive which explains how women use luxury consumption to romantically signal to men. Six studies and two follow-ups conducted in controlled and field settings show that the mate screening motive boosts women’s consumption of luxury goods as a way to signal their mating standards to men and thereby deter undesirable pursuers. The effect is diminished when mate screening is less necessary such as when external screening tools are available (e.g., screening filters on dating websites), the quality of potential mates is high, and the focus is on selecting a desirable partner rather than deterring undesirable pursuers. The findings have important implications for understanding how consumers use products and brands in romantic relationships, and for designing marketing strategies and communication for luxury brands, commercial dating services, and dating apps. Our findings also provide insights for consumers on how to use brands and products as effective communication devices in romantic endeavors.


Keywords: mating motive, mate screening motive, romantic relationship, luxury consumption, conspicuous consumption


The Economic Limits of Bitcoin and Anonymous, Decentralized Trust on the Blockchain

Budish, Eric B., The Economic Limits of Bitcoin and Anonymous, Decentralized Trust on the Blockchain (June 27, 2022). University of Chicago, Becker Friedman Institute for Economics Working Paper No. 83, 2022. SSRN: http://dx.doi.org/10.2139/ssrn.4148014

Abstract: Satoshi Nakamoto invented a new form of trust. This paper presents a three-equation argument that Nakamoto’s new form of trust, while undeniably ingenious, is extremely expensive: the recurring, 'flow' payments to the anonymous, decentralized compute power that maintains the trust must be large relative to the one-off, 'stock' benefits of attacking the trust. This result also implies that the cost of securing the trust grows linearly with the potential value of attack — e.g., securing against a $1 billion attack is 1000 times more expensive than securing against a $1 million attack. A way out of this flow-stock argument is if both (i) the compute power used to maintain the trust is non-repurposable, and (ii) a successful attack would cause the economic value of the trust to collapse. However, vulnerability to economic collapse is itself a serious problem, and the model points to specific collapse scenarios. The analysis thus suggests a 'pick your poison' economic critique of Bitcoin and its novel form of trust: it is either extremely expensive relative to its economic usefulness or vulnerable to sabotage and collapse.
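The flow-stock claim in the abstract can be written schematically as follows, in my notation (a sketch of the logic, not necessarily the paper's exact equations (1)-(3)): let p_block be the recurring per-block payment to miners, A the effective number of blocks an attacker must out-mine (attack duration or escrow length), and V_attack the one-off value of a successful attack.

```latex
\underbrace{A \cdot p_{\text{block}}}_{\text{recurring ``flow'' cost of maintaining trust}}
\;\gtrsim\;
\underbrace{V_{\text{attack}}}_{\text{one-off ``stock'' value of attacking}}
\qquad\Longrightarrow\qquad
p_{\text{block}} \;\gtrsim\; \frac{V_{\text{attack}}}{A}.
```

Because A is bounded by confirmation/escrow conventions, the required flow payment scales linearly in V_attack, which is exactly the $1 million versus $1 billion comparison in the abstract.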


A Discussion of Responses to this Paper’s Argument

This paper first circulated in shorter form in June 2018. I received a lot of comments and counterarguments in response to the paper’s main line of argument. I have tried to handle the central line of counter-argument throughout the main text of this updated draft. This is the point made by Huberman, Leshno and Moallemi (2021) and many practitioners, that we should compare Bitcoin’s costs to the costs of market power in traditional finance, which are also high. [24] I hope the present draft of the text makes clearer the conditional nature of the paper’s argument: if Bitcoin becomes more economically useful, then it will have to get even more expensive, linearly, or it will be vulnerable to attack. I hope as well that the more explicit computational simulations, for varying levels of V_attack all the way up to $100 billion, make clear that the way Bitcoin’s security cost model scales is importantly different from how costs scale for traditional finance protected by rule-of-law. In this appendix I discuss several of the other most common comments and counter-arguments I have received about this paper since it was first circulated.

A.1 Community

As noted above in Section 5, a majority attack on Bitcoin, or any other major cryptocurrency, would be widely noticed. A line of argument I heard frequently in response to the June 2018 draft is that the Bitcoin community would organize a response to the attack. For example, the community could organize a “hard fork” off of the state of the blockchain just prior to the attack, which would include all transactions perceived to be valid, void any perceived-as-invalid transactions, possibly confiscate or void the attacker’s other Bitcoin holdings if these are traceable, and possibly change the hash function or find some other way to ignore or circumvent the attacker’s majority of compute power. [25] The community response argument seems valid as an argument that attacks might be more expensive or difficult to execute than is modeled here, but it raises two important issues. First, and most obviously, the argument contradicts the notion of anonymous, decentralized trust. It relies on a specific set of trusted individuals in the Bitcoin community. Second, consider the community response argument from the perspective of a traditional financial institution. In the event of a large-scale attack that involves billions of dollars, the traditional financial institution would, in this telling, be left in the hands of the Bitcoin community. At present, reliance on a tight-knit community of those most invested in Bitcoin (whether financially, intellectually, etc.) may sound reassuring — those with the most to lose would rally together to save it. But now imagine the hypothetical future in which Bitcoin becomes a more integral part of the global financial system, and imagine there is a fight over whether an entity like a Goldman Sachs is entitled to billions of dollars worth of Bitcoin that it believes was stolen — but the longest chain says otherwise. Will the “vampire squid” be made whole by the “Bitcoin community”? Quite possibly, but one can hopefully see the potential weakness of relying on an amorphous community as a source of trust for global finance.

[24] See Philippon (2015) and Greenwood and Scharfstein (2013) on high costs of traditional finance, and see Cochrane (2013) for a counterpoint.

[25] The phrase “hard fork” means that in addition to coordinating on a particular fork of a blockchain if there are multiple — in this case, the attacker’s chain, which is the longest, and the chain the community is urging be coordinated on in response — the code used by miners is updated as well. This could include hard-coded state information such as the new chain or information about voided Bitcoins held by the attacker, code updates such as a new hash function, etc.

A.2 Rule of Law

A related line of argument is that, in the event of a large-scale attack specifically on a financial institution such as a bank or exchange, rule of law would step in. For example, the financial institutions depicted as the victims of a double-spend in Figure 2, once they realize they no longer have the Bitcoins paid to them because of the attack, would obtain help from rule-of-law tracing down the attacker and recovering the stolen funds. This response, too, seems internally valid while contradicting the idea of anonymous, decentralized trust. It also seems particularly guilty of wanting to “have your cake and eat it too.” In this view, cryptocurrencies are mostly based on anonymous, decentralized trust — hence evading most forms of scrutiny by regulators and law enforcement — but, if there is a large attack, then rule-of-law will come to the rescue.

A.3 Counterattacks

Moroz et al. (2020) extend the analysis in Budish (2018) to enable the victim of a double-spending attack to attack back. They consider a game in which there is an Attacker and a Defender. If the Attacker double spends against the Defender for v dollars, the Defender can then retaliate, themselves organizing a 51% or more majority, to attack back so that the original honest chain becomes the longest chain again. This allows the Defender to recover their property. For example, suppose the escrow period is 6, denote the initial double-spend transaction as taking place in block 1, and suppose the attacker chain replaces the honest chain as soon as the escrow period elapses, as in Figure 2. Notationally, suppose the honest chain consists of blocks {1, 2, ..., 7} at the time the honest chain is replaced, and the attacker chain that replaces it is {1’, 2’, ..., 7’, 8’}. If the Defender can quickly organize a majority of their own, then they can build off of the {1, 2, ..., 7} chain, and eventually surpass the attacker chain, recovering their property. For example, maybe the honest chain reaches block 10 before the Attacker chain reaches block 10’, so then {1, 2, ..., 10} is the new longest chain and the Defender has their property back from the correct transaction in block 1. This argument is game theoretically valid, and indeed there are theoretical subtleties to the argument that the reader can appreciate for themselves in the paper. That said, it relies on every large-scale participant in the Bitcoin system being able and willing to conduct a 51% attack on a moment’s notice. This is kind of like requiring every major financial institution to have not just security guards, but access to a standing army.

A.4 Modification to Nakamoto I: Increase Throughput

Bitcoin processes about 2000 transactions per block, which is about 288,000 per day or 105 million per year. In contrast, Visa processes about 165 billion transactions per year (Visa, 2021). The reader will notice that the logic in equations (1)-(3) does not depend directly on the number of transactions in a block. If the number of transactions in a Bitcoin block were to increase by 1000x (to roughly Visa’s level), then the required p_block to keep Bitcoin secure against a given scale of attack V_attack, per equation (3), would not change. Thus, the required cost per transaction to keep Bitcoin secure against a given scale of attack would decline by a factor of 1000. In this scenario of a 1000x throughput increase, Bitcoin’s security costs per transaction are still large, but less astonishingly so. In the base case, to secure Bitcoin against a $1 billion attack would require costs per transaction of $31 instead of $31,000. To secure against a $100 billion attack would require costs per transaction of $3,100 instead of $3.1 million. A subtlety is that as the number of transactions per block grows, so too might the scope for attack. That is, V_attack might grow as well. Still, this seems a promising response to the logic of this paper. A particularly interesting variation on this idea is the paradigm called “Level 2.” In this paradigm, the Bitcoin blockchain (“Level 1”) would be used for relatively large transactions, but smaller transactions would be conducted off-chain, possibly supported with traditional forms of trust, with just occasional netting on the main Bitcoin blockchain. In this paradigm, as well, the large transactions on chain could also have a long escrow period, making attacks more expensive. [26]

[26] I thank Neha Narula for several helpful conversations about this approach.
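The scaling in this passage is simple enough to check in a few lines. The sketch below merely anchors on the text's base-case figure (roughly $31,000 per transaction to secure against a $1 billion attack at about 2,000 transactions per block) and applies the stated linear-in-V_attack, inverse-in-throughput scaling; it is not the paper's simulation.

```python
BASE_COST_PER_TX  = 31_000   # $ per transaction, base case stated in the text
BASE_V_ATTACK     = 1e9      # $1 billion attack, base case
BASE_TX_PER_BLOCK = 2_000    # roughly Bitcoin's current throughput

def cost_per_tx(v_attack, tx_per_block):
    """Security cost per transaction: linear in attack value, inverse in per-block
    throughput, anchored to the base case quoted in the text."""
    return (BASE_COST_PER_TX
            * (v_attack / BASE_V_ATTACK)
            * (BASE_TX_PER_BLOCK / tx_per_block))

print(cost_per_tx(1e9,  2_000))       # $31,000     (base case)
print(cost_per_tx(1e9,  2_000_000))   # $31         (1000x throughput)
print(cost_per_tx(1e11, 2_000))       # $3.1 million ($100B attack)
print(cost_per_tx(1e11, 2_000_000))   # $3,100      ($100B attack, 1000x throughput)
```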

A.5 Modification to Nakamoto II: Tweak Longest-Chain Convention

The discussion above in A.1 expressed skepticism about the “community” response to the logic of this paper. However, what about modifying the longest-chain convention to try to encode what the community would want to do in the event of an attack? The modification to the longest-chain convention could take advantage of two specific features of double-spending attacks:

1. The Attacker has to sign transactions both to the victim of the double-spending attack — call this the Bank — and to another account they control — call this the Cousin account. The fact that there are multiple signed transactions for the same funds is an initial proof that something suspicious has happened.

2. The Attacker has to make the signed transaction to the Bank public significantly before — in “real-world clock time” — the signed transaction to their Cousin account.

The difficulty with just using facts #1 and #2 to void the transaction to the Cousin is alluded to with the phrase “real-world clock time.” Part of what the Nakamoto (2008) blockchain innovation accomplishes is a sequencing of data that does not rely on an external, trusted, time-stamping device. Relatedly, the difficulty with just using fact #1 and having the policy “if there are multiple correctly signed transactions sending the same funds, destroy the funds” is that the victim of the double-spending attack, the Bank, will by now have sent real-world financial assets to the Attacker — and this transaction, in the real world (off the blockchain), cannot be voided no matter how we modify the blockchain protocol. A different way to put the concern is that such a policy would allow any party that sends funds on the blockchain in exchange for goods or financial assets off the blockchain to then void the counterparty’s received funds after the fact. This seems a recipe for sabotage of the traditional financial sector. The open question, then, is whether the protocol can be modified so that in the event of fact #1, multiple signed transactions, there is some way to appeal to fact #2, grounded in the sequencing of events in real-world clock time, not adjudicated by the longest-chain convention’s determination of the sequence of events.
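Fact #1 — that a double spend necessarily leaves two validly signed transactions spending the same funds — is the mechanically detectable part. The hedged sketch below (toy data structures, not Bitcoin's actual transaction model or any proposed protocol change) shows that check; fact #2, the real-world ordering, is exactly what the protocol has no trusted access to.

```python
from collections import defaultdict

def conflicting_spends(transactions):
    """Group signed transactions by the funds (outpoint) they spend.
    Any outpoint with more than one signed spend is evidence of fact #1.
    `transactions`: iterable of dicts like {"txid": ..., "spends": [...]} (toy format)."""
    by_outpoint = defaultdict(list)
    for tx in transactions:
        for outpoint in tx["spends"]:
            by_outpoint[outpoint].append(tx["txid"])
    return {op: txids for op, txids in by_outpoint.items() if len(txids) > 1}

txs = [
    {"txid": "pay_bank",   "spends": ["coin_1"]},   # payment to the victim (the "Bank")
    {"txid": "pay_cousin", "spends": ["coin_1"]},   # attacker's later spend to the "Cousin"
]
print(conflicting_spends(txs))   # {'coin_1': ['pay_bank', 'pay_cousin']}
# What this cannot tell you is which spend came first in real-world clock time (fact #2).
```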

One pursuit along these lines is Leshno, Pass and Shi (in preparation).

A.6 A Different Consensus Protocol: Proof-of-Stake

Proof-of-stake is widely discussed as an alternative consensus protocol to Nakamoto’s (2008) proof-of-work. In this paradigm, rather than earning the probabilistic right to validate blocks from performing computational work, one earns the probabilistic right to validate blocks from locking up stake in the cryptocurrency. The usual motivation for proof-of-stake relative to proof-of-work — the deadweight loss and environmental harm associated with proof-of-work mining, which as noted currently utilizes about 0.3-0.8% of global electricity consumption — is in fact completely orthogonal to the concerns in this paper. In its simplest form, proof-of-stake is vulnerable to exactly the same critique (1)-(3) as proof-of-work. Just conceptualize c as the rental cost of stake (i.e., the opportunity cost of locking up one unit of the cryptocurrency), as opposed to the rental cost of capital plus variable electricity cost of running the capital. The amount of stake that will be locked up for validation will depend on the compensation to stakers, as in equation (1). This amount of stake in turn determines the level of security against majority attack, as in equation (2). Thus, equation (3) obtains, with the per-block compensation to stakers needing to be large relative to the value of a majority attack. See Gans and Gandal (2019). However, while in its simplest form proof-of-stake is vulnerable to the same economic limits as proof-of-work, the use of stakes rather than computational work may open new possibilities for establishing trust and thwarting attacks. The advantage is that stakes, unlike computational work, have memory. It is possible, for instance, to grant more trust to stakes that have been locked up for a long period of time, and that have never behaved suspiciously (see Appendix A.5 just above), than to stakes that have only recently been locked up. Stakes can also be algorithmically confiscated by the protocol, whereas ASIC machines exist in the “real world”, outside of the grasp of the protocol. Thus, it seems possible that proof-of-stake could make majority attack significantly more expensive (relative to the level of economic activity) than it is under proof-of-work. That said, proof-of-stake has other potential weaknesses relative to proof-of-work, such as the “nothing-at-stake” and “grinding” problems, and its game-theoretic foundations are less well understood. See Halaburda et al. (forthcoming), Section 3.6 for a detailed discussion, and Saleh (2021) for an early game-theoretic analysis. Notably, Ethereum, the second-largest cryptocurrency after Bitcoin, has been considering a move to proof-of-stake for some time. See Buterin (2014, 2016, 2020); Buterin and Griffith (2019). Much of the other research on proof-of-stake also seems to be happening outside of the traditional academic process. It will be interesting to see if a proof-of-stake protocol proves to be a convincing response to the logic of this paper.

Countries with a larger fraction of people with very strict civic norms have proportionally more societal-level rule violations; if perceived norms are so strict that they do not differentiate between small and large violations, then, conditional on a violation occurring, a large violation is individually optimal

Social norms and dishonesty across societies. Diego Aycinena et al. Proceedings of the National Academy of Sciences, July 28, 2022. 119 (31) e2120138119. https://doi.org/10.1073/pnas.2120138119


Significance: Much of the research in the experimental and behavioral sciences finds that stronger prosocial norms lead to higher levels of prosocial behavior. Here, we show that very strict prosocial norms are negatively correlated with prosocial behavior. Using laboratory experiments on honesty, we demonstrate that individuals who hold very strict norms of honesty are more likely to lie to the maximal extent. Further, countries with a larger fraction of people with very strict civic norms have proportionally more societal-level rule violations. We show that our findings are consistent with a simple behavioral rationale. If perceived norms are so strict that they do not differentiate between small and large violations, then, conditional on a violation occurring, a large violation is individually optimal.


Abstract: Social norms have long been recognized as an important factor in curtailing antisocial behavior, and stricter prosocial norms are commonly associated with increased prosocial behavior. In this study, we provide evidence that very strict prosocial norms can have a perverse negative relationship with prosocial behavior. In laboratory experiments conducted in 10 countries across 5 continents, we measured the level of honest behavior and elicited injunctive norms of honesty. We find that individuals who hold very strict norms (i.e., those who perceive a small lie to be as socially unacceptable as a large lie) are more likely to lie to the maximal extent possible. This finding is consistent with a simple behavioral rationale. If the perceived norm does not differentiate between the severity of a lie, lying to the full extent is optimal for a norm violator since it maximizes the financial gain, while the perceived costs of the norm violation are unchanged. We show that the relation between very strict prosocial norms and high levels of rule violations generalizes to civic norms related to common moral dilemmas, such as tax evasion, cheating on government benefits, and fare dodging on public transportation. Those with very strict attitudes toward civic norms are more likely to lie to the maximal extent possible. A similar relation holds across countries. Countries with a larger fraction of people with very strict attitudes toward civic norms have a higher society-level prevalence of rule violations.
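The "behavioral rationale" in the significance statement can be made concrete with a toy decision model (all numbers hypothetical, not the paper's experimental payoffs): if the perceived social cost of lying is flat in the size of the lie, the payoff-maximizing lie is the maximal one; if the cost grows with the size of the lie, small or zero lies can be optimal.

```python
def best_lie(payoff_per_unit, cost_fn, max_lie=10):
    """Return the lie size (0..max_lie) maximizing monetary payoff minus perceived norm cost."""
    return max(range(max_lie + 1),
               key=lambda x: payoff_per_unit * x - cost_fn(x))

flat_cost   = lambda x: 5 if x > 0 else 0   # "any lie is equally unacceptable"
graded_cost = lambda x: 1.5 * x             # bigger lies are perceived as proportionally worse

print(best_lie(1, flat_cost))    # -> 10 : conditional on lying at all, the maximal lie pays best
print(best_lie(1, graded_cost))  # -> 0  : with graded costs, here not lying at all is optimal
```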


Friday, July 29, 2022

Toxoplasma-infected women scored higher in tribalism and lower in cultural liberalism, compared with the Toxoplasma-free control group; infected men scored higher in economic equity

Le Petit Machiavellian Prince: Effects of Latent Toxoplasmosis on Political Beliefs and Values. Robin Kopecky et al. Evolutionary Psychology, July 29, 2022. https://doi.org/10.1177/14747049221112657

Abstract: Humans infected by Toxoplasma gondii express no specific symptoms but manifest a higher incidence of many diseases and disorders and differences in personality and behavior. The aim of this study was to compare the political beliefs and values of Toxoplasma-infected and Toxoplasma-free participants. We measured beliefs and values of 2,315 respondents via an online survey (477 Toxoplasma-infected) using the Political Beliefs and Values Inventory (PI34). This study showed that Toxoplasma-infected and Toxoplasma-free participants of our cross-sectional study differed in three of four factors of PI34, with infected participants scoring higher in Tribalism and lower in Cultural liberalism and Anti-Authoritarianism. We found sex differences in political beliefs associated with Toxoplasma infection. Infected women scored higher in tribalism and lower in cultural liberalism, compared with the Toxoplasma-free control group, while infected men scored higher in economic equity. These results fit with sexual differences in behavior and attitude observed after toxoplasmosis infection. Controlling for the effect of worse physical health and mental health had little impact, suggesting that impaired health did not cause these changes. Rather than adaptation to the prevalence of parasites, as suggested by parasite-stress theory, the differences might be side effects of a long-term mild inflammatory reaction. However, to get a clear picture of the mild inflammation effects, more research focused on different infectious diseases is needed.

Keywords: Toxoplasma gondii, manipulation hypothesis, political beliefs, stress, infectious diseases, parasite threat, pathogen avoidance

The present study showed that Toxoplasma-infected and Toxoplasma-free participants of our cross-sectional study differed in three of four factors measured with the Political Beliefs and Values Inventory, with infected participants scoring higher in Tribalism and lower in Cultural liberalism and Anti-authoritarianism. These results are in line with previous broad research showing that individuals in parasite-affected areas are more likely to be conservative and authoritarian (Murray et al., 2013).

Furthermore, we observed sex differences in the studied factors associated with the Toxoplasma infection. Indeed, Toxoplasma-infected men scored higher in Economic Equity, showing a preference for a more equal and less competitive society, while women infected with toxoplasmosis scored higher in tribalism and lower in cultural liberalism. These associations were not reduced when the effects of worse physical and mental health were controlled, suggesting that the impaired health of infected subjects is not the cause of the changes in political beliefs. The same conclusion was also supported by the fact that the changes go in the same direction in men and women, since stress-coping-associated behavioral and personality changes mostly go in different directions in men and women.

It was suggested by Lindová et al. (2006, 2012) that these associations might be the result of a mild chronic stress caused by the toxoplasmosis infection rather than of the toxoplasmosis itself. The presence of chronic stress would explain not only the behavioral and political differences from the non-infected control group, but also the sex differences in these behaviors and ideologies, as the two sexes respond differently to chronic stress, with differences in the immune response and in the coping strategies used. Many of the behavioral changes observed in toxoplasmosis-infected people correlate with the function of dopamine in the brain, and they may have broader implications, including for political ideologies. In line with our results, previous studies (Flegr et al., 2003; Skallová et al., 2005) showed that infected subjects scored lower in novelty seeking, a factor that contributes to conservative political opinions (Carney et al., 2008). Indeed, in our sample the infection was associated with higher tribalism and lower cultural liberalism, specifically in women. While we expected differences in the political ideologies of infected men and women, we did not expect a higher score in economic equity in infected men. Typically, men affected with toxoplasmosis showed higher risk propensity and higher entrepreneurial activity (Johnson et al., 2018), more compatible with a competitive type of economy. The association of toxoplasmosis and the preference for an egalitarian economy in men needs to be better explored in future work.

Several studies have found that societies that are more affected by infectious pathogens also exhibit higher levels of conservative political attitudes such as xenophobia and traditionalism (Bennett & Nikolaev, 2020; Murray et al., 2013; Nikolaev & Salahodjaev, 2017; Thornhill et al., 2009). Similar results have also been found in our study, performed at the individual level. The hypothesis that has been proposed is that these attitudes are connected to pathogen-avoidance behaviors aimed at minimizing contact with outsiders (intergroup effect) (Aarøe et al., 2017), who may be carrying new pathogens, as well as the maintenance of social traditions that may serve to help protect against pathogens (intragroup effect) (Fincher & Thornhill, 2012), with evidence from a recent cross-national study favoring the intragroup effect (Tybur et al., 2016). Significantly, however, both effects – and the intragroup effect in particular – seem potentially open to the interpretation that they are generalized responses to stress, rather than to a pathogen (Brown et al., 2016; Currie & Mace, 2012; Hruschka & Henrich, 2013; Ma, 2020).

While the present study examines differences found in the association between a parasitic infection and political values in the context of increased stress at the individual level, the results can be seen as an alternative to the parasite-stress theory (Fincher et al., 2008; Thornhill & Fincher, 2014) for the following two main reasons. First, parasite-stress theory aims to be the ultimate evolutionary explanation of changes in traits that differ with varied geographical parasite stress levels, yet this study focuses directly on the difference between actually infected and non-infected subjects in one small region, where the intensity of the parasite stress is mostly constant (and low). Second, it has been shown that primarily non-zoonotic diseases have a relation to human personality and societal values (Thornhill et al., 2010). However, toxoplasmosis is primarily a zoonotic disease with very specific and limited spread between people – the only interpersonal route of infection suggested is from male to female or between two male partners through sexual transmission (Flegr, Klapilová, et al., 2014; Flegr, Prandota, et al., 2014; Hlaváčová et al., 2021; Kaňková et al., 2020).

This being said, there might be a possible connection between the present study and the parasite-stress theory after all. An extensive body of research confirms associations between infectious (and in most cases parasitic) diseases and changes in the personality profile of animals, from molluscs (Seaman & Briffa, 2015) and minnows (Kekäläinen et al., 2013) to migratory birds (Marinov et al., 2017) and mammals (Boyer et al., 2010), including humans (Webster, 2001). There are also well-studied associations between personality traits and political views, e.g. (Furnham & Fenton-O’Creevy, 2018; Harell et al., 2021; Verhulst et al., 2010; Wang, 2016). While the direction of causality needs to be studied further (Bakker et al., 2021) and while the human-centred field of parasite-induced changes in personality traits is regrettably understudied and quite complex (Friedman, 2008), we might expect at least some effect of infectious diseases on political attitudes caused by shifts in personality traits. A possibility thus exists that at least part of the reported difference in political attitudes in countries with different parasitic disease burdens is not caused by parasite avoidance but results from a significant part of the population being infected with one or multiple pathogens. This hypothesis is supported by studies that linked a change in personality traits with a clear connection to political attitudes (e.g., conservatism) with chronic diseases, although not infectious ones (Mendelsohn et al., 1995; Sutin et al., 2013). On the other hand, some results suggest stronger prediction of personality traits by the historical prevalence of diseases rather than by the current situation, suggesting parasite avoidance as a factor of greater importance in personality shifts (Schaller & Murray, 2008).

Since the available body of literature discussing possible causal relationships between infectious diseases and political beliefs and values is very sparse, this direction of research might provide interesting and important insight into the changing political climate in certain countries. Studies focused on a wider range of infectious diseases besides toxoplasmosis and severe debilitating illnesses such as neurocysticercosis or AIDS would be especially valuable.

The present study showed a 19% prevalence of toxoplasmosis in men and 28% in women. The most recent Czech epidemiological study performed between 2014–2015 (Flegr, 2017) showed the prevalence of 25% in men and 36% in women aged 30–39 years. It is known, however, that prevalence of toxoplasmosis decreases relatively quickly in most developed countries, including the Czech Republic. For example, a large epidemiological study performed on Czech male soldiers 20 years ago found a prevalence of 35% for the age strata 30–35 years (Kolbekova et al., 2007). It is, therefore, possible that the observed seroprevalence reflects the actual situation in the Czech general population.

Limitations

The main limitation of the present study was the fact that the participants of the study were self-selected. Their subpopulation probably represents a specific (more altruistic and more curious) segment of the Czech population, rather than a random sample of the Czech internet population. In addition, people with impairments or severe diseases, as well as those from the lowest socioeconomic strata, were unable to participate. It is therefore not clear to what extent the results can be generalized to the general Czech (or world) population.

Another limitation of the study is the moderate number of Toxoplasma-infected men (90). The reason for the imbalanced sex ratio is that women are often tested for toxoplasmosis during pregnancy, and therefore a larger fraction of women than men know their toxoplasmosis status. Due to the lower number of men, the associations of toxoplasmosis with Cultural liberalism and Anti-authoritarianism were not significant in men, despite being stronger in men than in the more numerous (518) women.

Since this study dealt with the effect of pathogen-caused stress, there is a possible interference from the global COVID-19 pandemic. However, only 6% of the respondents participated in the period between April 2020, when the first infection in Czechia was observed, and the end of data collection in April 2021.

In the present study, we calculated aggregate indices of physical and mental health and used these indices in our statistical models. In future studies, it will be valuable to analyse the effects of individual health-related variables to disentangle the complex relationships between toxoplasmosis, mental and physical health, psychological traits and political beliefs. Such research could also answer important questions related to the causal direction of the observed correlations.

The associations found in the present study are based on correlations, and we cannot infer the direction of causality. It cannot be ruled out that the explanation of the effect runs in the opposite direction, e.g., that higher tribalism itself, by an unknown mechanism, increases the chance of being infected by Toxoplasma gondii. Like nearly all past studies, this one was cross-sectional in nature. It is very difficult to study the relationship between Toxoplasma infection and personality using a longitudinal design. The frequency of Toxoplasma infection in adulthood is low, and thousands of participants would have to be recruited to find the several dozen subjects who acquire the infection during the study. Until such a study is performed, any conclusion about the causality behind the correlation between the infection and human personality must be based only on analogies with animal models (Skallová et al., 2006; Hodková et al., 2007) or on the existence of a correlation between the length of infection and the observed personality trait changes (Flegr et al., 1996, 2000), and must therefore be considered only provisional.

Further research is also needed to better clarify the extension and the implication of the associations we found between toxoplasmosis infection and political ideologies, and to clarify the role of sex differences.

We present a 72-year-old man with a unique profile of disorientation in time, such that he split each day into two 12-h intervals: he had two sets of breakfast, lunch, and dinner, hence the designation “split-day syndrome.”

"Split-day syndrome," a patient with frontotemporal dementia who lives two days in the span of one: a case report and review of articles. Homa Pourriyahi,Mostafa Almasi-Dooghaee,Atefeh Imani,Taravat Vahedi & Babak Zamani. Behavior, Cognition and Neuroscience, Jul 28 2022. https://www.tandfonline.com/doi/abs/10.1080/13554794.2022.2105652

Abstract: Frontotemporal dementia (FTD) is among the most prevalent causes of young-onset dementia. Along with the frontotemporal and striatal atrophy, dopamine dysregulation is also present in FTD. The dopamine system controls mechanisms of time perception. Its depletion can cause miscalculations in the perception of time. We present a 72-year-old man with a unique profile of disorientation in time, such that he split each day into two 12-h intervals. Through each 12-h period, he went about his daily activities as if a complete day had passed, e.g., he had two sets of breakfast, lunch, and dinner, hence the designated “split-day syndrome.”



Our results provide little support for common models of solidarity and suggest that hazing may not be the social glue it has long been assumed to be

Does hazing actually increase group solidarity? Re-examining a classic theory with a modern fraternity. Aldo Cimino, Benjamin J. Thomas. Evolution and Human Behavior, July 29 2022. https://doi.org/10.1016/j.evolhumbehav.2022.07.001

Abstract: Anthropologists and other social scientists have long suggested that severe initiations (hazing) increase group solidarity. Because hazing groups tend to be highly secretive, direct and on-site tests of this hypothesis in the real world are nearly non-existent. Using an American social fraternity, we report a longitudinal test of the relationship between hazing severity and group solidarity. We tracked six sets of fraternity inductees as they underwent the fraternity's months-long induction process. Our results provide little support for common models of solidarity and suggest that hazing may not be the social glue it has long been assumed to be.


Keywords: Hazing; Newcomers; Rites of passage; Fraternities


Sharing Online Content — Even Without Reading It — Inflates Subjective Knowledge

Ward, Adrian F. and Zheng, Frank and Broniarczyk, Susan M., I Share, Therefore I Know? Sharing Online Content — Even Without Reading It — Inflates Subjective Knowledge (June 9, 2022). SSRN: http://dx.doi.org/10.2139/ssrn.4132814

Abstract: Billions of people across the globe use social media to acquire and share information. A large and growing body of research examines how consuming online content affects what people know. The present research investigates a complementary, yet previously unstudied question: how might sharing online content affect what people think they know? We posit that sharing may inflate subjective knowledge through a process of internalized social behavior. Sharing signals expertise; thus, sharers can avoid conflict between their public and private personas by coming to believe that they are as knowledgeable as their posts make them appear. We examine this possibility in the context of “sharing without reading,” a phenomenon that allows us to isolate the effect of sharing on subjective knowledge from any influence of reading or objective knowledge. Six studies provide correlational (study 1) and causal (studies 2, 2a) evidence that sharing—even without reading—increases subjective knowledge, and test the internalization mechanism by varying the degree to which sharing publicly commits the sharer to an expert identity (studies 3-5). A seventh study investigates potential consequences of sharing-inflated subjective knowledge on downstream behavior.

Keywords: subjective knowledge, word of mouth, social media, self-perception


Introduction of Sharia law in northern Nigeria: Decreases in infant mortality through increased vaccination rates, duration of breastfeeding and prenatal health care; there were also increases in primary school enrollment

Islamic Law and Investments in Children: Evidence from the Sharia Introduction in Nigeria. Marco Alfano. Journal of Health Economics, July 21 2022, 102660. https://doi.org/10.1016/j.jhealeco.2022.102660

Abstract: Islamic law lays down detailed rules regulating children’s upbringing. This study examines the effect of such rules on investments in children by analysing the introduction of Sharia law in northern Nigeria. Triple-differences estimates using temporal, geographical and religious variation together with large, representative survey data show decreases in infant mortality. Official government statistics further confirm improvements in survival. Findings also show that Sharia increased vaccination rates, duration of breastfeeding and prenatal health care. Evidence suggests that Sharia improved survival by specifying strict child protection laws and by formalising children’s duty to maintain their parents in old age or in sickness.


JEL: O15 J12 J13

Keywords: Breastfeeding; Infant Survival; Islam; Nigeria

5.3 Primary school enrolment
Panel C of table 2 reports the results pertaining to primary school enrolment. I use information contained in the household questionnaire to merge children to their mothers and select children born 1989 to 1998 (aged between 4 and 13 at the time of interview). In Nigeria, the school year starts in September. Accordingly, I redefine the year of birth and recode children born after September as being born in the following year. The sample consists of 6,125 children, who enrolled between the (school) years 1993/94 and 2002/03.
To calculate the age at which each child started school, I combine information on the years of education a child completed together with his or her age at interview. Only 4% of children aged 6 to 24 repeat a year of school and less than 0.1% of children in the same age bracket drop out (DHS Final Report, 2003). Since their school starting age cannot be precisely calculated, I omit these individuals from the analysis. In Nigeria, children should enrol in school at the age of 6. For the whole country in 2003, school enrolment was relatively low: 46% of girls and 41% of boys aged 6 to 9 had never attended school (DHS Final Report, 2003).
Despite official regulations, children in Nigeria enrol in school at a variety of ages. To illustrate this phenomenon, I select children in school born between 1989 and 1994 (i.e. children who were due to start school before the introduction of the Sharia) and plot the distribution of the ages at which they started school in figure 5. The solid graph relates to children residing in Sharia states, the dashed to children in the rest of the country. In both samples, less than a quarter of the children who enrol in school do so at the age of six. Almost 40% start school before that age and around a third begin school aged 7 or older. To take account of the aforementioned variation in the age at which children start school, together with the legal requirement to start school at the age of six, I define the dependent variable as taking the value 1 if a child entered school between the ages of 4 and 6. Among children due to enter school before the introduction of the Sharia, 43% entered school between the ages of 4 and 6.
The difference-in-differences estimates in panel C of table 2 indicate that in states that introduced the Sharia, the probability of school enrolment (aged 6 or younger) increased after the Sharia by 8 to 10 percentage points. As before, the effect is robust to various specifications (columns 1 to 3). In contrast, the probability of school enrolment before the age of 6 hardly changed in the rest of the country after the introduction of the Sharia. The triple-differences estimates in column 5 suggest that the Sharia increased the probability of children enrolling in school between the ages of 4 and 6 by around 15 percentage points. For the partitioned ethnicities sample, the parameter estimates are slightly larger, around 22 percentage points.
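For readers who want the shape of the estimator, a schematic triple-differences specification implied by the abstract's "temporal, geographical and religious variation" could look like the sketch below. Variable names and data are hypothetical and synthetic; this is not the paper's model, which includes controls and fixed effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Tiny synthetic child-level dataset, just to show the shape of the specification.
rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "sharia_state": rng.integers(0, 2, n),   # state adopted Sharia (geographical variation)
    "post":         rng.integers(0, 2, n),   # due to enrol in/after 2000/01 (temporal variation)
    "muslim":       rng.integers(0, 2, n),   # religious variation
})
effect = 0.15 * df.sharia_state * df.post * df.muslim          # plant a ~15 pp effect
df["enrolled_4to6"] = (rng.random(n) < 0.43 + effect).astype(int)

# The coefficient on the triple interaction is the triple-differences estimate of interest.
model = smf.ols("enrolled_4to6 ~ sharia_state * post * muslim", data=df).fit()
print(model.params["sharia_state:post:muslim"])
```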
Finally, I use information on the exact year of birth of children (as always, adjusted for the September cut-off) to investigate whether changes in school enrolment occurred for children due to enter school in the school year 2000/01. As before, I estimate the event-study framework outlined in equation 6. The baseline sample in this case consists of children born in the school year 1989/90, i.e. children due to start school between 1993/94 and 1995/96, depending on whether they started school aged 4, 5 or 6. Panel a of figure 6 reports the estimates for states that introduced the Sharia. For this sample, conditional differences between Muslims and Christians for children due to enter school before the introduction of the Sharia are similar to the base year. The estimates for γ_θ fluctuate around 0 and are not statistically significant. By contrast, for children due to enter school after the school year 2000/01, the point estimates increase in size and become statistically significant. Panel b shows that for the remainder of the country, the conditional differences between Muslims and Christians remain similar to the baseline year throughout the time period under consideration.
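Equation 6 itself is not shown in this excerpt; a hedged sketch of the usual form of such an event-study specification, with the 1989/90 school-year cohort as the omitted baseline and γ_θ the coefficients plotted in figure 6, is:

\[
y_i = \alpha + \sum_{\theta \neq 1989/90} \gamma_{\theta}\, \bigl(\mathrm{Muslim}_i \times \mathbf{1}[\mathrm{cohort}_i = \theta]\bigr) + X_i'\delta + \lambda_{\mathrm{cohort}(i)} + \varepsilon_i ,
\]

estimated separately for Sharia states (panel a) and the rest of the country (panel b), so that each \(\gamma_\theta\) measures the conditional Muslim-Christian gap for cohort \(\theta\) relative to the baseline cohort.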
Columns 3 and 4 of table 3 show that the impact of the Sharia on primary school enrolment was larger for girls than for boys. The parameter estimate for boys is around 12 percentage points (column 3); the corresponding figure for girls is around 22 percentage points (column 4). A possible explanation for this heterogeneity is connected with the pre-treatment means reported towards the top of table 3. For children due to enter school before the introduction of the Sharia, the proportion of boys entering school aged 4 to 6 was slightly higher than for girls (0.46 for the former and 0.39 for the latter). The Sharia explicitly states that young boys and girls should be treated equally. Parents following these rules should enrol boys and girls at the same rates. Combined with the pre-existing disadvantage for girls, this change in behaviour would lead to a stronger effect for girls than for boys.

Instead of religious skepticism and a related increase in progressivism...: UFO sightings promote a more conservative worldview

Kitamura, Shuhei. 2022. “UFOs: The Political Economy of Unidentified Threats.” OSF Preprints. July 29. doi:10.31219/osf.io/tme8f

Abstract: In this paper, I study the effect of Unidentified Flying Objects (UFOs) on political outcomes in the United States. Exploiting random variation in the visibility of UFOs in the sky, I find that UFO sightings before general elections between 2000 and 2016 increased the vote share of the Republican presidential candidates. I also find that UFO sightings led voters to believe that the government should increase federal spending on military defense and on technology and science, although the latter effect was marginal. The results indicate that voters regard UFOs as unidentified threats to national security that warrant further defense enhancements and scientific research.


Political candidates: More differentiation between positive than negative options; after exceeding a certain, relatively small level of negativity, people do not see any further increase in negativity

Is good more alike than bad? Positive-negative asymmetry in the differentiation between options. A study on the evaluation of fictitious political profiles. Magdalena Jablonska, Andrzej Falkowski and Robert Mackiewicz. Front. Psychol., July 28 2022. https://doi.org/10.3389/fpsyg.2022.923027


Abstract: Our research focuses on the perception of difference in the evaluations of positive and negative options. The literature provides evidence for two opposite effects: on the one hand, negative objects are said to be more differentiated (e.g., density hypothesis), on the other, people are shown to see greater differences between positive options (e.g., liking-breeds-differentiation principle). In our study, we investigated the perception of difference between fictitious political candidates, hypothesizing greater differences among the evaluations of favorable candidates. Additionally, we analyzed how positive and negative information affect candidate evaluation, predicting further asymmetries. In three experiments, participants evaluated various candidate profiles presented in a numeric and narrative manner. The evaluation tasks were designed as individual or joint assessments. In all three studies, we found more differentiation between positive than negative options. Our research suggests that after exceeding a certain, relatively small level of negativity, people do not see any further increase in negativity. The increase in positivity, on the other hand, is more gradual, with greater differentiation among positive options. Our findings are discussed in light of cognitive-experiential self-theory and density hypothesis.


General discussion

In our research we analyzed the perceived differences among sets of favorable and unfavorable options. More specifically, the aim of our studies was to investigate how people see the difference between good and bad political candidates. Certainly, they would vote for the good ones and not for the bad, but how do they compare a good candidate to a better one, and a bad one to a worse? We looked for the answers to these questions in three experiments. In Study 1, participants compared the similarity of fictitious candidates to the best possible candidate or the worst possible one. We did not provide descriptions of the best and the worst possible candidates and instead asked the participants to imagine such political figures. On the basis of some preliminary research, we chose some positive and some negative features and used them to prepare descriptions of five different candidates: the very bad, the bad, the neutral, the good and the very good one. We presented their descriptions in the form of scales with negative and positive anchors. We used the same five descriptions and the same form of presentation in Study 2. This time, however, the participants not only assessed the candidates’ similarity to the best and to the worst possible politicians but also estimated the probability of voting for and the likeability of the candidates, and were asked to compare two profiles and decide how similar they were. We slightly changed the design in Study 3, in which we used narrative descriptions of the candidates. We conducted our research in the political setting because candidate evaluation and selection is a process that many people at least occasionally undertake and which has important social, political and economic implications.

Our focus was on the differences between the evaluations of positive and negative candidates. The literature on differentiation provides evidence for two contradictory effects. On the one hand, negative information has been found to have more complex conceptual representations and to lead to a wider response repertoire (Rozin and Royzman, 2001). Linguistic research and studies using spatial arrangement methods have also shown negative categories to be more diverse, with more words used to describe negative events and states (Rozin et al., 2010). Likewise, the proponents of the density hypothesis (Unkelbach et al., 2008a) found that positive entities are more related (and thus denser) compared to their negative counterparts. On the other hand, the literature provides convincing evidence for an opposite effect, that is, a better differentiation between positive entities. For instance, Denrell (2005) found that people have more knowledge and more differentiated representations of liked than disliked social stimuli. In a similar vein, Smallman and others (Smallman et al., 2014; Smallman and Becker, 2017) have shown that people make finer evaluative distinctions when rating appealing than unappealing options.

Following this line of research, we assume better differentiation between positive, rather than negative, options to be the norm, especially when making evaluations of social objects or deciding which option to select. Thus, in our research we predicted that participants would be more likely to see the difference between favorable than between unfavorable candidates. In our setting, that should result in different evaluations of the good and the best candidates, while the evaluations of the bad and the worst one should not differ (Hypothesis 1). We also predicted that additional information about a candidate would be more likely to change the candidate’s image if the valence of the extra information is opposite to the current image. That is, if a candidate is already favorable, new positive information might help him or her only to some degree, while negative information would significantly harm his or her image. Conversely, when a candidate is presented in a negative manner, a new piece of negative information would not hurt him or her much, whereas an additional piece of positive information might be quite beneficial for the candidate’s image (Hypothesis 2). Finally, drawing on the two earlier hypotheses (the better differentiation of positive options and the asymmetrical effect of additional positive and negative features), we formulated a hypothesis that joined these two predictions, assuming that additional positive information would improve the evaluation of an already good candidate, whereas additional negative information would not harm a bad candidate profile (Hypothesis 3).

The results supported our hypotheses. In Study 1 and Study 2 we found no differences in the evaluations of negative candidates: a candidate with an overall score of –24 and a candidate with an overall score of –48 (the numbers refer to the balance of the evaluations on six different dimensions) were perceived as equally bad. Still, the participants perceived candidates with overall scores of +24 and +48 as significantly different. The effect was replicated in Study 3, in which candidates were described in a narrative form. This result supports our Hypothesis 1. Importantly, whereas the results of Studies 1 and 3 provided only an indirect test of the hypothesized effect, Study 2 gave a direct test, as the participants saw both profiles together and were asked to assess their perceived similarity.
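For concreteness (the exact scoring scheme is not reproduced in this excerpt), an overall score of –24 across six dimensions corresponds to an average of –4 per dimension (6 × (–4) = –24) and –48 to an average of –8 (6 × (–8) = –48); the asymmetry is that this doubling of negativity went unnoticed, while the corresponding doubling on the positive side (+24 vs. +48) did not.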

Our second research interest was to test how additional positive and negative pieces of information change candidate perception depending on candidate valence. As expected, positive features increased candidate evaluation, whereas negative ones decreased it, but these effects were not symmetrical, undermining the normative predictions of, for instance, the contrast model of similarity. This confirms our Hypothesis 2. Furthermore, we obtained mixed support for Hypothesis 3. The results of Study 1 and Study 2 showed that whereas adding negative features to a candidate’s profile did not change his or her evaluation when this profile was already negative, additional positive features strengthened the image of an unfavorable candidate. However, we did not observe any effect of additional positive features in the evaluations of candidates whose images were presented in a narrative form in Study 3. One possible explanation is that the two additional positive characteristics carried less information (i.e., were less diagnostic) than their negative counterparts.

Overall, our findings suggest that people do not see much of a difference between political candidates with many negative features, regardless of the extent to which they are presented as bad. It seems that, at least in the political domain, once an overall evaluation falls below some standard, people do not differentiate between bad options. The effect may be attributed to different motivations in the processing of positive and negative options. If all available alternatives are unappealing, it does not really matter which one of them is worse. After all, they all seem equally bad and, indeed, why would anyone support a bad candidate? This was the case for assessing similarity to an ideal or a bad politician (Studies 1, 2, and 3) as well as for liking and voting intention (Studies 2 and 3). Thus, regardless of their initial expectations, people would not vote for a politician if his or her features fall below a certain standard. One possible explanation for this effect is that they would not be able to justify their decision (Shafir et al., 1993).

Importantly, even the standards of “good” and “bad” are not symmetrical, so that it is relatively easy to be deemed inadequate for the post but rather difficult to be perceived as a good candidate. The effect was especially visible in Studies 1 and 2, where there was a dramatic drop in the evaluation of unfavorable candidates, with extremely low, bottom values for candidates’ similarity to an ideal politician and very high similarity to a bad politician. This extremity effect can partially account for the lack of differentiation between negative options. Still, the absence of differences between unfavorable candidate profiles, as predicted in Hypothesis 1, was also found in Study 3, where candidates were presented in a narrative manner and where evaluations were less extreme. Overall, the results of the three studies follow our Hypothesis 1, in which we predicted that the evaluations of negative candidates should not differ significantly. However, if the judgment pertains to attractive options, then the decision about which one of them is better gains in importance. As our studies show, there were significant differences between favorable candidates. Importantly, no ceiling effect was observed: the bottom effects observed for negative candidate profiles were not paralleled by a symmetrical ceiling effect for positive candidates, suggesting that the participants differentiated their answers when they thought such differentiation was appropriate, providing evidence for better differentiation between positive options.

The results may be explained with regard to the two independent information-processing systems proposed by Epstein in his cognitive-experiential self-theory (Epstein, 1990; Kirkpatrick and Epstein, 1992). The evolutionarily older experiential system operates in an automatic and holistic manner, whereas the rational system is “a deliberative, verbally mediated, primarily conscious analytical system that functions by a person’s understanding of conventionally established rules of logic and evidence” (Denes-Raj and Epstein, 1994, p. 819). It seems that whereas an intense dislike toward negative options is an outcome of the experiential system, a better and more discriminative analysis of positive options is governed by the rational system. The finding can also be interpreted in terms of the distinction between necessary and sufficient conditions, where a necessary condition is one that must be present for an event to occur but does not guarantee it, while a sufficient condition is one that by itself produces the event. Thus, it seems that the list of necessary conditions for being deemed inadequate for the post is much shorter than the one for being an ideal politician. Consequently, the standards for what it means to be good and bad are not symmetrical.
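In standard logical notation, a condition C is necessary for an event E when E ⇒ C (the event cannot occur without the condition), and sufficient when C ⇒ E (the condition alone guarantees the event).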

Our findings have important implications for the density hypothesis (Unkelbach et al., 2008a; Alves et al., 2016), according to which the distribution range of positivity is much narrower than the range of negativity. It seems reasonable to assume that the positive spectrum is narrower than the negative one and, as shown in many empirical studies on the density hypothesis, that the inner structure of positive information is denser than the structure of negative entities. Still, in our opinion this does not imply a better differentiation between negative options. As our studies suggest, the structure of positive categories may be denser, but this density is accompanied by (or may even be a reason for) a better discrimination between favorable options. After all, having rejected all negative alternatives, people put much effort into deciding which of the remaining options is the best or at least acceptable, although the extent of this effort is moderated by decision importance and individual differences (e.g., the distinction between maximisers and satisficers; Schwartz et al., 2002). Thus, if the structure of positive entities is denser, it is likely that people use finer combs to disentangle it.

We are aware of some important drawbacks of our study. First, we did not investigate how people evaluate real candidates and, consequently, we did not take into account the importance of political views or the attachments that some voters feel to different political parties. This research direction should be taken up by other scholars; for instance, it would be interesting to analyze how well people differentiate between candidates from their own party compared to members of the opposing party. Furthermore, the way we constructed our candidate profiles may limit the ecological validity of the study. Although the use of such profiles was justified by our intention to have maximal control over the analyzed stimuli, further studies should investigate more complex stimuli. It would also be interesting to analyze how well people differentiate between options depending on the modality in which they are presented. For instance, in our studies we found that numerical candidate profiles were evaluated more extremely than candidates presented descriptively. Thus, presentation modality as well as the range of the positive and negative spectrum are further areas for research. Overall, our research provides valuable insight into positive-negative asymmetry with regard to the less-explored question of differentiation between positive and negative options in the political setting. Contrary to findings of better differentiation between negative options, we find evidence for the opposite effect, showing that the evaluations of favorable objects are actually more nuanced.