Saturday, July 30, 2022

The Economic Limits of Bitcoin and Anonymous, Decentralized Trust on the Blockchain

Budish, Eric B., The Economic Limits of Bitcoin and Anonymous, Decentralized Trust on the Blockchain (June 27, 2022). University of Chicago, Becker Friedman Institute for Economics Working Paper No. 83, 2022. SSRN: http://dx.doi.org/10.2139/ssrn.4148014

Abstract: Satoshi Nakamoto invented a new form of trust. This paper presents a three-equation argument that Nakamoto's new form of trust, while undeniably ingenious, is extremely expensive: the recurring, 'flow' payments to the anonymous, decentralized compute power that maintains the trust must be large relative to the one-off, 'stock' benefits of attacking the trust. This result also implies that the cost of securing the trust grows linearly with the potential value of attack — e.g., securing against a $1 billion attack is 1000 times more expensive than securing against a $1 million attack. A way out of this flow-stock argument is if both (i) the compute power used to maintain the trust is non-repurposable, and (ii) a successful attack would cause the economic value of the trust to collapse. However, vulnerability to economic collapse is itself a serious problem, and the model points to specific collapse scenarios. The analysis thus suggests a 'pick your poison' economic critique of Bitcoin and its novel form of trust: it is either extremely expensive relative to its economic usefulness or vulnerable to sabotage and collapse.


A Discussion of Responses to this Paper’s Argument

This paper first circulated in shorter form in June 2018. I received a lot of comments and counterarguments in response to the paper's main line of argument. I have tried to address the central counterargument throughout the main text of this updated draft. This is the point, made by Huberman, Leshno and Moallemi (2021) and many practitioners, that we should compare Bitcoin's costs to the costs of market power in traditional finance, which are also high.[24] I hope the present draft of the text makes clearer the conditional nature of the paper's argument: if Bitcoin becomes more economically useful, then it will have to get even more expensive, linearly, or it will be vulnerable to attack. I hope as well that the more explicit computational simulations, for varying levels of Vattack all the way up to $100 billion, make clear that the way Bitcoin's security costs scale is importantly different from how costs scale for traditional finance protected by the rule of law. In this appendix I discuss several of the other most common comments and counterarguments I have received about this paper since it was first circulated.

[24] See Philippon (2015) and Greenwood and Scharfstein (2013) on high costs of traditional finance, and see Cochrane (2013) for a counterpoint.

A.1 Community

As noted above in Section 5, a majority attack on Bitcoin, or any other major cryptocurrency, would be widely noticed. A line of argument I heard frequently in response to the June 2018 draft is that the Bitcoin community would organize a response to the attack. For example, the community could organize a "hard fork" off of the state of the blockchain just prior to the attack, which would include all transactions perceived to be valid, void any perceived-as-invalid transactions, possibly confiscate or void the attacker's other Bitcoin holdings if these are traceable, and possibly change the hash function or find some other way to ignore or circumvent the attacker's majority of compute power.[25] The community response argument seems valid as an argument that attacks might be more expensive or difficult to execute than is modeled here, but it raises two important issues.

First, and most obviously, the argument contradicts the notion of anonymous, decentralized trust. It relies on a specific set of trusted individuals in the Bitcoin community. Second, consider the community response argument from the perspective of a traditional financial institution. In the event of a large-scale attack that involves billions of dollars, the traditional financial institution would, in this telling, be left in the hands of the Bitcoin community. At present, reliance on a tight-knit community of those most invested in Bitcoin (whether financially, intellectually, etc.) may sound reassuring — those with the most to lose would rally together to save it. But now imagine the hypothetical future in which Bitcoin becomes a more integral part of the global financial system, and imagine there is a fight over whether an entity like a Goldman Sachs is entitled to billions of dollars worth of Bitcoin that it believes was stolen — but the longest chain says otherwise. Will the "vampire squid" be made whole by the "Bitcoin community"? Quite possibly, but one can hopefully see the potential weakness of relying on an amorphous community as a source of trust for global finance.

[25] The phrase "hard fork" means that in addition to coordinating on a particular fork of a blockchain if there are multiple — in this case, the attacker's chain, which is the longest, and the chain the community is urging be coordinated on in response — the code used by miners is updated as well. This could include hard-coded state information such as the new chain or information about voided Bitcoins held by the attacker, code updates such as a new hash function, etc.

A.2 Rule of Law

A related line of argument is that, in the event of a large-scale attack specifically on a financial institution such as a bank or exchange, rule of law would step in. For example, the financial institutions depicted as the victims of a double-spend in Figure 2, once they realize they no longer have the Bitcoins paid to them because of the attack, would obtain help from the rule of law in tracking down the attacker and recovering the stolen funds. This response, too, seems internally valid while contradicting the idea of anonymous, decentralized trust. It also seems particularly guilty of wanting to "have your cake and eat it too." In this view, cryptocurrencies are mostly based on anonymous, decentralized trust — hence evading most forms of scrutiny by regulators and law enforcement — but, if there is a large attack, then the rule of law will come to the rescue.

A.3 Counterattacks

Moroz et al. (2020) extend the analysis in Budish (2018) to allow the victim of a double-spending attack to attack back. They consider a game between an Attacker and a Defender. If the Attacker double spends against the Defender for v dollars, the Defender can retaliate by organizing a majority of compute power (51% or more) of their own and attacking back, so that the original honest chain becomes the longest chain again. This allows the Defender to recover their property. For example, suppose the escrow period is 6 blocks, denote the initial double-spend transaction as taking place in block 1, and suppose the attacker chain replaces the honest chain as soon as the escrow period elapses, as in Figure 2. Notationally, suppose the honest chain consists of blocks {1, 2, ..., 7} at the time it is replaced, and the attacker chain that replaces it is {1', 2', ..., 7', 8'}. If the Defender can quickly organize a majority of their own, then they can build off of the {1, 2, ..., 7} chain and eventually surpass the attacker chain, recovering their property. For example, if the honest chain reaches block 10 before the attacker chain reaches block 10', then {1, 2, ..., 10} is the new longest chain and the Defender has their property back from the correct transaction in block 1. This argument is game-theoretically valid, and indeed there are theoretical subtleties to the argument that the reader can appreciate for themselves in the paper. That said, it relies on every large-scale participant in the Bitcoin system being able and willing to conduct a 51% attack on a moment's notice. This is kind of like requiring every major financial institution to have not just security guards, but access to a standing army.
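To make the block race concrete, here is a toy Monte Carlo sketch of the counterattack as a gambler's-ruin race (an illustration under assumed parameters, not the Moroz et al. (2020) model): the Defender starts one block behind, as with {1, ..., 7} versus {1', ..., 8'}, and succeeds if their chain ever becomes strictly longer.

```python
import random

def race(defender_share, attacker_lead=1, trials=10_000, max_blocks=10_000, seed=0):
    """Toy gambler's-ruin race: starting `attacker_lead` blocks behind, estimate how
    often a Defender controlling `defender_share` of total hash power ends up with
    a strictly longer chain than the Attacker."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        gap = attacker_lead  # attacker-chain length minus defender-chain length
        for _ in range(max_blocks):
            if rng.random() < defender_share:
                gap -= 1  # next block is found on the defender's chain
            else:
                gap += 1  # next block is found on the attacker's chain
            if gap < 0:   # defender's chain is now strictly the longest
                wins += 1
                break
    return wins / trials

# With a majority of hash power (60%) the Defender essentially always overtakes a
# one-block lead; with a minority (40%), more often than not the counterattack fails.
print(race(0.60), race(0.40))
```

The sketch only illustrates why the counterattack requires the Defender to command a majority of compute power on short notice, which is the point the paragraph above makes.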

A.4 Modification to Nakamoto I: Increase Throughput

Bitcoin processes about 2000 transactions per block, which is about 288,000 per day or 105 million per year. In contrast, Visa processes about 165 billion transactions per year (Visa, 2021). The reader will notice that the logic in equations (1)-(3) does not depend directly on the number of transactions in a block. If the number of transactions in a Bitcoin block were to increase by 1000x (to roughly Visa's level), then the required pblock to keep Bitcoin secure against a given scale of attack Vattack would not change, per equation (3). Thus, the required cost per transaction to keep Bitcoin secure against a given scale of attack would decline by a factor of 1000. In this scenario of a 1000x throughput increase, Bitcoin's security costs per transaction are still large, but less astonishingly so. In the base case, to secure Bitcoin against a $1 billion attack would require costs per transaction of $31 instead of $31,000. To secure against a $100 billion attack would require costs per transaction of $3,100 instead of $3.1 million. A subtlety is that as the number of transactions per block grows, so too might the scope for attack. That is, Vattack might grow as well. Still, this seems a promising response to the logic of this paper. A particularly interesting variation on this idea is the paradigm called "Level 2." In this paradigm, the Bitcoin blockchain ("Level 1") would be used for relatively large transactions, but smaller transactions would be conducted off-chain, possibly supported with traditional forms of trust, with just occasional netting on the main Bitcoin blockchain. In this paradigm, as well, the large transactions on chain could also have a long escrow period, making attacks more expensive.[26]

[26] I thank Neha Narula for several helpful conversations about this approach.
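A back-of-the-envelope sketch of the throughput arithmetic (the per-block payment below is simply backed out from the quoted $31,000-per-transaction figure for illustration; it is not the paper's calibration):

```python
# Sketch: per equation (3), the security condition pins down a required per-block
# payment that scales with the value of an attack (Vattack); the cost per transaction
# is that payment spread over the transactions in a block.

def cost_per_transaction(per_block_payment, txs_per_block):
    return per_block_payment / txs_per_block

# Back out the implied per-block payment from the quoted base case:
# ~$31,000 per transaction at ~2,000 transactions per block (securing a $1B attack).
per_block_payment_1b = 31_000 * 2_000

print(cost_per_transaction(per_block_payment_1b, 2_000))      # ~$31,000 (base case)
print(cost_per_transaction(per_block_payment_1b, 2_000_000))  # ~$31 (1000x throughput)

# Linear scaling in Vattack: securing a $100B attack needs ~100x the per-block payment.
per_block_payment_100b = per_block_payment_1b * 100
print(cost_per_transaction(per_block_payment_100b, 2_000_000))  # ~$3,100
```

The numbers reproduce the figures quoted in the paragraph above and make the two scalings visible: per-transaction cost falls one-for-one with throughput, but rises one-for-one with Vattack.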

A.5 Modification to Nakamoto II: Tweak Longest-Chain Convention

The discussion above in A.1 expressed skepticism about the "community" response to the logic of this paper. However, what about modifying the longest-chain convention to try to encode what the community would want to do in the event of an attack? The modification to the longest-chain convention could take advantage of two specific features of double-spending attacks:

1. The Attacker has to sign transactions both to the victim of the double-spending attack — call this the Bank — and to another account they control — call this the Cousin account. The fact that there are multiple signed transactions for the same funds is an initial proof that something suspicious has happened.

2. The Attacker has to make the signed transaction to the Bank public significantly before — in "real-world clock time" — the signed transaction to their Cousin account.

The difficulty with just using facts #1 and #2 to void the transaction to the Cousin is alluded to with the phrase "real-world clock time." Part of what the Nakamoto (2008) blockchain innovation accomplishes is a sequencing of data that does not rely on an external, trusted, time-stamping device. Relatedly, the difficulty with just using fact #1 and having the policy "if there are multiple correctly signed transactions sending the same funds, destroy the funds" is that the victim of the double-spending attack, the Bank, will by now have sent real-world financial assets to the Attacker — and this transaction, in the real world (off the blockchain), cannot be voided no matter how we modify the blockchain protocol. A different way to put the concern is that such a policy would allow any party that sends funds on the blockchain in exchange for goods or financial assets off the blockchain to then void the counterparty's received funds after the fact. This seems a recipe for sabotage of the traditional financial sector. The open question, then, is whether the protocol can be modified so that in the event of fact #1, multiple signed transactions, there is some way to appeal to fact #2, grounded in the sequencing of events in real-world clock time, not adjudicated by the longest-chain convention's determination of the sequence of events.

One pursuit along these lines is Leshno, Pass and Shi (in preparation).
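As a minimal illustration of fact #1 above (the transaction fields and names below are hypothetical, not Bitcoin's actual data model), a protocol could mechanically flag correctly signed transactions that spend the same funds; what it cannot do, per the discussion above, is decide from the chain alone which of the conflicting transactions came first in real-world clock time.

```python
from collections import defaultdict

def find_conflicts(signed_transactions):
    """Group signed transactions by the funds they spend and return every set of
    two or more transactions spending the same funds. Each transaction is a dict
    with hypothetical fields 'spends' (identifier of the funds) and 'to' (recipient)."""
    by_spent = defaultdict(list)
    for tx in signed_transactions:
        by_spent[tx["spends"]].append(tx)
    return {funds: txs for funds, txs in by_spent.items() if len(txs) > 1}

# Example: the Attacker signs one transaction to the Bank and another, spending the
# same funds, to their own Cousin account.
txs = [
    {"spends": "coin-123", "to": "Bank"},
    {"spends": "coin-123", "to": "Cousin"},
    {"spends": "coin-456", "to": "Grocer"},
]
print(find_conflicts(txs))  # flags the two transactions spending "coin-123"
```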

A.6 A Different Consensus Protocol: Proof-of-Stake

Proof-of-stake is widely discussed as an alternative consensus protocol to Nakamoto's (2008) proof-of-work. In this paradigm, rather than earning the probabilistic right to validate blocks from performing computational work, one earns the probabilistic right to validate blocks from locking up stake in the cryptocurrency. The usual motivation for proof-of-stake relative to proof-of-work — the deadweight loss and environmental harm associated with proof-of-work mining, which as noted currently utilizes about 0.3-0.8% of global electricity consumption — is in fact completely orthogonal to the concerns in this paper. In its simplest form, proof-of-stake is vulnerable to exactly the same critique (1)-(3) as proof-of-work. Just conceptualize c as the rental cost of stake (i.e., the opportunity cost of locking up one unit of the cryptocurrency), as opposed to the rental cost of capital plus the variable electricity cost of running the capital. The amount of stake that will be locked up for validation will depend on the compensation to stakers, as in equation (1). This amount of stake in turn determines the level of security against majority attack, as in equation (2). Thus, equation (3) obtains, with the per-block compensation to stakers needing to be large relative to the value of a majority attack. See Gans and Gandal (2019).

However, while in its simplest form proof-of-stake is vulnerable to the same economic limits as proof-of-work, the use of stakes rather than computational work may open new possibilities for establishing trust and thwarting attacks. The advantage is that stakes, unlike computational work, have memory.[27] It is possible, for instance, to grant more trust to stakes that have been locked up for a long period of time, and that have never behaved suspiciously (see Appendix A.5 just above), than to stakes that have only recently been locked up. Stakes can also be algorithmically confiscated by the protocol, whereas ASIC machines exist in the "real world", outside of the grasp of the protocol. Thus, it seems possible that proof-of-stake could make majority attack significantly more expensive (relative to the level of economic activity) than it is under proof-of-work.

That said, proof-of-stake has other potential weaknesses relative to proof-of-work, such as the "nothing-at-stake" and "grinding" problems, and its game-theoretic foundations are less well understood. See Halaburda et al. (forthcoming), Section 3.6, for a detailed discussion, and Saleh (2021) for an early game-theoretic analysis. Notably, Ethereum, the second-largest cryptocurrency after Bitcoin, has been considering a move to proof-of-stake for some time. See Buterin (2014, 2016, 2020); Buterin and Griffith (2019). Much of the other research on proof-of-stake also seems to be happening outside of the traditional academic process. It will be interesting to see if a proof-of-stake protocol proves to be a convincing response to the logic of this paper.
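For concreteness, one way to write the three-equation flow-stock logic for proof-of-stake, as described in this passage, is sketched below. The notation (S* for the equilibrium amount of stake locked up, c for the per-block rental cost of a unit of stake, p_block for the per-block compensation to stakers, A for a factor reflecting the duration and difficulty of an attack, and Vattack for the value of a majority attack) is assumed for illustration and is not quoted from the paper.

```latex
% Hedged sketch of the flow-stock logic applied to proof-of-stake; notation is illustrative.
\begin{align}
  S^{*} c &= p_{\text{block}}
    && \text{(1) free entry: stake is locked up until its per-block rental cost equals the reward,} \\
  A\, S^{*} c &\geq V_{\text{attack}}
    && \text{(2) security: renting a majority of stake for an attack of } A \text{ blocks must cost more than the attack is worth,} \\
  p_{\text{block}} &\geq V_{\text{attack}} / A
    && \text{(3) substituting (1) into (2): per-block compensation must be large relative to } V_{\text{attack}}.
\end{align}
```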

Countries with a larger fraction of people with very strict civic norms have proportionally more societal-level rule violations; if perceived norms are so strict that they do not differentiate between small and large violations, then, conditional on a violation occurring, a large violation is individually optimal

Social norms and dishonesty across societies. Diego Aycinena et al. Proceedings of the National Academy of Sciences, July 28, 2022. 119 (31) e2120138119. https://doi.org/10.1073/pnas.2120138119


Significance: Much of the research in the experimental and behavioral sciences finds that stronger prosocial norms lead to higher levels of prosocial behavior. Here, we show that very strict prosocial norms are negatively correlated with prosocial behavior. Using laboratory experiments on honesty, we demonstrate that individuals who hold very strict norms of honesty are more likely to lie to the maximal extent. Further, countries with a larger fraction of people with very strict civic norms have proportionally more societal-level rule violations. We show that our findings are consistent with a simple behavioral rationale. If perceived norms are so strict that they do not differentiate between small and large violations, then, conditional on a violation occurring, a large violation is individually optimal.


Abstract: Social norms have long been recognized as an important factor in curtailing antisocial behavior, and stricter prosocial norms are commonly associated with increased prosocial behavior. In this study, we provide evidence that very strict prosocial norms can have a perverse negative relationship with prosocial behavior. In laboratory experiments conducted in 10 countries across 5 continents, we measured the level of honest behavior and elicited injunctive norms of honesty. We find that individuals who hold very strict norms (i.e., those who perceive a small lie to be as socially unacceptable as a large lie) are more likely to lie to the maximal extent possible. This finding is consistent with a simple behavioral rationale. If the perceived norm does not differentiate between the severity of a lie, lying to the full extent is optimal for a norm violator since it maximizes the financial gain, while the perceived costs of the norm violation are unchanged. We show that the relation between very strict prosocial norms and high levels of rule violations generalizes to civic norms related to common moral dilemmas, such as tax evasion, cheating on government benefits, and fare dodging on public transportation. Those with very strict attitudes toward civic norms are more likely to lie to the maximal extent possible. A similar relation holds across countries. Countries with a larger fraction of people with very strict attitudes toward civic norms have a higher society-level prevalence of rule violations.


Friday, July 29, 2022

Toxoplasma-infected women scored higher in tribalism and lower in cultural liberalism, compared with the Toxoplasma-free control group; infected men scored higher in economic equity

Le Petit Machiavellian Prince: Effects of Latent Toxoplasmosis on Political Beliefs and Values. Robin Kopecky et al. Evolutionary Psychology, July 29, 2022. https://doi.org/10.1177/14747049221112657

Abstract: Humans infected by Toxoplasma gondii express no specific symptoms but manifest a higher incidence of many diseases and disorders, and differences in personality and behavior. The aim of this study was to compare the political beliefs and values of Toxoplasma-infected and Toxoplasma-free participants. We measured the beliefs and values of 2315 responders via an online survey (477 Toxoplasma-infected) using the Political Beliefs and Values Inventory (PI34). This cross-sectional study showed that Toxoplasma-infected and Toxoplasma-free participants differed in three of the four factors of the PI34, with infected participants scoring higher in Tribalism and lower in Cultural liberalism and Anti-Authoritarianism. We found sex differences in political beliefs associated with Toxoplasma infection. Infected women scored higher in tribalism and lower in cultural liberalism, compared with the Toxoplasma-free control group, while infected men scored higher in economic equity. These results fit with sexual differences in behavior and attitude observed after toxoplasmosis infection. Controlling for the effect of worse physical health and mental health had little impact, suggesting that impaired health did not cause these changes. Rather than adaptation to the prevalence of parasites, as suggested by parasite-stress theory, the differences might be side effects of a long-term mild inflammatory reaction. However, to get a clear picture of the mild inflammation effects, more research focused on different infectious diseases is needed.

Keywords: Toxoplasma gondii, manipulation hypothesis, political beliefs, stress, infectious diseases, parasite threat, pathogen avoidance

The present study showed that Toxoplasma-infected and Toxoplasma-free participants of our cross-sectional study differed in three of the four factors measured with the Political inventory, with infected participants scoring higher in Tribalism and lower in Cultural liberalism and Anti-authoritarianism. These results are in line with previous broad research showing that individuals in parasite-affected areas are more likely to be conservative and authoritarian (Murray et al., 2013).

Furthermore, we observed sex differences in the studied factors associated with the Toxoplasma infection. Indeed, Toxoplasma-infected men scored higher in Economic Equity, showing a preference for a more equal and less competitive society, while women infected with toxoplasmosis scored higher in tribalism and lower in cultural liberalism. These associations were not reduced when the effects of worse physical health and mental health were controlled for, suggesting that the impaired health of infected subjects is not the cause of changes in political beliefs. The same conclusion was also supported by the fact that the changes go in the same direction in men and women, because stress-coping-associated behavioral and personality changes mostly go in different directions in men and women.

It was suggested by Lindová et al. (2006, 2012) that these associations might be the result of a mild chronic stress caused by the toxoplasmosis infection rather than of the toxoplasmosis itself. The presence of chronic stress would not only explain the behavioral and political differences from the non-infected control group, but also the sex differences in these behaviors and ideologies, as the two sexes respond differently to chronic stress, with differences in the immune system response and in the coping strategies used. Many of the behavioral changes observed in toxoplasmosis-infected people correlate with the function of dopamine in the brain, and they may have broader implications, including for political ideologies. In line with our results, previous studies (Flegr et al., 2003; Skallová et al., 2005) showed that infected subjects scored lower in novelty seeking, a factor that contributes to conservative political opinions (Carney et al., 2008). Indeed, in our sample the infection was associated with higher tribalism and lower cultural liberalism, specifically in women. While we expected differences in the political ideologies of infected men and women, we did not expect a higher score in economic equity in infected men. Typically, men affected with toxoplasmosis showed higher risk propensity and higher entrepreneurial activity (Johnson et al., 2018), more compatible with a competitive type of economy. The association of toxoplasmosis and the preference for an egalitarian economy in men needs to be better explored in future works.

Several studies have found that societies that are more affected by infectious pathogens also exhibit higher levels of conservative political attitudes such as xenophobia and traditionalism (Bennett & Nikolaev, 2020; Murray et al., 2013; Nikolaev & Salahodjaev, 2017; Thornhill et al., 2009). Similar results have also been found in our study performed on the individual level. The hypothesis that has been proposed is that the attitudes exhibited are connected to pathogen avoidance behaviors aimed at minimizing contact with outsiders (intergroup effect) (Aarøe et al., 2017) who may be carrying new pathogens as well as the maintenance of social traditions that may serve to help protect against pathogens (intragroup effect) (Fincher & Thornhill, 2012), with evidence from a recent cross-national study favoring the intragroup effect (Tybur et al., 2016). Significantly, however, both the effects – and the intragroup effect in particular – seem potentially open to the interpretation that they are generalized responses to stress, rather than to a pathogen (Brown et al., 2016; Currie & Mace, 2012; Hruschka & Henrich, 2013; Ma, 2020).

While the present study examines differences found in the association between a parasitic infection and political values in the context of increased stress at the individual level, the results can be seen as an alternative to the parasite-stress theory (Fincher et al., 2008; Thornhill & Fincher, 2014) for the following two main reasons. First, parasite-stress theory aims to be the ultimate evolutionary explanation of changes in traits that differ with varied geographical parasite stress levels, yet this study focuses directly on the difference between actual infected and non-infected subjects in one small region, where the intensity of the parasite stress is mostly constant (and low). Second, it has been shown that primarily non-zoonotic diseases have a relation to human personality and societal values (Thornhill et al., 2010). However, toxoplasmosis is primarily a zoonotic disease with very specific and limited spread between people – the only interpersonal route of infection that has been suggested is from male to female or between two male partners through sexual transmission (Flegr, Klapilová, et al., 2014; Flegr, Prandota, et al., 2014; Hlaváčová et al., 2021; Kaňková et al., 2020).

This being said, there might be a possible connection between the present study and the parasite-stress theory after all. An extensive body of research confirms an association between infectious (and in most cases parasitic) diseases and changes in the personality profiles of animals from molluscs (Seaman & Briffa, 2015) and minnows (Kekäläinen et al., 2013) to migratory birds (Marinov et al., 2017) and mammals (Boyer et al., 2010), including humans (Webster, 2001). There are also well-studied associations between personality traits and political views, e.g. (Furnham & Fenton-O'Creevy, 2018; Harell et al., 2021; Verhulst et al., 2010; Wang, 2016). While the direction of causality needs to be studied further (Bakker et al., 2021) and while the human-centred field of parasite-induced changes in personality traits is regrettably understudied and quite complex (Friedman, 2008), we might expect at least some effect of infectious diseases on political attitudes caused by shifts in personality traits. A possibility thus exists that at least part of the reported difference in political attitudes in countries with different parasitic disease burdens is not caused by parasite avoidance but results from a significant part of the population being infected with one or multiple pathogens. This hypothesis is supported by studies that linked a change in personality traits with a clear connection to political attitudes (e.g., conservatism) with chronic diseases, although not infectious ones (Mendelsohn et al., 1995; Sutin et al., 2013). On the other hand, some results suggest stronger prediction of personality traits by the historical prevalence of diseases rather than by the current situation, suggesting parasite avoidance as a factor with greater importance in personality shifts (Schaller & Murray, 2008).

Since the available body of literature discussing possible causal relationships between infectious diseases and political beliefs and values is very sparse, this direction of research might provide interesting and important insight into the changing political climate in certain countries. Studies focused on a wider range of infectious diseases besides toxoplasmosis and severe debilitating illnesses such as neurocysticercosis or AIDS would be especially valuable.

The present study showed a 19% prevalence of toxoplasmosis in men and 28% in women. The most recent Czech epidemiological study, performed in 2014–2015 (Flegr, 2017), showed a prevalence of 25% in men and 36% in women aged 30–39 years. It is known, however, that the prevalence of toxoplasmosis decreases relatively quickly in most developed countries, including the Czech Republic. For example, a large epidemiological study performed on Czech male soldiers 20 years ago found a prevalence of 35% for the age stratum 30–35 years (Kolbekova et al., 2007). It is, therefore, possible that the observed seroprevalence reflects the actual situation in the Czech general population.

Limitations

The main limitation of the present study was that its participants were self-selected. Their subpopulation probably represents a specific (more altruistic and more curious) segment of the Czech population, rather than a random sample of the Czech internet population. In addition, people with impairments or severe diseases, as well as those from the lowest socioeconomic strata, were unable to participate. It is therefore not clear to what extent the results can be generalized to the general Czech (or world) population.

Another limitation of the study is the moderate number of Toxoplasma-infected men (90). The reason for the imbalanced sex ratio is that women are often tested for toxoplasmosis during pregnancy, and therefore a larger fraction of women than men know their toxoplasmosis status. Due to the lower number of men, the associations of toxoplasmosis with Cultural liberalism and Anti-authoritarianism were not significant in men, despite being stronger in men than in the more numerous (518) women.

Since this study dealt with the effect of pathogen-caused stress, there is a possible interference from the global COVID-19 pandemic. However, only 6% of the respondents participated in the period between April 2020, when the first infection in Czechia was observed, and the end of data collection in April 2021.

In the present study, we calculated aggregate indices of physical and mental health and used these indices in our statistical models. In future studies, it will be valuable to analyse the effects of individual health-related variables to disentangle the complex relationships between toxoplasmosis, mental and physical health, psychological traits and political beliefs. Such research could also answer important questions related to the causal direction of the observed correlations.

The associations found in the present study are based on correlations, and we cannot infer the direction of causality. It cannot be ruled out that the explanation of the effect is in the opposite direction, e.g., that higher tribalism itself, by an unknown mechanism, increases the chance of being infected by Toxoplasma gondii. Like nearly all past studies, this one was cross-sectional in nature. It is very difficult to study the relationship between Toxoplasma infection and personality using a longitudinal design. The frequency of Toxoplasma infection in adulthood is low, and thousands of participants would have to be recruited to find several dozen subjects who acquire the infection during the study. Until such a study is performed, any conclusion about the causality behind the correlation between the infection and human personality must be based only on analogies with animal models (Skallová et al., 2006; Hodková et al., 2007) or on the existence of a correlation between the length of infection and observed personality trait changes (Flegr et al., 1996, 2000) and must therefore be considered only provisional.

Further research is also needed to better clarify the extension and the implication of the associations we found between toxoplasmosis infection and political ideologies, and to clarify the role of sex differences.

We present a 72-year-old man with a unique profile of disorientation in time, such that he split each day into two 12-h intervals: He had two sets of breakfast, lunch, and dinner, hence the designated "split-day syndrome."

"Split-day syndrome," a patient with frontotemporal dementia who lives two days in the span of one: a case report and review of articles. Homa Pourriyahi,Mostafa Almasi-Dooghaee,Atefeh Imani,Taravat Vahedi & Babak Zamani. Behavior, Cognition and Neuroscience, Jul 28 2022. https://www.tandfonline.com/doi/abs/10.1080/13554794.2022.2105652

Abstract: Frontotemporal dementia (FTD) is among the most prevalent causes of young-onset dementia. Along with the frontotemporal and striate atrophy, dopamine dysregulation is also present in FTD. The dopamine system controls mechanisms of time perception. Its depletion can cause miscalculations in the perception of time. We present a 72-year-old man with a unique profile of disorientation in time, such that he split each day into two 12-h intervals. Through each 12-h period, he went about his daily activities as if a complete day had passed, e.g., he had two sets of breakfast, lunch, and dinner, hence the designated "split-day syndrome."



Our results provide little support for common models of solidarity and suggest that hazing may not be the social glue it has long been assumed to be

Does hazing actually increase group solidarity? Re-examining a classic theory with a modern fraternity. Aldo Cimino, Benjamin J. Thomas. Evolution and Human Behavior, July 29, 2022. https://doi.org/10.1016/j.evolhumbehav.2022.07.001

Abstract: Anthropologists and other social scientists have long suggested that severe initiations (hazing) increase group solidarity. Because hazing groups tend to be highly secretive, direct and on-site tests of this hypothesis in the real world are nearly non-existent. Using an American social fraternity, we report a longitudinal test of the relationship between hazing severity and group solidarity. We tracked six sets of fraternity inductees as they underwent the fraternity's months-long induction process. Our results provide little support for common models of solidarity and suggest that hazing may not be the social glue it has long been assumed to be.


Keywords: Hazing, Newcomers, Rites of passage, Fraternities


Sharing Online Content — Even Without Reading It — Inflates Subjective Knowledge

Ward, Adrian F. and Zheng, Frank and Broniarczyk, Susan M., I Share, Therefore I Know? Sharing Online Content — Even Without Reading It — Inflates Subjective Knowledge (June 9, 2022). SSRN: http://dx.doi.org/10.2139/ssrn.4132814

Abstract: Billions of people across the globe use social media to acquire and share information. A large and growing body of research examines how consuming online content affects what people know. The present research investigates a complementary, yet previously unstudied question: how might sharing online content affect what people think they know? We posit that sharing may inflate subjective knowledge through a process of internalized social behavior. Sharing signals expertise; thus, sharers can avoid conflict between their public and private personas by coming to believe that they are as knowledgeable as their posts make them appear. We examine this possibility in the context of “sharing without reading,” a phenomenon that allows us to isolate the effect of sharing on subjective knowledge from any influence of reading or objective knowledge. Six studies provide correlational (study 1) and causal (studies 2, 2a) evidence that sharing—even without reading—increases subjective knowledge, and test the internalization mechanism by varying the degree to which sharing publicly commits the sharer to an expert identity (studies 3-5). A seventh study investigates potential consequences of sharing-inflated subjective knowledge on downstream behavior.

Keywords: subjective knowledge, word of mouth, social media, self-perception


Introduction of Sharia law in northern Nigeria: Decreases in infant mortality thru increased vaccination rates, duration of breastfeeding and prenatal health care; there were also increases in primary school enrollment

Islamic Law and Investments in Children: Evidence from the Sharia Introduction in Nigeria. Marco Alfano. Journal of Health Economics, July 21 2022, 102660. https://doi.org/10.1016/j.jhealeco.2022.102660

Abstract: Islamic law lays down detailed rules regulating children’s upbringing. This study examines the effect of such rules on investments in children by analysing the introduction of Sharia law in northern Nigeria. Triple-differences estimates using temporal, geographical and religious variation together with large, representative survey data show decreases in infant mortality. Official government statistics further confirm improvements in survival. Findings also show that Sharia increased vaccination rates, duration of breastfeeding and prenatal health care. Evidence suggests that Sharia improved survival by specifying strict child protection laws and by formalising children’s duty to maintain their parents in old age or in sickness.


JEL: O15 J12 J13

Keywords: Breastfeeding, Infant Survival, Islam, Nigeria

5.3 Primary school enrolment
Panel C of table 2 reports the results pertaining to primary school enrolment. I use information contained in the household questionnaire to merge children to their mothers and select children born 1989 to 1998 (aged between 4 and 13 at the time of interview). In Nigeria, the school year starts in September. Accordingly, I redefine the year of birth and recode children born after September as being born in the following year. The sample consists of 6,125 children, who enrolled between the (school) years 1993/94 and 2002/03.
To calculate the age at which each child started school, I combine information on the years of education a child completed together with his or her age at interview. Only 4% of children aged 6 to 24 repeat a year of school and less than 0.1% of children in the same age bracket drop out (DHS Final Report, 2003). Since their school starting age cannot be precisely calculated, I omit these individuals from the analysis. In Nigeria, children should enrol in school at the age of 6. For the whole country in 2003, school enrolment was relatively low: 46% of girls and 41% of boys aged 6 to 9 have never attended school (DHS Final Report, 2003).
Despite official regulations, children in Nigeria enrol in school at various different ages. To illustrate this phenomenon, I select children in school born between 1989 and 1994 (i.e. children who were due to start school before the introduction of the Sharia) and plot the distribution of the ages at which they started school in figure 5. The solid graph relates to children residing in Sharia states, the dashed to children in the rest of the country. In both samples, less than a quarter of children, who enrol in school, do so at the age of six. Almost 40% start school before that age and around a third begin school aged 7 or older. To take account of the aforementioned variation in the age at which children start school together with the legal requirement to start school at the age of six, I define the dependent variable as taking the value 1 if a child entered school between the ages of 4 to 6. For children due to enter school before the introduction of the Sharia, 43% of children entered school between 4 and 6 years old.
The difference in differences estimates in panel C of table 2 indicate that in states that introduced the Sharia, the probability of school enrolment (aged 6 or younger) increased after the Sharia by 8 to 10 percentage points. As before, the effect is robust to various specifications (columns 1 to 3). In contrast to this, the probability of school enrolment before the age of 6 hardly changed in the rest of the country after the introduction of the Sharia. The triple differences estimates in column 5 suggest that the Sharia increased the probability of children enrolling in school between the ages 4 and 6 by around 15 percentage points. For the partitioned ethnicities sample, the parameter estimates are slightly larger, 22 percentage points.
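As a rough illustration of the identification strategy (this is a generic triple-differences specification with assumed notation, not the paper's equation 6), the estimates compare Muslims with Christians, Sharia with non-Sharia states, and cohorts due to enrol before versus after the Sharia's introduction:

```latex
% Generic triple-differences sketch; notation assumed, not quoted from the paper.
\begin{equation}
  y_{ist} = \beta \left( \text{Muslim}_{i} \times \text{Sharia}_{s} \times \text{Post}_{t} \right)
          + \text{all two-way interactions and main effects}
          + \varepsilon_{ist}
\end{equation}
```

Here y_{ist} would be an indicator for child i in state s and birth cohort t enrolling between ages 4 and 6, and the coefficient on the triple interaction corresponds to the roughly 15 (or 22) percentage-point effects reported above.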
Finally, I use information on the exact year of birth of children (as always adjusted for the September cut-off) to investigate whether changes in school enrolment occurred for children due to enter in the school year 2000/01. As before, I estimate the event study framework outlined in equation 6. The baseline sample in this case consists of children born in the school year 1989/90, i.e. children due to start school between 1993/94 and 1995/96, depending on whether they started school aged 4, 5 or 6. The results in panel a of figure 6 report the estimates for states that introduced the Sharia. For this sample, conditional differences between Muslims and Christians for children due to enter school before the introduction of the Sharia are similar to the base year. The estimates for γθ fluctuate around 0 and are not statistically significant. By contrast, for children due to enter school after the school year 2000/01, the point estimates increase in size and become statistically significant. Panel b shows that for the remainder of the country, the conditional differences between Muslims and Christians remain similar to the baseline year throughout the time period under consideration.
Columns 3 and 4 of table 3 show that the impact of the Sharia on primary school enrolment was slightly larger for girls than for boys. The parameter estimate for boys is around 12 percentage points (column 3). The corresponding figure for girls is around 22 percentage points (column 4). A possible explanation for this heterogeneity is connected with the pre-treatment means reported towards the top of table 3. For children due to enter school before the introduction of the Sharia, the proportion of boys entering school aged 4 to 6 was slightly higher than for girls (0.46 for the former and 0.39 for the latter). The Sharia explicitly states that young boys and girls should be treated equally. Parents following these rules should enrol boys and girls at the same rates. Combined with pre-existing disadvantages for girls, this change in behaviour would lead to a stronger effect for girls than for boys.

Instead of religious skepticism and a related increase in progressivism...: UFO sightings promote a more conservative worldview

Kitamura, Shuhei. 2022. “UFOs: The Political Economy of Unidentified Threats.” OSF Preprints. July 29. doi:10.31219/osf.io/tme8f

Abstract: In this paper, I study the effect of Unidentified Flying Objects (UFOs) on political outcomes in the United States. Exploiting a random variation in the visibility of UFOs in the sky, I find that UFO sightings before general elections between 2000-2016 increased the vote share of the Republican presidential candidates. I also find that UFO sightings led voters to believe that the government should increase federal spending on military defense and on technology and science, although the latter effect was marginal. The results indicate that voters regard UFOs as unidentified threats to national security that warrant further defense enhancements and scientific research.


Political candidates: More differentiation between positive than negative options; after exceeding a certain, relatively small level of negativity, people do not see any further increase in negativity

Is good more alike than bad? Positive-negative asymmetry in the differentiation between options. A study on the evaluation of fictitious political profiles. Magdalena Jablonska, Andrzej Falkowski and Robert Mackiewicz. Front. Psychol., July 28 2022. https://doi.org/10.3389/fpsyg.2022.923027


Abstract: Our research focuses on the perception of difference in the evaluations of positive and negative options. The literature provides evidence for two opposite effects: on the one hand, negative objects are said to be more differentiated (e.g., density hypothesis), on the other, people are shown to see greater differences between positive options (e.g., liking-breeds-differentiation principle). In our study, we investigated the perception of difference between fictitious political candidates, hypothesizing greater differences among the evaluations of favorable candidates. Additionally, we analyzed how positive and negative information affect candidate evaluation, predicting further asymmetries. In three experiments, participants evaluated various candidate profiles presented in a numeric and narrative manner. The evaluation tasks were designed as individual or joint assessments. In all three studies, we found more differentiation between positive than negative options. Our research suggests that after exceeding a certain, relatively small level of negativity, people do not see any further increase in negativity. The increase in positivity, on the other hand, is more gradual, with greater differentiation among positive options. Our findings are discussed in light of cognitive-experiential self-theory and density hypothesis.


General discussion

In our research we analyzed the perceived differences among the sets of favorable and unfavorable options. More specifically, the aim of our studies was to investigate how people see the difference between good and bad political candidates. Certainly, they would vote for the good ones and not vote for the bad, but how do they compare the good candidate to a better one, and the bad to a worse? We looked for the answers to these questions in three experiments. In Study 1, participants compared the similarity of fictitious candidates to the best possible candidate or the worst possible one. We did not provide descriptions of the best and the worst possible and instead asked the participants to imagine such political figures. On the basis of some preliminary research, we chose some positive and some negative features and used them to prepare descriptions of five different candidates: the very bad, the bad, the neutral, the good and the very good one. We presented their descriptions in a form of scales with negative and positive anchors. We used the same five descriptions and the same form of presentation in Study 2. This time, however, the participants not only assessed candidates' similarities to the best and to the worst possible politicians but also estimated the probability of voting for and the likeability of the candidates, and were asked to compare two profiles and decide how similar they were. We slightly changed the design in Study 3, in which we used narrative descriptions of the candidates. We conducted our research in the political setting, because candidate evaluation and selection is a process that many people at least occasionally undertake and which has important social, political and economic implications.

Our focus was on the differences between the evaluations of positive and negative candidates. The literature on differentiation provides evidence for two contradictory effects. On the one hand, negative information has been found to have more complex conceptual representations and lead to a wider response repertoire (Rozin and Royzman, 2001). Linguistic research and studies using spatial arrangement methods have also shown negative categories to be more diverse, with more words used to describe negative events and states (Rozin et al., 2010). Likewise, the proponents of density hypothesis (Unkelbach et al., 2008a) found that positive entities are more related (and thus denser) compared to their negative counterparts. On the other hand, literature provides convincing evidence for an opposite effect, that is a better differentiation between positive entities. For instance, Denrell (2005) found that people have more knowledge and more differentiated representations of liked than disliked social stimuli. In a similar vein, Smallman and others (Smallman et al., 2014; Smallman and Becker, 2017) have shown that people make finer evaluative distinctions when rating appealing than unappealing options.

Following this line of research, we assume better differentiation between positive, not negative, options to be the norm, especially when making evaluations of social objects or deciding which option to select. Thus, in our research we predicted that participants would be more likely to see the difference between favorable than unfavorable candidates. In our setting, that should result in different evaluations of the good and the best candidates, while the evaluations of the bad and the worst one should not differ (Hypothesis 1). We also predicted that additional information about the candidates would be more likely to change a candidate's image if the valence of the extra information is opposite to the current image. That is, if a candidate is already favorable, the new positive information might help him or her only to some degree, while negative information would significantly harm his or her image. On the contrary, when a candidate is presented in a negative manner, a new piece of negative information would not hurt him or her much, whereas an additional piece of positive information might be quite beneficial for the candidate's image (Hypothesis 2). Finally, drawing on two earlier hypotheses—on the better differentiation of positive options and an asymmetrical effect of additional positive and negative features—we formulated a hypothesis that joined together these two predictions, assuming that additional positive information would improve the evaluation of an already good candidate, whereas additional negative information would not harm a bad candidate profile (Hypothesis 3).

The results supported our hypotheses. In Study 1 and Study 2 we found that there were no differences in the evaluations of negative candidates: a candidate with an overall score of –24 and a candidate with an overall score of –48 (the numbers refer to the balance of the evaluations on six different dimensions) were perceived as equally bad. Still, the participants perceived candidates with overall scores of +24 and +48 as significantly different. The effect was replicated in Study 3, in which candidates were described in a narrative form. This result supports our Hypothesis 1. Importantly, whereas the results of Studies 1 and 3 provided only an indirect test of the hypothesized effect, Study 2 gave a direct test, as the participants saw both profiles together and were asked to assess their perceived similarity.

Our second research interest was to test how additional positive and negative pieces of information change candidate perception depending on candidate valence. As expected, positive features increased candidate evaluation, whereas negative ones decreased it, but these effects were not symmetrical, undermining the normative predictions of, for instance, the contrast model of similarity. This confirms our Hypothesis 2. Furthermore, we obtained mixed support for Hypothesis 3. The results of Study 1 and Study 2 showed that whereas adding negative features to a candidate's profile would not change his or her evaluation when this profile was already negative, additional positive features strengthened the image of an unfavorable candidate. However, we did not observe any effect of additional positive features in the evaluations of candidates whose images were presented in a narrative form in Study 3. One possible explanation is that the two additional positive characteristics carried less information (i.e., were less diagnostic) than their negative counterparts.

Overall, our findings suggest that people do not see much of a difference between political candidates with many negative features, regardless of the extent to which they are presented as bad. As it seems, at least in the political domain, if an overall evaluation goes below some standard, people do not differentiate between bad options. The effect may be attributed to different motivations in the processing of positive and negative options. If all available alternatives are unappealing, it does not really matter which one of them is worse. After all, they all seem equally bad and, indeed, why would anyone support a bad candidate? This was the case for assessing the similarity to an ideal or bad politician (Studies 1, 2, and 3) as well as liking and voting intention (Studies 2 and 3). Thus, regardless of their initial expectations, people would not vote for a politician if his or her features fall below a certain standard. One possibility that explains this effect is that they would not be able to justify their decision (Shafir et al., 1993).

Importantly, even the standards of "good" and "bad" are not symmetrical, so that it is relatively easy to be deemed inadequate for the post but rather difficult to be perceived as a good candidate. The effect was especially visible in Study 1 and 2, where there was a dramatic drop in the evaluation of unfavorable candidates, with extremely low, bottom values for candidates' similarity to an ideal politician and very high similarity to a bad politician. This extremity effect can partially account for the lack of differentiation between negative options. Still, the absence of differences between unfavorable candidate profiles, as predicted in Hypothesis 1, was also found in Study 3, where candidates were presented in a narrative manner and where evaluations were less extreme. Overall, the results of the three studies support our Hypothesis 1, in which we predicted that the evaluations of negative candidates should not differ significantly. However, if the judgment pertains to attractive options, then the decision which one of them is better gains in importance. As visible in our studies, there were significant differences between favorable candidates. Importantly, no ceiling effect was observed. Thus, the bottom effects observed for negative candidate profiles were not paralleled by a symmetrical ceiling effect for positive candidates, suggesting that the participants differentiated their answers when they thought such differentiations were appropriate, providing evidence for better differentiation between positive options.

The results may be explained with regard to the two independent information processing systems proposed by Epstein in his cognitive-experiential self-theory (Epstein, 1990; Kirkpatrick and Epstein, 1992). The evolutionarily older experiential system operates in an automatic and holistic manner, whereas the rational system is "a deliberative, verbally mediated, primarily conscious analytical system that functions by a person's understanding of conventionally established rules of logic and evidence" (Denes-Raj and Epstein, 1994, p. 819). It seems that whereas an intense dislike toward negative options is an outcome of the experiential system, a better and more discriminative analysis of positive options is governed by the rational system. The finding can also be interpreted with the distinction between sufficient and necessary conditions, where a necessary condition is one which must be present in order for the event to occur but it does not guarantee the event, while a sufficient condition is a condition that will produce the event. Thus, it seems that the list of necessary conditions to be deemed inadequate for the post is much shorter than the one for an ideal politician. Consequently, the standards for what it means to be good and bad are not symmetrical.

Our findings have important implications for the density hypothesis (Unkelbach et al., 2008a; Alves et al., 2016), according to which the distribution range of positivity is much narrower than the range of negativity. It seems reasonable to assume that an optimal spectrum is narrower than the negative one and, as shown in many empirical studies on the density hypothesis, that the inner structure of positive information is denser than the structure of negative entities. Still, in our opinion it does not imply a better differentiation between negative options. As our studies suggest, the structure of positive categories may be denser, but this density is accompanied by (or maybe is a reason for) a better discrimination between favorable options. After all, after rejecting all negative alternatives, people put in much effort to decide which of the remaining options is the best or at least acceptable—although the extent of this effort is moderated by decision importance and individual differences (e.g., the distinction between maximisers and satisficers; Schwartz et al., 2002). Thus, if the structure of positive entities is denser, it is likely that people use finer combs to disentangle it.

We are aware of some important drawbacks of our study. First, we did not investigate how people evaluate real candidates and, consequently, we did not take into account the importance of political views or the associations that some voters may feel for different political parties. This research direction should be taken up by other scholars. For instance, it is interesting to analyze how well people differentiate between candidates from their own party compared to members of the opposing party. Furthermore, the way we constructed our candidate profiles may pose certain limitations on the ecological validity of the study. Although the use of such profiles was justified by our intention to have maximal control over the analyzed stimuli, further studies should investigate more complex stimuli. Also, it is interesting to analyze how well people differentiate between options depending on the modality in which they were presented. For instance, in our studies we found that numerical candidate profiles were evaluated more extremely than candidates presented descriptively. Thus, presentation modality as well as the range of the positive and negative spectrum are further areas of research. Overall, our research provides valuable insight into positive-negative asymmetry with regard to a less-explored area: the differentiation between positive and negative options in the political setting. Contrary to the findings on the better differentiation between negative options, we find evidence for the opposite effect, showing that the evaluations of a few favorable objects are actually more nuanced.

Thursday, July 28, 2022

Over the past 14 years, Americans have become less explicitly and implicitly biased against people of different races, skin tones, or sexual preferences

Patterns of Implicit and Explicit Attitudes: IV. Change and Stability From 2007 to 2020. Tessa E. S. Charlesworth, Mahzarin R. Banaji. Psychological Science, July 27, 2022. https://doi.org/10.1177/09567976221084257

Abstract: Using more than 7.1 million implicit and explicit attitude tests drawn from U.S. participants to the Project Implicit website, we examined long-term trends across 14 years (2007–2020). Despite tumultuous sociopolitical events, trends from 2017 to 2020 persisted largely as forecasted from past data (2007–2016). Since 2007, all explicit attitudes decreased in bias between 22% (age attitudes) and 98% (race attitudes). Implicit sexuality, race, and skin-tone attitudes also continued to decrease in bias, by 65%, 26%, and 25%, respectively. Implicit age, disability, and body-weight attitudes, however, continued to show little to no long-term change. Patterns of change and stability were generally consistent across demographic groups (e.g., men and women), indicating widespread, macrolevel change. Ultimately, the data magnify evidence that (some) implicit attitudes reveal persistent, long-term change toward neutrality. The data also newly reveal the potential for short-term influence from sociopolitical events that temporarily disrupt progress toward neutrality, although attitudes eventually return to long-term homeostasis in trends.

Keywords: implicit attitude change, explicit attitude change, Implicit Association Test (IAT), long-term change, time-series analysis, autoregressive-integrated-moving-average (ARIMA) model, open data, open materials, preregistered.

Small effect, yet significant: The intergenerational transmission of sexual frequency

The intergenerational transmission of sexual frequency. Scott T. Yabiku & Lauren Newmyer. Biodemography and Social Biology, Jul 27 2022. https://www.tandfonline.com/doi/abs/10.1080/19485565.2022.2104691

Abstract: Intergenerational relationships are one of the most frequently studied topics in the social sciences. Within the area of family, researchers find intergenerational similarity in family behaviors such as marriage, divorce, and fertility. Yet less research has examined the intergenerational aspects of a key proximate determinant of fertility: sexual frequency. We use the National Survey of Families and Households to examine the relationship between the sexual frequency of parents and the sexual frequency of their children as adults. We link parental sexual frequency in 1987/1988, when children were ages 5–18, to the sexual frequency of those children in 2001–2003, when these grown children were ages 18–34. We find a modest yet significant association between parental and adult children's sexual frequency. A mechanism behind this association appears to be the higher likelihood of being in a union among children of parents with high sexual frequency.


Wednesday, July 27, 2022

The impact of time spent playing video games on well-being is probably too small to be subjectively noticeable and not credibly different from zero

Time spent playing video games is unlikely to impact well-being. Matti Vuorre et al. Royal Society Open Science. July 27 2022. https://doi.org/10.1098/rsos.220411

Abstract: Video games are a massively popular form of entertainment, socializing, cooperation and competition. Games' ubiquity fuels fears that they cause poor mental health, and major health bodies and national governments have made far-reaching policy decisions to address games’ potential risks, despite lacking adequate supporting data. The concern–evidence mismatch underscores that we know too little about games' impacts on well-being. We addressed this disconnect by linking six weeks of 38 935 players’ objective game-behaviour data, provided by seven global game publishers, with three waves of their self-reported well-being that we collected. We found little to no evidence for a causal connection between game play and well-being. However, results suggested that motivations play a role in players' well-being. For good or ill, the average effects of time spent playing video games on players’ well-being are probably very small, and further industry data are required to determine potential risks and supportive factors to health.

3.1. Effects between play and well-being over time

We then focused on our first research objective: determining the extent to which game play affects well-being. Scatterplots describing the associations between (lagged) hours played and well-being are shown in figure 3. The meta-analysis of play time and affect indicated that, on average, video game play had little to no effect on affect, with a 68% posterior probability of a positive effect (figure 4, top left). The 95% most likely effect sizes of a one-hour daily increase in play on the 13-point SPANE scale ([−0.09, 0.16]) indicated that the effect was not credibly different from zero: the magnitude and associated uncertainty of this effect suggest that there is little to no practical causal connection (given our assumptions described above) between game play in the preceding two weeks and current affect.
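As a rough consistency check of our own (not the authors' method): if the posterior for the play-time effect is approximated as normal, the reported 95% interval [−0.09, 0.16] implies a posterior probability of a positive effect of roughly 0.7, broadly in line with the reported 68%.

from scipy.stats import norm

# Normal approximation to the posterior implied by the reported 95% interval.
lo, hi = -0.09, 0.16
mean = (lo + hi) / 2            # ~0.035 units on the SPANE scale
sd = (hi - lo) / (2 * 1.96)     # ~0.064
p_positive = 1 - norm.cdf(0, loc=mean, scale=sd)
print(round(p_positive, 2))     # ~0.71, close to the reported 68%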

4. Discussion

Evidence about video games' potential impacts so far has suffered from several limitations, most notably inaccurate measurement and a lack of explicit, testable causal models. We aimed to remedy these shortcomings by pairing objective behavioural data with self-reports of psychological states. Across six weeks, seven games and 38 935 players, our results suggest that the most pronounced hopes and fears surrounding video games may be unfounded: time spent playing video games had limited if any impact on well-being. Similarly, well-being had little to no effect on time spent playing.

We conclude that the effects of playing are negligible because they are very unlikely to be large enough to be subjectively noticed. Anvari & Lakens [55] demonstrated that the smallest perceptible difference on PANAS, a scale similar to SPANE, was 0.20 (2%) on a 5-point Likert scale. In our study, a 1 h/day increase in play resulted in a 0.03-unit increase in well-being: assuming linearity and equidistant response categories, the average player would have to play 10 h more per day than typical to notice a change (i.e. 2% [0.26 units]) in well-being. Moreover, our model indicated a 99% probability that the effect of increasing daily play time by one hour on well-being is too small to be subjectively noticeable. Even if effects steadily accumulated over time (an unrealistic assumption), players would notice a difference only after 17 weeks.
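The perceptibility arithmetic can be reconstructed roughly as follows (a back-of-envelope sketch using only the figures quoted above; the variable names are ours, not the authors'):

# Back-of-envelope reconstruction of the noticeability argument.
scale_range = 13            # points on the SPANE affect scale
noticeable_fraction = 0.02  # ~2% of the scale, per Anvari & Lakens
effect_per_hour = 0.03      # estimated well-being change per extra hour of daily play

threshold = noticeable_fraction * scale_range   # ~0.26 units
extra_hours = threshold / effect_per_hour       # ~8.7, i.e. roughly 10 extra h/day
print(round(threshold, 2), round(extra_hours, 1))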

We also studied the roles of motivational experiences during play. Conceptually replicating previous cross-sectional findings [21], our results suggested that intrinsic motivation affects well-being positively and extrinsic motivation affects it negatively. Motivations' suggested effects were larger than those of play time, and we can be more confident in them. However, the effect of a 1-point deviation from a player's typical intrinsic motivation on affect did not reach the threshold of being subjectively noticeable (an estimate of 0.10 versus a threshold of 0.26). Similarly, we cannot be certain whether a 1-point increase is a large or a small shift; participants' average range on the 7-point intrinsic motivation scale was 0.36. Until future work determines what constitutes an adequate ‘treatment’, these conclusions remain open to future investigation and interpretation. Our findings, therefore, suggest that amount of play does not, on balance, undermine well-being. Instead, our results align with the perspective that the motivational experiences during play may influence well-being [23]. Simply put, the subjective qualities of play may be more important than its quantity. The extent to which this effect generalizes or is practically significant remains an open question.

4.1. Limitations

Although we studied the play and well-being of thousands of people across diverse games, our study barely scratched the surface of video game play more broadly. Hundreds of millions of players play tens of thousands of games on online platforms. We were only able to study seven games, and thus the generalizability of our findings is limited [29]. To truly understand why people play and to what effect, we need to study a broader variety of games, genres and players. Moreover, we analysed total game time, which is the broadest possible measure of play. Although it is necessary to begin at a broad level [12,56], future work must account for the situations, motivations and contexts in which people play [57]. Additionally, play time is a skewed variable because a minority of players spend a great amount of time playing. This means that the Gaussian assumptions of the RICLPM might be threatened, and future simulation work should investigate how the RICLPM deals with skewed data. We also emphasize that our conclusions regarding the causal nature of the observed associations are tentative: without theoretical and empirical identification of confounds, our and future studies will probably produce biased estimates. Finally, industry-provided behavioural data have their own measurement error and there are differences between publishers. Independent researchers must continue working with industry to better understand behavioural data and their limitations.

Is the role of sleep in memory consolidation overrated? It seems so.

Is the role of sleep in memory consolidation overrated? Mohammad Dastgheib et al. Neuroscience & Biobehavioral Reviews, July 26 2022, 104799. https://doi.org/10.1016/j.neubiorev.2022.104799

Highlights

• evidence of sleep-independent memory consolidation is reviewed

• plasticity mechanisms are active during sleep and wakefulness

• quiet waking is particularly conducive to plasticity induction and consolidation

• sleep is one among several behavioral states that allow for effective memory formation


Abstract: Substantial empirical evidence suggests that sleep benefits the consolidation and reorganization of learned information. Consequently, the concept of “sleep-dependent memory consolidation” is now widely accepted by the scientific community, in addition to influencing public perceptions regarding the functions of sleep. There are, however, numerous studies that have presented findings inconsistent with the sleep-memory hypothesis. Here, we challenge the notion of “sleep-dependency” by summarizing evidence for effective memory consolidation independent of sleep. Plasticity mechanisms thought to mediate or facilitate consolidation during sleep (e.g., neuronal replay, reactivation, slow oscillations, neurochemical milieu) also operate during non-sleep states, particularly quiet wakefulness, thus allowing for the stabilization of new memories. We propose that it is not sleep per se, but the engagement of plasticity mechanisms, active during both sleep and (at least some) waking states, that constitutes the critical factor determining memory formation. Thus, rather than playing a "critical" role, sleep falls along a continuum of behavioral states that vary in their effectiveness to support memory consolidation at the neural and behavioral level.


Keywords: Sleep, Memory consolidation, Electroencephalogram (EEG), Reactivation, Replay, Wakefulness, Synaptic plasticity