Ebert, T., Gebauer, J. E., Talman, J. R., & Rentfrow, P. J. (2020). Religious people only live longer in religious cultural contexts: A gravestone analysis. Journal of Personality and Social Psychology, Feb 2020. https://doi.org/10.1037/pspa0000187
Abstract: Religious people live longer than nonreligious people, according to a staple of social science research. Yet, are those longevity benefits an inherent feature of religiosity? To find out, we coded gravestone inscriptions and imagery to assess the religiosity and longevity of 6,400 deceased people from religious and nonreligious U.S. counties. We show that in religious cultural contexts, religious people lived 2.2 years longer than did nonreligious people. In nonreligious cultural contexts, however, religiosity conferred no such longevity benefits. Evidently, a longer life is not an inherent feature of religiosity. Instead, religious people only live longer in religious cultural contexts where religiosity is valued. Our study answers a fundamental question on the nature of religiosity and showcases the scientific potential of gravestone analyses.
Tuesday, February 11, 2020
Managing Systemic Financial Crises: New Lessons and Lessons Relearned
Managing Systemic Financial Crises: New Lessons and Lessons Relearned. Marina Moretti; Marc C Dobler; Alvaro Piris. IMF Departmental Paper No. 20/05, February 11, 2020. https://www.imf.org/en/Publications/Departmental-Papers-Policy-Papers/Issues/2020/02/10/Managing-Systemic-Financial-Crises-New-Lessons-and-Lessons-Relearned-48626
Chapter 1 Introduction
Systemic financial crises have been a recurring feature of economies in modern times. Panics, wherein collapsing trust in the banking system and creditor runs have significant negative consequences for economic activity—rare events in any one country—have occurred relatively frequently across the IMF membership. Common causes include high leverage, booming credit, an erosion of underwriting standards, exposure to rapidly rising property prices and other asset bubbles, excessive exposure to the government, inadequate supervision, and often a high external current account deficit. Financial distress typically lasts several years and is associated with large economic contractions and high fiscal costs (Laeven and Valencia 2018). Figure 1 shows the prevalence of systemic financial crises over the past 30 years, including the number of crisis episodes each year. The global financial crisis (GFC) was just such a panic, albeit one that transcended national and regional boundaries.
IMF staff experience in helping countries manage systemic banking crises has evolved over time. Major financial sector problems have been addressed in the context of IMF-supported programs primarily in emerging market economies, developing countries and, more recently, in some advanced economies during the GFC. The IMF approach to managing these events was summarized in a 2003 paper (Hoelscher and Quintyn 2003) before there was international consensus on legal frameworks, preparedness, and policy approaches, and when practices varied widely across the membership. The principles outlined in that paper built on staff experience in a range of countries—notably, Indonesia, Republic of Korea, Russia, and Thailand in the late 1990s; and Argentina, Ecuador, Turkey, and Uruguay in the early 2000s. It emphasized that managing a systemic banking crisis is a complex, multiyear process and presented tools available as part of a comprehensive framework for addressing systemic banking problems while minimizing taxpayers’ costs. Although these core concepts and principles remain largely valid today, they merit a revisit following the experiences and lessons learned from the GFC.
The GFC shared similarities with past systemic crises, albeit with an impact felt well beyond directly affected countries (Claessens and others 2010). As in previous episodes of financial distress, the countries most affected by the GFC—the US starting in 2008 and several countries in Europe—saw creditor runs and contagion across institutions, significant fiscal and quasi-fiscal outlays, and a sharp contraction in credit and economic activity (see Figure 1). The impact was more widely felt across the global economy because the crisis originated in advanced economies with large financial sectors. These countries accounted for a substantial portion of global economic output, trade, and financial activity, and were home to internationally active financial firms providing significant cross-border services. The speed of transmission of financial distress across borders was unprecedented, given the complex and opaque financial linkages between financial firms. These factors introduced new challenges, as they reduced the effectiveness of many existing crisis management tools.
Reflecting these new challenges, individual country responses during the GFC differed from past experiences in important respects (Table 1):
The size and scope of liquidity support provided by major central banks was unprecedented. More liquidity was provided to more counterparties for longer periods against a wider range of collateral. Much of this support was through liquidity facilities open to all market participants, while some was provided as emergency liquidity assistance (ELA) to individual institutions. This occurred against the backdrop of accommodative monetary policy and quantitative easing.
Explicit liability guarantees were more selectively deployed than in past crises, when blanket guarantees covering a wide set of liabilities were more commonly used by authorities. During the GFC (with some notable exceptions), explicit liability guarantees typically applied only to specific institutions, new debt issuance, specific asset classes, or were capped (for example, a higher level of deposit insurance). However, implicit guarantees were widespread, as demonstrated by the extensive public solvency support provided to financial institutions and markets. Systemic financial institutions were rarely liquidated or resolved,1 and, of those that were, some proved destabilizing for the broader financial system. This trend reflected in part inadequate powers to resolve such firms in an orderly way.
Difficulties in achieving effective cross-border cooperation in resolution between authorities in different countries came to the fore, given the global footprint of some weak institutions. The lack of mechanisms to enforce resolution measures on a cross-border basis and cooperate more broadly led, in some cases, to the breakup of cross-border groups into national components.
More emphasis was placed on banks’ ability to manage nonperforming assets internally or through market disposals, with less reliance on centralized asset management companies (AMCs)—public agencies that purchase and manage nonperforming loans (NPLs). Protracted weak growth in some countries, the large scale of the problem, and gaps in legal frameworks also meant that progress in addressing distressed assets and deleveraging private sector balance sheets was slower in some countries than in previous crises.
Table 1. Lessons on the Design of the Financial Safety Net
What is Similar? → What is New?
* Escalating early intervention and enforcement measures → More intrusive supervision and early intervention powers
* Special resolution regimes for banks → A new international standard on resolution regimes for systemic financial institutions, requiring a range of resolution powers and tools
* Establishing deposit insurance (if prior conditions enable)1 with adequate ex ante funding, available to fund resolution on a least-cost basis → An international standard on deposit insurance, requiring ex ante funding and no coinsurance; desirability of depositor preference
* Capacity to provide emergency liquidity to banks, at the discretion of the central bank → Liquidity assistance frameworks with broader eligibility conditions, collateral, and safeguards
1 IMF staff does not recommend establishing a deposit insurance system in countries with weak banking supervision, ineffective resolution regimes, and identifiably weak banks. Doing so would expose a nascent scheme to significant risk (when it has yet to build adequate funding and operational capacity) and could undermine depositor confidence.
The GFC was a watershed. Policymakers were confronted with the gaps and weaknesses in their legal and policy frameworks to address bank liquidity and solvency problems, their understanding of systemic risk in institutions and markets, and domestic and international cooperation. Under these constraints, the policy responses that were deployed put substantial public resources at risk. While ultimately successful in stabilizing financial systems and the macroeconomy, the fiscal and economic costs were high. The far-reaching impact of the GFC provided impetus for a major overhaul of financial sector oversight (Financial Stability Forum 2008; IMF 2018). The regulatory reform agenda agreed to by the Group of Twenty leaders in 2009 elevated the discussions to the highest policy level and kept international attention focused on establishing a stronger set of globally consistent rules. The new architecture aimed to (1) enhance capital buffers and reduce leverage and financial procyclicality; (2) contain funding mismatches and currency risk; (3) enhance the regulation and supervision of large and interconnected institutions, including by expanding the supervisory perimeter; (4) improve the supervision of a complex financial system; (5) align governance and compensation practices of banks with prudent risk taking; (6) overhaul resolution regimes of large financial institutions; and (7) introduce macroprudential policies. Through its multilateral and bilateral surveillance of its membership, including the Financial Sector Assessment Program (FSAP), Article IV missions, and its Global Financial Stability Reports, the IMF has contributed to implementing the regulatory reform agenda.
This paper summarizes the general principles, strategies, and techniques for preparing for and managing systemic banking crises, based on the views and experience of IMF staff, considering developments since the GFC. The paper does not summarize the causes of the GFC, its evolution, or the policy responses adopted; these have been well documented elsewhere.2 Moreover, it does not cover the full reform agenda since the crisis; rather, it covers only two parts—one on key elements of a legal and operational framework for crisis preparedness (the “financial safety net”) and the other on operational strategies and techniques to manage systemic crises if they occur. Each section summarizes relevant lessons learned during the GFC and other recent episodes of financial distress, merging them with preexisting advice to give a complete picture of the main elements of IMF staff advice to member countries on operational aspects of crisis preparedness and management. The advice builds on and is consistent with international financial standards, tailored to country-specific circumstances based on IMF staff crisis experience. The advice recognizes that every crisis is different and that managing systemic failures is exceptionally challenging, both operationally and politically. Nonetheless, better-prepared authorities are less likely to resort to bailing out bank shareholders and creditors when facing such circumstances.
Part I, on crisis preparedness, outlines the design and operational features of a well-designed financial safety net. It discusses how staff advice on these issues has evolved, drawing from the international standards and good practice that emerged in the aftermath of the GFC. Effective financial safety nets play an important role in minimizing the risk of systemwide financial distress—by increasing the likelihood that failing financial institutions can be resolved without triggering financial instability. However, they cannot eliminate that risk, particularly at times of severe stress.
Part II, on crisis management, discusses aspects of a policy response to a full-blown banking crisis. It details the evolution of IMF advice in light of what worked well—or less well—during the GFC, reflecting the experience of IMF staff in actual crisis situations. The narrative is organized around policies for dealing with three distinct aspects3 of systemic banking crisis:
* Containment—strategies and techniques to stem creditor runs and stabilize financial sector liquidity in the acute phase of panic and high uncertainty. This phase is typically short-lived, with an escalating policy response as needed to avoid the collapse of the financial system.
* Restructuring and resolution—strategies and techniques to diagnose bank soundness and viability, and to recapitalize or resolve failing financial institutions, which are typically implemented over the following year or more, depending on the severity of the situation.
* Dealing with distressed assets—strategies and techniques to clean up private sector balance sheets that first identify and then remove impediments to effective resolution of distressed assets, with implementation likely to stretch over several years.
IMF member countries have continued to cope with financial panics and widespread financial sector weakness. The IMF remains fully engaged on these issues, often in the context of IMF-supported programs, with a significant focus on managing systemic problems and financial sector reforms. Staff continue to provide support and advice on supervisory practice, resolution, deposit insurance, and emergency liquidity in IMF member countries, learning from experience and adapting policy advice to developments and country-specific circumstances.
Box 9. Dealing with Excessive Related-Party Exposures
Excessive related-party exposures present a major risk to financial stability. Related-party loans that go unreported conceal credit and concentration risk and may be on preferred terms, reducing bank profitability and solvency. Persistently high related-party exposures also hold down economic growth by tying up capital that could otherwise be used to provide lending to legitimate, creditworthy businesses on an arm’s-length basis. Related-party exposures complicate bank resolution, as shareholders whose rights have been suspended have an incentive to default on their loans to the bank.
Opaque bank ownership greatly facilitates the hiding of related-party exposures and transactions. Opaque ownership is associated with poor governance, AML/CFT violations, and fraudulent activities. Banks without clear ultimate beneficial owners cannot count on shareholder support in times of crisis, and the quality of their capital cannot be verified. Moreover, unknown owners cannot be held accountable for criminal actions leading to a bank’s failure.
Resolving these problems requires a three-pillar approach. Legal reforms are needed to lay the foundation for targeted bank diagnostics and effective enforcement actions:
* Legal reforms to introduce international standards for transparent disclosure and monitoring of bank owners and related parties—including prudent limits, strict conflict-of-interest rules on the processes and procedures for dealing with related parties, and escalating enforcement measures. Non-transparent ownership should be made a legal ground for license revocation or resolution, and the supervisor authorized to presume a related party under certain circumstances. This shifts the “burden of proof” from supervisors to banks—to demonstrate that a suspicious transaction is not with a related party.
* Bank diagnostics are targeted at identifying ultimate beneficial owners and related-party exposures and transactions and assessing compliance with prudential lending limits for related-party and large exposures. The criteria for identification include control, economic dependency, and acting in concert. Identification of related-party transactions should also consider their risk-related features, such as the existence of preferential terms, the quality of documentation, and internal controls over the transactions.
* Enforcement actions are taken to (1) remove unsuitable bank shareholders—that is, shareholders whose ultimate beneficial owner is not identified, or are otherwise found to be unsuitable; and (2) unwind excessive related-party exposures through repayment or disposal of the exposure, or resolution of the relationship (change in ownership of the bank or the borrower).
The three-pillar approach is best implemented in the context of a comprehensive financial sector strategy. There may not be enough time to implement legal reforms during early intervention or the resolution of systemic banks. In such situations, suspected related-party exposures and liabilities must be swiftly identified and ringfenced. Once the system is stabilized, however, the three-pillar approach should be implemented for all banks (including those in liquidation).
Source: Karlsdóttir and others (forthcoming).
Those who share our musical taste are likely to be regarded as in-group members and will be subject to in-group favoritism according to our self-esteem and how strongly we identify with our fellow music fans
Musical taste, in-group favoritism, and social identity theory: Re-testing the predictions of the self-esteem hypothesis. Adam J Lonsdale. Psychology of Music, February 10, 2020. https://doi.org/10.1177/0305735619899158
Abstract: Musical taste is thought to function as a social “badge” of group membership, contributing to an individual’s sense of social identity. Following from this, social identity theory predicts that individuals should perceive those who share their musical tastes more favorably than those who do not. Social identity theory also asserts that this in-group favoritism is motivated by the need to achieve, maintain, or enhance a positive social identity and self-esteem (i.e., the “self-esteem hypothesis”). The findings of the present study supported both of these predictions. Participants rated fans of their favorite musical style significantly more favorably than fans of their least favorite musical style. The present findings also offer, for the first time, evidence of significant positive correlations between an individual’s self-esteem and the in-group bias shown to those who share their musical tastes. However, significant relationships with in-group identification also indicate that self-esteem is unlikely to be the sole factor responsible for this apparent in-group bias. Together these findings suggest that those who share our musical taste are likely to be regarded as in-group members and will be subject to in-group favoritism according to our self-esteem and how strongly we identify with our fellow music fans.
Keywords: in-group bias, in-group favoritism, musical taste, self-esteem, social identity
The higher the participants rated their own IQ, the higher their own ratings of EQ (EmotionalQ), attractiveness, and health; men overestimated more their IQ, attractiveness & health than women did, but not their EQ
Correlates of Self-Estimated Intelligence. Adrian Furnham and Simmy Grover. J. Intell. 2020, 8(1), 6; February 10 2020. https://www.mdpi.com/2079-3200/8/1/6
Abstract: This paper reports two studies examining correlates of self-estimated intelligence (SEI). In the first, 517 participants completed a measure of SEI as well as self-estimated emotional intelligence (SEEQ), physical attractiveness, health, and other ratings. Males rated their IQ higher (74.12 vs. 71.55) but EQ lower (68.22 vs. 71.81) than females, but there were no differences in their ratings of physical health in Study 1. Correlations showed for all participants that the higher they rated their IQ, the higher their ratings of EQ, attractiveness, and health. A regression of self-estimated intelligence onto three demographic, three self-ratings and three beliefs factors accounted for 30% of the variance. Religious, educated males who did not believe in alternative medicine gave higher SEI scores. The second study partly replicated the first, with an N = 475. Again, males rated their IQ higher (106.88 vs. 100.71) than females, but no difference was found for EQ (103.16 vs. 103.74). Males rated both their attractiveness (54.79 vs. 49.81) and health (61.24 vs. 55.49) higher than females. An objective test-based cognitive ability and SEI were correlated r = 0.30. Correlations showed, as in Study 1, positive relationships between all self-ratings. A regression showed the strongest correlates of SEI were IQ, sex and positive self-ratings. Implications and limitations are noted.
Keywords: self-estimated; intelligence; sex differences; attitudes
Non-reproducible: About a decade ago, a study documented that conservatives have stronger physiological responses to threatening stimuli than liberals
Conservatives and liberals have similar physiological responses to threats. Bert N. Bakker, Gijs Schumacher, Claire Gothreau & Kevin Arceneaux. Nature Human Behaviour, February 10 2020. https://www.nature.com/articles/s41562-020-0823-z
Abstract: About a decade ago, a study documented that conservatives have stronger physiological responses to threatening stimuli than liberals. This work launched an approach aimed at uncovering the biological roots of ideology. Despite wide-ranging scientific and popular impact, independent laboratories have not replicated the study. We conducted a pre-registered direct replication (n = 202) and conceptual replications in the United States (n = 352) and the Netherlands (n = 81). Our analyses do not support the conclusions of the original study, nor do we find evidence for broader claims regarding the effect of disgust and the existence of a physiological trait. Rather than studying unconscious responses as the real predispositions, alignment between conscious and unconscious responses promises deeper insights into the emotional roots of ideology.
People rated their own faces as more attractive than others rated them, no matter if original or artificially rendered more masculine or feminine
Influence of sexual dimorphism on the attractiveness evaluation of one’s own face. Zhaoyi Li, Zhiguo Hu, Hongyan Liu. Vision Research, Volume 168, March 2020, Pages 1-8. https://doi.org/10.1016/j.visres.2020.01.005
Abstract: The present study aimed to explore the influence of sexual dimorphism on the evaluation of the attractiveness of one’s own face. In the experiment, a masculinized and a feminized version of the self-faces of the participants were obtained by transferring the original faces toward the average male or female face. The participants were required to rate the attractiveness of three types (original, masculine, feminine) of their own faces and the other participants’ faces in same-sex and opposite-sex contexts. The results revealed that the participants rated their own faces as more attractive than other participants rated them regardless of the sexually dimorphic type (original, masculine, feminine) or the evaluation context. More importantly, the male and female participants showed different preferences for the three types of self-faces. Specifically, in the same-sex context, the female participants rated their own original faces as significantly more attractive than the masculine and feminine faces, and the male participants rated their own masculine faces as significantly more attractive than the feminine faces; while in the opposite-sex context, no significant difference among the attractiveness scores of the three types of self-faces was found in both the male and female participants. The present study provides empirical evidence of the influence of sexual dimorphism on the evaluation of the attractiveness of self-faces.
We examined perceptions of the Dark Triad traits in 6 occupations; participants believed musicians & lawyers should be high in the Dark Triad, and teachers should be high in narcissism, but low in Machiavellianism & psychopathy
Insert a joke about lawyers: Evaluating preferences for the Dark Triad traits in six occupations. Cameron S. Kay, Gerard Saucier. Personality and Individual Differences, Volume 159, 1 June 2020, 109863. https://doi.org/10.1016/j.paid.2020.109863
Highlights
• We examined perceptions of the Dark Triad traits in six occupations.
• Participants believed musicians and lawyers should be high in the Dark Triad.
• Participants believed teachers should be high in narcissism.
• Overall, participants believed others should have the same dark traits they have.
Abstract: The current research examined how perceptions of the Dark Triad traits vary across occupations. Results from two studies (total N = 933) suggested that participants believe it is acceptable, if not advantageous, for lawyers and musicians to be high in the Dark Triad traits. Participants, likewise, indicated that teachers should be high in narcissism but low in Machiavellianism and psychopathy. Potentially, the performative aspects of narcissism are considered an asset for teachers, while Machiavellianism and psychopathy are considered a liability. The findings further indicated that, regardless of the occupation in question, people high in a specific Dark Triad trait believe others should also be high in that same trait. All results are considered in the context of the attraction-selection-attrition model.
Cultured meat safety: Unlike conventional meat, cultured muscle cells may be safer, without any adjacent digestive organs; but with this high level of cell multiplication, some dysregulation is likely as happens in cancer cells
The Myth of Cultured Meat: A Review. Sghaier Chriki and Jean-François Hocquette. Front. Nutr., February 7 2020. https://doi.org/10.3389/fnut.2020.00007
Abstract: To satisfy the increasing demand for food by the growing human population, cultured meat (also called in vitro, artificial or lab-grown meat) is presented by its advocates as a good alternative for consumers who want to be more responsible but do not wish to change their diet. This review aims to update the current knowledge on this subject by focusing on recent publications and issues not well described previously. The main conclusion is that no major advances were observed despite many new publications. Indeed, in terms of technical issues, research is still required to optimize cell culture methodology. It is also almost impossible to reproduce the diversity of meats derived from various species, breeds and cuts. Although these are not yet known, we speculated on the potential health benefits and drawbacks of cultured meat. Unlike conventional meat, cultured muscle cells may be safer, without any adjacent digestive organs. On the other hand, with this high level of cell multiplication, some dysregulation is likely as happens in cancer cells. Likewise, the control of its nutritional composition is still unclear, especially for micronutrients and iron. Regarding environmental issues, the potential advantages of cultured meat for greenhouse gas emissions are a matter of controversy, although less land will be used compared to livestock, ruminants in particular. However, more criteria need to be taken into account for a comparison with current meat production. Cultured meat will have to compete with other meat substitutes, especially plant-based alternatives. Consumer acceptance will be strongly influenced by many factors and consumers seem to dislike unnatural food. Ethically, cultured meat aims to use considerably fewer animals than conventional livestock farming. However, some animals will still have to be reared to harvest cells for the production of in vitro meat. 
Finally, we discussed in this review the nebulous status of cultured meat from a religious point of view. Indeed, religious authorities are still debating the question of whether in vitro meat is Kosher or Halal (e.g., compliant with Jewish or Islamic dietary laws).
---
Health and Safety
Advocates of in vitro meat claim that it is safer than conventional meat, based on the fact that lab-grown meat is produced in an environment fully controlled by researchers or producers, without any other organism, whereas conventional meat is part of an animal in contact with the external world, although each tissue (including muscles) is protected by the skin and/or by mucosa. Indeed, without any digestive organs nearby (despite the fact that conventional meat is generally protected from this), and therefore without any potential contamination at slaughter, cultured muscle cells do not have the same opportunity to encounter intestinal pathogens such as E. coli, Salmonella or Campylobacter (10), three pathogens that are responsible for millions of episodes of illness each year (19). However, we can argue that scientists or manufacturers are never in a position to control everything and any mistake or oversight may have dramatic consequences in the event of a health problem. This occurs frequently nowadays during industrial production of chopped meat.
Another positive aspect related to the safety of cultured meat is that it is not produced from animals raised in a confined space, so that the risk of an outbreak is eliminated and there is no need for costly vaccinations against diseases like influenza. On the other hand, we can argue that it is the cells, not the animals, which live in high numbers in incubators to produce cultured meat. Unfortunately, we do not know all the consequences of meat culture for public health, as in vitro meat is a new product. Some authors argue that the process of cell culture is never perfectly controlled and that some unexpected biological mechanisms may occur. For instance, given the great number of cell multiplications taking place, some dysregulation of cell lines is likely to occur as happens in cancer cells, although we can imagine that deregulated cell lines can be eliminated for production or consumption. This may have unknown potential effects on the muscle structure and possibly on human metabolism and health when in vitro meat is consumed (21).
Antibiotic resistance is known as one of the major problems facing livestock (7). In comparison, cultured meat is kept in a controlled environment and close monitoring can easily stop any sign of infection. Nevertheless, if antibiotics are added to prevent any contamination, even occasionally to stop early contamination and illness, this argument is less convincing.
Moreover, it has been suggested that the nutritional content of cultured meat can be controlled by adjusting fat composites used in the medium of production. Indeed, the ratio between saturated fatty acids and polyunsaturated fatty acids can be easily controlled. Saturated fats can be replaced by other types of fats, such as omega-3, but the risk of higher rancidity has to be controlled. However, new strategies have been developed to increase the content of omega-3 fatty acids in meat using current livestock farming systems (23). In addition, no strategy has been developed to endow cultured meat with certain micronutrients specific to animal products (such as vitamin B12 and iron) and which contribute to good health. Furthermore, the positive effect of any (micro)nutrient can be enhanced if it is introduced in an appropriate matrix. In the case of in vitro meat, it is not certain that the other biological compounds and the way they are organized in cultured cells could potentiate the positive effects of micronutrients on human health. Uptake of micronutrients (such as iron) by cultured cells has thus to be well understood. We cannot exclude a reduction in the health benefits of micronutrients due to the culture medium, depending on its composition. And adding chemicals to the medium makes cultured meat more “chemical” food with less of a clean label.
Monday, February 10, 2020
Mexican drug cartels: We see a positive connection between cartel presence & better socioeconomic outcomes at the municipality level; results help understand why drug lords have great support in the communities in which they operate
Following the poppy trail: Origins and consequences of Mexican drug cartels. Tommy E. Murphy, Martín A. Rossi. Journal of Development Economics, Volume 143, March 2020, 102433. https://doi.org/10.1016/j.jdeveco.2019.102433
Highlights
• We study the origins, and economic and social consequences of Mexican drug cartels.
• The location of current cartels is strongly linked to the location of Chinese migration at the beginning of the 20th century.
• We report a positive connection between cartel presence and better socioeconomic outcomes at the municipality level.
• Our results help to understand why drug lords have great support in the local communities in which they operate.
Abstract: This paper studies the origins, and economic and social consequences of some of the most prominent drug trafficking organizations in the world: the Mexican cartels. It first traces the current location of cartels to the places where Chinese migrated at the beginning of the 20th century, discussing and documenting how both events are strongly connected. Information on Chinese presence at the beginning of the 20th century is then used to instrument for cartel presence today, to identify the effect of cartels on society. Contrary to what seems to happen with other forms of organized crime, the IV estimates in this study indicate that at the local level there is a positive link between cartel presence and better socioeconomic outcomes (e.g. lower marginalization rates, lower illiteracy rates, higher salaries), better public services, and higher tax revenues, evidence that is consistent with the known stylized fact that drug lords tend to have great support in the local communities in which they operate.
JEL classification: N36, O15
Increasingly, evidence suggests aggressive video games have little impact on player behavior in the realm of aggression and violence, but most professional guild policy statements failed to reflect these data
Aggressive Video Games Research Emerges from its Replication Crisis (Sort of). Christopher J Ferguson. Current Opinion in Psychology, February 10 2020. https://doi.org/10.1016/j.copsyc.2020.01.002
Highlights
• Previous research on aggressive video games (AVGs) suffered from high false positive rates.
• New, preregistered studies suggest AVGs have little impact on player aggression.
• Prior meta-analyses overestimated the evidence for effects.
• Professional guild statements by the American Psychological Association and American Academy of Pediatrics are inaccurate.
• Consumers may not mimic behaviors seen in fictional media.
Abstract: The impact of aggressive video games (AVGs) on aggression and violent behavior among players, particularly youth, has been debated for decades. In recent years, evidence for publication bias, questionable researcher practices, citation bias and poor standardization of many measures and research designs has indicated that the false positive rate among studies of AVGs has been high. Several studies have undergone retraction. A small recent wave of preregistered studies has largely returned null results for outcomes related to youth violence as well as outcomes related to milder aggression. Increasingly, evidence suggests AVGs have little impact on player behavior in the realm of aggression and violence. Nonetheless, most professional guild policy statements (e.g. American Psychological Association) have failed to reflect these changes in the literature. Such policy statements should be retired or revised lest they misinform the public or do damage to the reputation of these organizations.
The Nuclear Family Was a Mistake: Loneliness, lack of support, fragility
The Nuclear Family Was a Mistake. David Brooks. The Atlantic. Mar 2020. https://www.theatlantic.com/magazine/archive/2020/03/the-nuclear-family-was-a-mistake/605536/
The family structure we’ve held up as the cultural ideal for the past half century has been a catastrophe for many. It’s time to figure out better ways to live together.
Excerpts:
This is the story of our times—the story of the family, once a dense cluster of many siblings and extended kin, fragmenting into ever smaller and more fragile forms. The initial result of that fragmentation, the nuclear family, didn’t seem so bad. But then, because the nuclear family is so brittle, the fragmentation continued. In many sectors of society, nuclear families fragmented into single-parent families, single-parent families into chaotic families or no families.
If you want to summarize the changes in family structure over the past century, the truest thing to say is this: We’ve made life freer for individuals and more unstable for families. We’ve made life better for adults but worse for children. We’ve moved from big, interconnected, and extended families, which helped protect the most vulnerable people in society from the shocks of life, to smaller, detached nuclear families (a married couple and their children), which give the most privileged people in society room to maximize their talents and expand their options. The shift from bigger and interconnected extended families to smaller and detached nuclear families ultimately led to a familial system that liberates the rich and ravages the working class and the poor.
...
Ever since I started working on this article, a chart has been haunting me [https://www.pewforum.org/2019/12/12/religion-and-living-arrangements-around-the-world/pf_12-12-19_religion-households-00-02/]. It plots the percentage of people living alone in a country against that nation’s GDP. There’s a strong correlation. Nations where a fifth of the people live alone, like Denmark and Finland, are a lot richer than nations where almost no one lives alone, like the ones in Latin America or Africa. Rich nations have smaller households than poor nations. The average German lives in a household with 2.7 people. The average Gambian lives in a household with 13.8 people.
That chart suggests two things, especially in the American context. First, the market wants us to live alone or with just a few people. That way we are mobile, unattached, and uncommitted, able to devote an enormous number of hours to our jobs. Second, when people who are raised in developed countries get money, they buy privacy.
For the privileged, this sort of works. The arrangement enables the affluent to dedicate more hours to work and email, unencumbered by family commitments. They can afford to hire people who will do the work that extended family used to do. But a lingering sadness lurks, an awareness that life is emotionally vacant when family and close friends aren’t physically present, when neighbors aren’t geographically or metaphorically close enough for you to lean on them, or for them to lean on you. Today’s crisis of connection flows from the impoverishment of family life.
I often ask African friends who have immigrated to America what most struck them when they arrived. Their answer is always a variation on a theme—the loneliness. It’s the empty suburban street in the middle of the day, maybe with a lone mother pushing a baby carriage on the sidewalk but nobody else around.
For those who are not privileged, the era of the isolated nuclear family has been a catastrophe. It’s led to broken families or no families; to merry-go-round families that leave children traumatized and isolated; to senior citizens dying alone in a room. All forms of inequality are cruel, but family inequality may be the cruelest. It damages the heart. Eventually family inequality even undermines the economy the nuclear family was meant to serve: Children who grow up in chaos have trouble becoming skilled, stable, and socially mobile employees later on.
Human populations vary substantially & unexpectedly in both the range and pattern of facial sexually dimorphic traits; European & South American populations display larger levels of facial sexual dimorphism than African populations
Kleisner, Karel, Petr Tureček, S. Craig Roberts, Jan Havlicek, Jaroslava V. Valentova, Robert M. Akoko, Juan David Leongómez, et al. 2020. “How and Why Patterns of Sexual Dimorphism in Human Faces Vary Across the World.” PsyArXiv. February 10. doi:10.31234/osf.io/7vdm
Abstract: Sexual selection, including mate choice and intrasexual competition, is responsible for the evolution of some of the most elaborated and sexually dimorphic traits in animals. Although there is clear sexual dimorphism in the shape of human faces, it is not clear whether this is similarly due to mate choice, or whether mate choice affects only part of the facial shape difference between men and women. Here we explore these questions by investigating patterns of both facial shape and facial preference across a diverse set of human populations. We find evidence that human populations vary substantially and unexpectedly in both the range and pattern of facial sexually dimorphic traits. In particular, European and South American populations display larger levels of facial sexual dimorphism than African populations. Neither cross-cultural differences in facial shape variation, differences in body height between sexes, nor differing preferences for facial sex-typicality across countries, explain the observed patterns of facial dimorphism. Altogether, the association between morphological sex-typicality and attractiveness is moderate for women and weak (or absent) for men. Analysis that distinguishes between allometric and non-allometric components reveals that non-allometric sex-typicality is preferred in women’s faces but not in faces of men. This might be due to different regimes of ongoing sexual selection acting on men, such as stronger intersexual selection for body height and more intense intrasexual physical competition, compared with women.
Caffeine improved global processing, without effect on local information processing, alerting, spatial attention & executive or phonological functions; also was accompanied by faster text reading speed of meaningful sentences
Caffeine improves text reading and global perception. Sandro Franceschini et al. Journal of Psychopharmacology, October 3, 2019. https://doi.org/10.1177/0269881119878178
Abstract
Background: Reading is a unique human skill. Several brain networks involved in this complex skill mainly involve the left hemisphere language areas. Nevertheless, nonlinguistic networks found in the right hemisphere also seem to be involved in sentence and text reading. These areas do not deal with phonological information, but are involved in verbal and nonverbal pattern information processing. The right hemisphere is responsible for global processing of a scene, which is needed for developing reading skills.
Aims: Caffeine seems to affect global pattern processing specifically. Consequently, our aim was to discover if it could enhance text reading skill.
Methods: In two mechanistic studies (n=24 and n=53), we tested several reading skills, global and local perception, alerting, spatial attention and executive functions, as well as rapid automatised naming and phonological memory, using a double-blind, within-subjects, repeated-measures design in typical young adult readers.
Results: A single dose of 200 mg caffeine improved global processing, without any effect on local information processing, alerting, spatial attention and executive or phonological functions. This improvement in global processing was accompanied by faster text reading speed of meaningful sentences, whereas single word/pseudoword or pseudoword text reading abilities were not affected. These effects of caffeine on reading ability were enhanced by mild sleep deprivation.
Conclusions: These findings show that a small quantity of caffeine could improve global processing and text reading skills in adults.
Keywords: Visual perception, reading enhancement, parallel processing, psychostimulant, context processing
Check also Zabelina, Darya, and Paul Silvia. 2020. “Percolating Ideas: The Effects of Caffeine on Creative Thinking and Problem Solving.” PsyArXiv. February 9. https://www.bipartisanalliance.com/2020/02/those-who-consumed-200-mg-of-caffeine.html
And Surprise: Consuming 1–5 cups of coffee/day was related to lower mortality among never smokers; they forgot to discount/adjust for pack-years of smoking, healthy & unhealthy foods, & added sugar
Dietary research on coffee: Improving adjustment for confounding. David R Thomas, Ian D Hodges. Current Developments in Nutrition, nzz142, December 26 2019. https://www.bipartisanalliance.com/2019/12/surprise-consuming-15-cups-of-coffeeday.html
And Inverse association between caffeine intake and depressive symptoms in US adults: data from National Health and Nutrition Examination Survey (NHANES) 2005–2006. Sohrab Iranpour, Siamak Sabour. Psychiatry Research, Nov 2018. https://doi.org/10.1016/j.psychres.2018.11.004
Unbearable psychological pain and hopelessness are overwhelmingly important motivations for suicidal behavior, both for men and women
Motivations for Suicide: Converging Evidence from Clinical and Community Samples. Alexis M. May, Mikayla C. Pachkowski, E. David Klonsky. Journal of Psychiatric Research, February 10 2020. https://doi.org/10.1016/j.jpsychires.2020.02.010
Highlights
• Unbearable psychological pain and hopelessness are overwhelmingly important motivations for suicidal behavior.
• Regardless of the time since attempt, pain and hopelessness were critical motivations.
• Pain and hopelessness were the strongest attempt motivations for both men and women.
• The Inventory of Motivations for Suicide Attempts (IMSA) quickly assesses individual motivations.
Abstract: Understanding what motivates suicidal behavior is critical to effective prevention and clinical intervention. The Inventory of Motivations for Suicide Attempts (IMSA) is a self-report measure developed to assess a wide variety of potential motivations for suicide. The purpose of this study is to examine the measure’s psychometric and descriptive properties in two distinct populations: 1) adult psychiatric inpatients (n = 59) with recent suicide attempts (median of 3 days prior) and 2) community participants assessed online (n = 222) who had attempted suicide a median of 5 years earlier. Findings were very similar across both samples and consistent with initial research on the IMSA in outpatients and undergraduates who had attempted suicide. First, the individual IMSA scales demonstrated good internal reliability and were well represented by a two-factor superordinate structure: 1) Internal Motivations and 2) Communication Motivations. Second, in both samples unbearable mental pain and hopelessness were the most common and strongly endorsed motivations, while interpersonal influence was the least endorsed. Finally, motivations were similar in men and women, a pattern that previous work was not in a position to examine. Taken together with previous work, findings suggest that the nature, structure, and clinical correlates of suicide attempt motivations remain consistent across diverse individuals and situations. The IMSA may serve as a useful tool in both research and clinical contexts to quickly assess individual suicide attempt motivations.
Minimal Relationship between Local Gyrification (wrinkles in the cerebral cortex) and General Cognitive Ability in Humans
Minimal Relationship between Local Gyrification and General Cognitive Ability in Humans. Samuel R Mathias et al. Cerebral Cortex, bhz319, February 9 2020. https://doi.org/10.1093/cercor/bhz319
Abstract: Previous studies suggest that gyrification is associated with superior cognitive abilities in humans, but the strength of this relationship remains unclear. Here, in two samples of related individuals (total N = 2882), we calculated an index of local gyrification (LGI) at thousands of cortical surface points using structural brain images and an index of general cognitive ability (g) using performance on cognitive tests. Replicating previous studies, we found that phenotypic and genetic LGI–g correlations were positive and statistically significant in many cortical regions. However, all LGI–g correlations in both samples were extremely weak, regardless of whether they were significant or nonsignificant. For example, the median phenotypic LGI–g correlation was 0.05 in one sample and 0.10 in the other. These correlations were even weaker after adjusting for confounding neuroanatomical variables (intracranial volume and local cortical surface area). Furthermore, when all LGIs were considered together, at least 89% of the phenotypic variance of g remained unaccounted for. We conclude that the association between LGI and g is too weak to have profound implications for our understanding of the neurobiology of intelligence. This study highlights potential issues when focusing heavily on statistical significance rather than effect sizes in large-scale observational neuroimaging studies.
A novel finding of the present study was that LGI was heritable across the cortex, extending a previous study that established the heritability of whole-brain GI (Docherty et al. 2015). This finding was not particularly surprising because many features of brain morphology are heritable. Nevertheless, it was necessary to establish the heritability of LGI before calculating genetic LGI–g correlations, which are only meaningful if both LGI and g are heritable traits. The previous study estimated the heritability of GI to be 0.71, which is much greater than most of the heritability estimates for LGI observed in GOBS or HCP. This result is also not surprising, because GI is likely subject to less measurement error than LGI. Heritabilities of all other traits were consistent with those published in previous studies.
The present study represents a replication of previous work and provides several important extensions to our understanding of the relationship between gyrification and cognition. First, we replicated previous work by finding positive and significant phenotypic LGI–g correlations (e.g., Gregory et al. 2016). Furthermore, we found that genetic LGI–g correlations were positive and significant (but only in HCP), suggesting that the relationship between gyrification and intelligence may be driven by pleiotropy. Since environmental LGI–g correlations were not significant, their net sign differed across GOBS and HCP, and their spatial patterns showed no consistency across samples, it is reasonable to conclude that they mostly reflected measurement error rather than meaningful shared environmental contributions to LGI and g.
In our view, the most important finding from the present study is that all LGI–g correlations, even the significant ones, were weak. Phenotypically, LGI at a typical vertex poorly predicted g. Even when the predictive ability of all LGIs was considered together via ridge regression, at least 89% of the variance of g remained unaccounted for. Phenotypic and genetic LGI–g correlations were weaker than ICV–g correlations in the same participants, and about the same as area–g correlations. Partialing out ICV or area further reduced LGI–g correlations.
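As a rough illustration of what "at least 89% of the variance of g remained unaccounted for" means, the sketch below simulates many weak LGI-like predictors and scores a ridge regression on held-out data. Everything here (sample size, number of vertices, effect scale, the penalty alpha) is our own assumption for illustration, not the paper's data or code.

```python
# Illustrative sketch: variance of g left unexplained by a ridge fit
# on many weak, simulated LGI-like predictors.
import numpy as np

rng = np.random.default_rng(0)
n, p = 1200, 300                          # hypothetical subjects and vertices
lgi = rng.normal(size=(n, p))             # simulated local gyrification indices
w = rng.normal(size=p) * 0.02             # weak true effects (per-vertex r ~ 0.05)
g = lgi @ w + rng.normal(size=n)          # simulated general cognitive ability

# Closed-form ridge fit on a training half, R^2 on the held-out half.
tr, te = slice(0, n // 2), slice(n // 2, n)
alpha = 10.0
beta = np.linalg.solve(lgi[tr].T @ lgi[tr] + alpha * np.eye(p),
                       lgi[tr].T @ g[tr])
pred = lgi[te] @ beta
r2 = 1.0 - np.sum((g[te] - pred) ** 2) / np.sum((g[te] - g[te].mean()) ** 2)
unexplained = 1.0 - r2
print(f"held-out R^2 = {r2:.2f}; variance unaccounted for = {unexplained:.2f}")
```

With per-vertex effects this weak, the held-out R^2 stays near zero, so nearly all of the variance of g goes unexplained — the qualitative pattern the authors report.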
The volume of the cortical mantle is often computed as the product of its area and thickness. At the resolution of the meshes typically used to represent the cortex, however, the variability of area is higher than the variability of thickness, such that surface area is the primary contributor to the variability of cortical volume (Winkler et al. 2010), and therefore to its relationship with other measurements; the same holds, even more strongly, for parcellations of the cortex into large anatomical or functional regions. This means that the association between overall brain volume and cognitive abilities reported by previous studies (e.g., Pietschnig et al. 2015) is probably primarily driven by area–g correlations (Vuoksimaa et al. 2015). LGI is strongly correlated with area (Gautam et al. 2015; Hogstrom et al. 2013), which explains why partialing out either ICV or area reduced phenotypic and genetic LGI–g correlations in the present study. Thus, we conclude, based on our results, that the association between gyrification and cognitive abilities to a large extent reflects the already well-established relationship between surface area and cognitive abilities, and that the particular association between the unique portion of gyrification and cognitive abilities is extremely small.
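The area-versus-thickness point can be checked with a toy simulation (our own, with arbitrary made-up units, not from Winkler et al. 2010): when area varies proportionally much more than thickness, their product — volume — tracks area almost perfectly and thickness only weakly.

```python
# Toy simulation: volume = area * thickness, with area far more variable
# than thickness, so volume correlates almost entirely with area.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
area = rng.normal(1000.0, 150.0, n)       # ~15% relative variability
thickness = rng.normal(2.5, 0.1, n)       # ~4% relative variability
volume = area * thickness

r_area = np.corrcoef(volume, area)[0, 1]
r_thick = np.corrcoef(volume, thickness)[0, 1]
print(f"corr(volume, area) = {r_area:.2f}; corr(volume, thickness) = {r_thick:.2f}")
```

Under these assumed spreads, the volume–area correlation comes out far higher than the volume–thickness correlation, which is why associations of volume with other traits are mostly area associations in disguise.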
The above conclusion is consistent with that of a previous twin study (Docherty et al. 2015), which examined genetic associations between overall cortical surface area, whole-brain GI, and cognitive abilities. The authors concluded that the genetic GI–g correlation could be more or less fully explained by the area–g correlation. It has been argued previously that focusing on whole-brain GI may miss important neuroanatomical specificity; however, our findings suggest that Docherty et al.’s conclusion holds for both local and global gyrification.
Abstract: Previous studies suggest that gyrification is associated with superior cognitive abilities in humans, but the strength of this relationship remains unclear. Here, in two samples of related individuals (total N = 2882), we calculated an index of local gyrification (LGI) at thousands of cortical surface points using structural brain images and an index of general cognitive ability (g) using performance on cognitive tests. Replicating previous studies, we found that phenotypic and genetic LGI–g correlations were positive and statistically significant in many cortical regions. However, all LGI–g correlations in both samples were extremely weak, regardless of whether they were significant or nonsignificant. For example, the median phenotypic LGI–g correlation was 0.05 in one sample and 0.10 in the other. These correlations were even weaker after adjusting for confounding neuroanatomical variables (intracranial volume and local cortical surface area). Furthermore, when all LGIs were considered together, at least 89% of the phenotypic variance of g remained unaccounted for. We conclude that the association between LGI and g is too weak to have profound implications for our understanding of the neurobiology of intelligence. This study highlights potential issues when focusing heavily on statistical significance rather than effect sizes in large-scale observational neuroimaging studies.
Discussion
In the present study, we analyzed data from two samples of related individuals to examine the association between gyrification and general cognitive ability. We used a popular automatic method to calculate LGI across the cortex from MR images (Schaer et al. 2008), and calculated g from performance on batteries of cognitive tests. We estimated the heritability of height, ICV, and g, as well as the heritability of LGI, area, and thickness at all vertices. We estimated phenotypic, genetic, and environmental LGI–g correlations, as well as partial LGI–g correlations with height, ICV, area (at the same vertex), and thickness (at the same vertex) as potential confounding variables. We estimated the amount of phenotypic variance of g explained by all LGIs together via ridge regression, and examined the across-sample consistency of neuroanatomical specificity in the heritability of LGI, area, and thickness, as well as in LGI–g correlations. Finally, we tested whether heritability estimates and LGI–g correlations were stronger in regions implicated by the P-FIT, a model of the neurological basis of human intelligence (Jung and Haier 2007).
A novel finding of the present study was that LGI was heritable across the cortex, extending a previous study that established the heritability of whole-brain GI (Docherty et al. 2015). This finding was not particularly surprising, because many features of brain morphology are heritable. Nevertheless, it was necessary to establish the heritability of LGI before calculating genetic LGI–g correlations, which are meaningful only if both LGI and g are heritable traits. The previous study estimated the heritability of GI to be 0.71, which is much greater than most of the heritability estimates for LGI observed in GOBS or HCP. This result is also not surprising, because GI is likely contaminated by less measurement error than LGI. The heritabilities of all other traits were consistent with those published in previous studies.
The present study represents a replication of previous work and provides several important extensions to our understanding of the relationship between gyrification and cognition. First, we replicated previous work by finding positive and significant phenotypic LGI–g correlations (e.g., Gregory et al. 2016). Furthermore, we found that genetic LGI–g correlations were positive and significant (but only in HCP), suggesting that the relationship between gyrification and intelligence may be driven by pleiotropy. Since environmental LGI–g correlations were not significant, their net sign differed across GOBS and HCP, and their spatial patterns showed no consistency across samples, it is reasonable to conclude that they mostly reflected measurement error rather than meaningful shared environmental contributions to LGI and g.
In our view, the most important finding from the present study is that all LGI–g correlations, even the significant ones, were weak. Phenotypically, LGI at a typical vertex poorly predicted g. Even when the predictive ability of all LGIs was considered together via ridge regression, at least 89% of the variance of g remained unaccounted for. Phenotypic and genetic LGI–g correlations were weaker than ICV–g correlations in the same participants, and about the same as area–g correlations. Partialing out ICV or area further reduced LGI–g correlations.
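As a rough sketch of the ridge-regression step described above, the toy example below (synthetic data, invented sizes; not the authors' pipeline) fits ridge in closed form on many weak predictors and reports out-of-sample R², analogous to asking how much variance in g all LGIs jointly explain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented sizes: 1000 "participants", 500 "vertex LGIs"; the outcome (a
# stand-in for g) is mostly noise, with only weak true effects, mimicking
# the situation in the study.
n, p = 1000, 500
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) * 0.01            # weak true effects
y = X @ beta + rng.standard_normal(n)           # stand-in for g

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: (X'X + lam*I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Train on one half of the sample, evaluate on the held-out half.
half = n // 2
w = ridge_fit(X[:half], y[:half], lam=100.0)
resid = y[half:] - X[half:] @ w
r2 = 1.0 - resid.var() / y[half:].var()
print(f"out-of-sample R^2 = {r2:.3f}; unexplained share = {1.0 - r2:.3f}")
```

With predictors this weak, the unexplained share stays large, which is the qualitative pattern the study reports (at least 89% of the variance of g unaccounted for).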
The volume of the cortical mantle is often computed as the product of its area and thickness. At the resolution of the meshes typically used to represent the cortex, however, area varies more than thickness, so surface area is the primary contributor to the variability of cortical volume (Winkler et al. 2010), and therefore to its relationship with other measurements; the same holds, even more strongly, for parcellations of the cortex into large anatomical or functional regions. This means that the association between overall brain volume and cognitive abilities reported by previous studies (e.g., Pietschnig et al. 2015) is probably driven primarily by area–g correlations (Vuoksimaa et al. 2015). LGI is strongly correlated with area (Gautam et al. 2015; Hogstrom et al. 2013), which explains why partialing out either ICV or area reduced phenotypic and genetic LGI–g correlations in the present study. We therefore conclude, based on our results, that the association between gyrification and cognitive abilities largely reflects the already well-established relationship between surface area and cognitive abilities, and that the association between the unique portion of gyrification and cognitive abilities is extremely small.
The above conclusion is consistent with that of a previous twin study (Docherty et al. 2015), which examined genetic associations between overall cortical surface area, whole-brain GI, and cognitive abilities. The authors concluded that the genetic GI–g correlation could be more or less fully explained by the area–g correlation. It has been argued previously that focusing on whole-brain GI may miss important neuroanatomical specificity; however, our findings suggest that Docherty et al.’s conclusion holds for both local and global gyrification.
The P-FIT is a popular hypothesis concerning which brain regions matter most for human cognition (Jung and Haier 2007). The P-FIT was initially proposed to explain activation patterns observed during functional MRI experiments, but has been extended to aspects of brain structure. Previous studies have suggested that the association between gyrification and cognitive abilities may be stronger in P-FIT regions than the rest of the brain (Green et al. 2018; Gregory et al. 2016). However, when we tested this hypothesis, we actually found evidence to the contrary. Since neuroanatomical patterns of phenotypic and genetic LGI–g correlations were consistent across GOBS and HCP, this unexpected finding was unlikely to have been caused by a lack of specificity, such as if LGI–g correlations were distributed randomly over the cortex. Instead, while LGI–g correlations exhibited a characteristic neuroanatomical pattern, this pattern did not match the P-FIT. A potential limitation of the present study in this regard is that there is no widely accepted method of matching Brodmann areas (used to define P-FIT regions) to surface-based ROIs (used to group vertices). Therefore, one could argue that our selection of P-FIT regions was incorrect. While our selection was based on that of a previous study (Green et al. 2018), we nevertheless reperformed our analysis several times with different selections of P-FIT regions, and the results remained the same. Importantly, although we argue that the P-FIT is not a good model for the association between gyrification—a purely structural aspect of cortical organization—and cognitive abilities, our results should not be used to criticize the P-FIT as a hypothesis of the brain’s functional organization, because function does not necessarily follow structure.
Most of our results were consistent across samples. However, estimates of heritability and genetic correlations were generally weaker in GOBS than HCP. Notably, some genetic LGI–g correlations were strong enough to surpass the FDR-corrected threshold for significance in HCP, but not GOBS. Such differences could be related to study design. One limitation of all family studies is that polygenic effects are susceptible to inflation due to shared environmental factors, which would cause overestimation of both heritability and genetic correlations. It could be argued that extended-pedigree studies, such as GOBS, are less susceptible to this kind of inflation than twin studies, such as HCP, because there are usually fewer shared environmental factors between distantly related individuals than twins (Almasy and Blangero 2010); this reduction in inflation comes at the expense of a reduction in power to detect polygenic effects, which could also explain the lack of significant genetic correlations in GOBS. It is unlikely that differences in results between samples were caused by differences in scanner or scanning protocol (Han et al. 2006). Furthermore, while GOBS and HCP participants completed different cognitive batteries, both were comprehensive in terms of measured cognitive abilities, ensuring that g indexed a similar construct in both samples.
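For readers unfamiliar with how twin designs like HCP's gauge heritability, the classical shortcut is Falconer's formula, h² = 2(r_MZ − r_DZ). The study itself relies on formal variance-component modeling rather than this shortcut; the sketch below, with made-up correlations, only illustrates why extra environmental similarity among close relatives can inflate estimates, the kind of bias discussed above:

```python
# Falconer's classical heritability estimate from twin correlations:
#   h^2 = 2 * (r_MZ - r_DZ)
# It assumes MZ and DZ twins share environments to the same degree; if MZ
# pairs in fact share more environment, the estimate is inflated.
# The correlations below are made up for illustration.

def falconer_h2(r_mz, r_dz):
    """Heritability from within-pair trait correlations of MZ and DZ twins."""
    return 2.0 * (r_mz - r_dz)

print(falconer_h2(0.60, 0.35))   # plausible values for a cortical measure
print(falconer_h2(0.65, 0.35))   # extra MZ similarity from shared environment
                                 # masquerades as higher "heritability"
```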
With the recent emergence of large, open-access data sets and international consortia, neuroimaging and genetics studies have entered a new era characterized by samples comprising many thousands of participants. In such large studies, trivial effects may be labeled as statistically significant. This observation is not new (Berkson 1938), and numerous solutions have been proposed, such as adopting more stringent significance criteria (Benjamin et al. 2018), scaling criteria by sample size (Mudge et al. 2012), testing interval-null rather than point-null hypotheses (Morey and Rouder 2011), and, most radically, abandoning the notion of statistical significance altogether (McShane et al. 2019). One could argue that these solutions suffer from their own drawbacks and are unlikely to be adopted by the scientific mainstream in the near future. Therefore, in the meantime, we believe that it is imperative to judge, at least qualitatively, whether the sizes of statistically significant effects are large enough to justify one’s conclusions, particularly when these conclusions may have broad, overarching implications. This idea is not new either (Kelley and Preacher 2012) but deserves to be restated. Based on the results of the present study, we are inclined to believe that gyrification minimally explains variation in cognitive abilities and therefore has somewhat limited implications for our understanding of the neurobiology of human intelligence.
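The large-sample point can be made concrete at the study's own scale: with N near 2,882, even a correlation of r = 0.05 (r² = 0.0025) clears conventional significance thresholds. A quick check, using the usual t statistic for a Pearson correlation with a normal approximation (our simplification):

```python
import math

def corr_p_two_sided(r, n):
    """Two-sided p for a Pearson r via its t statistic, with a normal
    approximation to the t distribution (adequate for large n)."""
    t = r * math.sqrt((n - 2) / (1.0 - r * r))
    return math.erfc(abs(t) / math.sqrt(2.0))

r, n = 0.05, 2882                 # roughly the study's median r and total N
p = corr_p_two_sided(r, n)
print(f"r = {r}, N = {n}: p = {p:.4f}, r^2 = {r * r:.4f}")
# A "significant" result (p < .01) explaining a quarter of one percent of variance.
```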
Those who consumed 200 mg of caffeine showed significantly enhanced problem-solving abilities; caffeine had no significant effects on creative generation or on working memory
Zabelina, Darya, and Paul Silvia. 2020. “Percolating Ideas: The Effects of Caffeine on Creative Thinking and Problem Solving.” PsyArXiv. February 9. doi:10.31234/osf.io/6g9av
Abstract: Caffeine is the most widely consumed psychotropic drug in the world, with numerous studies documenting the effects of caffeine on people’s alertness, vigilance, mood, concentration, and attentional focus. The effects of caffeine on creative thinking, however, remain unknown. In a randomized placebo-controlled between-subject double-blind design the present study investigated the effect of moderate caffeine consumption on creative problem solving (i.e., convergent thinking) and creative idea generation (i.e., divergent thinking). We found that participants who consumed 200 mg of caffeine (approximately one 12 oz cup of coffee, n = 44), compared to those in the placebo condition (n = 44), showed significantly enhanced problem-solving abilities. Caffeine had no significant effects on creative generation or on working memory. The effects remained after controlling for participants’ caffeine expectancies, whether they believed they consumed caffeine or a placebo, or for changes in mood. Possible mechanisms and future directions are discussed.
Sunday, February 9, 2020
Neoliberal pressures in the form of rising achievement benchmarks contribute to a reversal of the gender gap in advanced physics & computer science courses: in an Arabic-speaking school, girls are the majority in those classes
Explaining a reverse gender gap in advanced physics and computer science course‐taking: An exploratory case study comparing Hebrew‐speaking and Arabic‐speaking high schools in Israel. Halleli Pinson Yariv Feniger Yael Barak. Journal of Research in Science Teaching, January 30 2020. https://doi.org/10.1002/tea.21622
Abstract: In the past three decades in high‐income countries, female students have outperformed male students in most indicators of educational attainment. However, the underrepresentation of girls and women in science courses and careers, especially in physics, computer sciences, and engineering, remains persistent. What is often neglected by the vast existing literature is the role that schools, as social institutions, play in maintaining or eliminating such gender gaps. This explorative case study research compares two high schools in Israel: one Hebrew‐speaking state school that serves mostly middleclass students and exhibits a typical gender gap in physics and computer science; the other, an Arabic‐speaking state school located in a Bedouin town that serves mostly students from a lower socioeconomic background. In the Arabic‐speaking school over 50% of the students in the advanced physics and computer science classes are females. The study aims to explain this seemingly counterintuitive gender pattern with respect to participation in physics and computer science. A comparison of school policies regarding sorting and choice reveals that the two schools employ very different policies that might explain the different patterns of participation. The Hebrew‐speaking school prioritizes self‐fulfillment and “free‐choice,” while in the Arabic‐speaking school, staff are much more active in sorting and assigning students to different curricular programs. The qualitative analysis suggests that in the case of the Arabic‐speaking school the intersection between traditional and collectivist society and neoliberal pressures in the form of raising achievement benchmarks contributes to the reversal of the gender gap in physics and computer science courses.
Check also Disentangling physics from the norms of patriarchal white supremacy must begin with an honest accounting of the roots of the Western scientific project in the project of slavery:
Making Black Women Scientists under White Empiricism: The Racialization of Epistemology in Physics. Chanda Prescod-Weinstein. Signs, 2020, vol. 45, no. 2. https://www.bipartisanalliance.com/2019/12/disentangling-physics-from-norms-of.html
And The harsher grading policies in STEM courses disproportionately affect women; restrictions on grading policies that equalize average grades across classes helps to close the STEM gender gap as well as increasing overall enrollment:
Equilibrium Grade Inflation with Implications for Female Interest in STEM Majors. Thomas Ahn, Peter Arcidiacono, Amy Hopson, James R. Thomas. NBER Working Paper No. 26556. December 2019. https://www.bipartisanalliance.com/2019/12/the-harsher-grading-policies-in-stem.html
Pictorial Cigarette Pack Warnings Increase Some Risk Appraisals But Not Risk Beliefs
Pictorial Cigarette Pack Warnings Increase Some Risk Appraisals But Not Risk Beliefs: A Meta-Analysis. Seth M Noar, Jacob A Rohde, Joshua O Barker, Marissa G Hall, Noel T Brewer. Human Communication Research, hqz016, February 3 2020. https://doi.org/10.1093/hcr/hqz016
Abstract: Pictorial warnings on cigarette packs motivate smokers to quit, and yet the warnings’ theoretical mechanisms are not clearly understood. To clarify the role that risk appraisals play in pictorial warnings’ impacts, we conducted a meta-analysis of the experimental literature. We meta-analyzed 57 studies, conducted in 13 countries, with a cumulative N of 42,854. Pictorial warnings elicited greater cognitive elaboration (e.g., thinking about the risks of smoking; d = 1.27; p < .001) than text-only warnings. Pictorial warnings also elicited more fear and other negative affect (d = .60; p < .001). In contrast, pictorial warnings had no impact on perceived likelihood of harm (d = .03; p = .064), perceived severity (d = .16; p = .244), or experiential risk (d = .06; p = .449). Thus, while pictorial warnings increase affective and some cognitive risk appraisals, they do not increase beliefs about disease risk. We discuss the role of negative affect in warning effectiveness and the implications for image selection and warning implementation.
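For readers unfamiliar with how such pooled d values arise, here is a minimal fixed-effect inverse-variance sketch with made-up study data; the actual meta-analysis presumably used more refined (e.g., random-effects) models:

```python
import math

# Made-up per-study inputs (Cohen's d, group sizes); not the paper's data.
studies = [(0.9, 200, 200), (1.4, 150, 160), (1.1, 400, 390)]

def d_variance(d, n1, n2):
    """Large-sample sampling variance of Cohen's d for two independent groups."""
    return (n1 + n2) / (n1 * n2) + d * d / (2.0 * (n1 + n2))

# Inverse-variance weighting: precise studies count more toward the pooled d.
weights = [1.0 / d_variance(d, n1, n2) for d, n1, n2 in studies]
pooled = sum(w * d for w, (d, _, _) in zip(weights, studies)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled d = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The pooled estimate necessarily falls between the smallest and largest study-level d, weighted toward the most precise studies.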
Saturday, February 8, 2020
A more egalitarian society (more equal distribution of unpaid care & domestic work) correlates with women being increasingly supportive of a large and encompassing welfare state, much more than men
The gender gap in welfare state attitudes in Europe: The role of unpaid labour and family policy. Mikael Goossen. Journal of European Social Policy, February 5, 2020. https://doi.org/10.1177/0958928719899337
Abstract: Previous research has shown a prevailing ‘modern gender gap’ in socio-political attitudes in advanced capitalist economies. While numerous studies have confirmed gender differences in attitudes towards the welfare state in Europe, few have addressed the reason for this rift in men’s and women’s views about the role of government in ensuring the general welfare of citizens. In this article, I examine the relationship between gender equality in unpaid labour, family policy and the gender gap in welfare state attitudes. Based on data from 21 countries participating in the European Social Survey (ESS) Round 4, and using a mix of country- and individual-level regression models and multilevel models, I find that there is a clear relationship between country-level gender equality in unpaid labour and gender differences in support of an encompassing welfare state. A more equal distribution of unpaid care and domestic work correlates with women being increasingly supportive of a large and encompassing welfare state, in comparison with men. This pattern holds when controlling for individual-level economic risk and resources, cultural factors such as trust and social values traditionally related to the support of an encompassing welfare state, and beliefs about welfare state efficiency and consequences for society in general. This pattern is evident for countries with a low level of familistic policies, while no distinguishable pattern is discernible for highly familistic countries. These findings have implications for the perception of gender as an emergent social cleavage with respect to welfare state attitudes. The results are discussed in the light of institutional theories on policy feedback, familism, social role theory and previous findings relating to modernization theory and ‘gender realignment’.
Keywords: Attitudes, comparative research, division of labour, family policy, gender gap, gender roles, welfare state
Power play in BDSM: Submissives showed increases in cortisol & endocannabinoid levels; dominants showed increased endocannabinoid levels only when the interaction involved power play
Wuyts E, De Neef N, Coppens V, et al. Between Pleasure and Pain: A Pilot Study on the Biological Mechanisms Associated With BDSM Interactions in Dominants and Submissives. J Sex Med 2020;XX:XXX–XXX. https://doi.org/10.1016/j.jsxm.2020.01.001
Abstract
Background BDSM is an abbreviation used to reference the concepts of bondage and discipline, dominance and submission, sadism, and masochism, enacted by power exchanges between consensual partners.
Aim To shed light upon the rewarding biological mechanisms associated with BDSM interactions.
Methods A group of 35 BDSM couples (dominant and submissive counterparts) were recruited and tested during a BDSM interaction, with an additional control group of 27 non-BDSM interested people tested in a normal social interaction.
Outcomes We compared the evolution of the stress and reward hormone levels of cortisol, beta-endorphins, and endocannabinoids (2AG and anandamide) in a group of BDSM practitioners before and after an active BDSM interaction with the levels in control individuals.
Results We showed that submissives showed increases in cortisol and endocannabinoid levels due to the BDSM interaction, with dominants only showing increased endocannabinoid levels when the BDSM interaction was associated with power play.
Clinical Implications This study effectively provides a link between behavior that many think of as aberrant on one hand, and biological pleasure experience on the other, in the hope that it may relieve some of the stigma these practitioners still endure.
Strengths & Limitations It is one of the first and largest studies of its kind, but is still limited in sample size and only represents a specific population of Flemish BDSM practitioners.
Conclusion Even though this is one of the first studies of its kind, we can conclude that there is a clear indication for increased pleasure in submissives when looking at biological effects of a BDSM interaction, which was related to the increases in experienced stress.
Key Words: BDSM; Sadism; Masochism; Cortisol; Endocannabinoids; Beta-endorphin
Time to Orgasm in Women in a Monogamous Stable Heterosexual Relationship: Mean reported time was 13.41 ± 7.67 min; 17% of the participants had never experienced an orgasm
Bhat GS, Shastry A. Time to Orgasm in Women in a Monogamous Stable Heterosexual Relationship. J Sex Med 2020;XX:XXX–XXX. https://www.sciencedirect.com/science/article/pii/S1743609520300308
Abstract
Background Orgasm in women is a complex phenomenon, and the sparse data about time to orgasm (TitOr) in women are an impediment to the research on this complex phenomenon.
Aim To evaluate the stopwatch measured TitOr in women in a monogamous stable heterosexual relationship.
Methods
The study was conducted through web-based and personal interview using a questionnaire, which addressed the issues related to TitOr. Sexually active women older than 18 years and women in a monogamous stable heterosexual relationship were included in the study. Those with comorbidities such as diabetes, hypertension, asthma, psychiatric illness, sexual dysfunction and those with partners with sexual dysfunction were excluded. The participants reported stopwatch measured TitOr after adequate sexual arousal over an 8-week period. The data analysis was performed using GraphPad software (©2018 GraphPad Software, Inc, USA).
Outcomes The outcomes included stopwatch measured average TitOr in women.
Results The study period was from October 2017 to September 2018, with a sample size of 645. The mean age of the participants was 30.56 ± 9.36 years. The sample was drawn from 20 countries, with most participants from India, the United Kingdom, the Netherlands, and the United States of America. The mean reported TitOr was 13.41 ± 7.67 minutes (95% confidence interval: 12.76–14.06 minutes). 17% of the participants had never experienced orgasm. Penovaginal intercourse was insufficient to reach orgasm in the majority, in whom it was facilitated by certain positions and maneuvers.
Clinical Implications The knowledge of stopwatch measured TitOr in women in real-life setting helps to define, treat, and understand female sexual function/dysfunction better and it also helps to plan treatment of male ejaculatory dysfunction, as reported ejaculatory latency in healthy men is much less than the reported TitOr here.
Strengths & limitations Use of stopwatch to measure TitOr and a large multinational sample are the strength of the study. The absence of a crosscheck mechanism to check the accuracy of the stopwatch measurement is the limitation of the study.
Conclusion Stopwatch-measured average TitOr in our sample of women in a monogamous stable heterosexual relationship was 13.41 minutes (95% confidence interval: 12.76–14.06 minutes), and certain maneuvers and positions during penovaginal intercourse helped achieve orgasm more often than not.
Key Words: Orgasm, Time to Orgasm, Orgasmic Latency, Female Sexual Dysfunction
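The reported confidence interval can be sanity-checked. A normal-approximation 95% CI over all 645 respondents comes out narrower than the published 12.76–14.06 minutes, but the published interval is reproduced almost exactly if the CI is computed over only the ~83% of women who reported a time (the 17% who never experienced orgasm contribute no latency). A minimal sketch, assuming that subset size:

```python
import math

mean, sd, n_total = 13.41, 7.67, 645   # reported mean, SD, and sample size
n = round(n_total * (1 - 0.17))        # assumption: CI based on the ~535 women who reported a time
se = sd / math.sqrt(n)                 # standard error of the mean
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"95% CI: {lo:.2f}-{hi:.2f} min")  # 95% CI: 12.76-14.06 min
```

The match supports reading the CI as describing women who did reach orgasm, though the abstract does not state the exact computation.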
"It's not about the money. It's about sending a message!": Unpacking the Components of Revenge
"It's not about the money. It's about sending a message!": Unpacking the Components of Revenge. Andras Molnar, Shereen J Chaudhry, George Loewenstein. SSRN, January 24 2020. http://ssrn.com/abstract=3524910
Abstract: We examine whether belief-based preferences--caring about what transgressors believe--play a crucial role in punishment decisions: Do punishers want to make sure that transgressors understand why they are being punished, and is this desire to affect beliefs often prioritized over distributive and retributive preferences? We test whether punishers derive utility from three distinct sources: material outcomes (their own and the transgressor's payoff), affective states (the transgressor's suffering), and cognitive states (the transgressor's beliefs about the cause of that suffering). In a novel, preregistered experiment (N = 1,959) we demonstrate that consideration for transgressors' beliefs affects punishment decisions on its own, regardless of the considerations for material outcomes (distributional preferences) and affective states (retributive preferences). By contrast, we find very little evidence for pure retributive preferences (i.e., to merely inflict suffering on transgressors). We also show that people who would otherwise enact harsh punishments are willing to punish less severely if, by doing so, they can tell the transgressor why they are punishing them. Finally, we demonstrate that the preference for affecting transgressors' beliefs cannot be explained by deterrence motives (i.e., to make transgressors behave better in the future).
Keywords: Punishment, Belief-based utility
JEL Classification: D03, C70
Behavioural Changes in Mice after Getting Accustomed to the Mirror
Behavioural Changes in Mice after Getting Accustomed to the Mirror. Hiroshi Ueno et al. Behavioural Neurology, Volume 2020, Article ID 4071315, 12 pages, Feb 3 2020. https://doi.org/10.1155/2020/4071315
Abstract: Patients with brain function disorders due to stroke or dementia may show inability to recognize themselves in the mirror. Although the cognitive ability to recognize mirror images has been investigated in many animal species, the animal species that can be used for experimentation and the mechanisms involved in recognition remain unclear. We investigated whether mice have the ability to recognize their mirror images. Demonstrating evidence of this in mice would be useful for researching the psychological and biological mechanisms underlying this ability. We examined whether mice preferred mirrors, whether plastic tapes on their heads increased their interest, and whether mice accustomed to mirrors learnt its physical phenomenon. Mice were significantly more interested in live stranger mice than mirrors. Mice with tape on their heads spent more time before mirrors. Becoming accustomed to mirrors did not change their behaviour. Mice accustomed to mirrors had significantly increased interest in photos of themselves over those of strangers and cage-mates. These results indicated that mice visually recognized plastic tape adherent to reflected individuals. Mice accustomed to mirrors were able to discriminate between their images, cage-mates, and stranger mice. However, it is still unknown whether mice recognize that the reflected images are of themselves.
4. Discussion
In this study, we applied a plastic tape to the heads of mice and investigated their change in interest towards the mirror. The interest in the mirror when the plastic tape was applied to the heads of mice significantly increased from before they were accustomed to the mirror to after they were accustomed to the mirror. Furthermore, we found that mice frequently contacted the mirror, suggesting that they could distinguish the image on the mirror from the faces of the cage-mate and stranger mice.
Animals that are thought to be able to perceive their reflections in the mirror as their own figures, in many cases, follow four steps when faced with a mirror: (1) make social reactions, (2) explore the physical sense (such as checking the back of the mirror), (3) perform repetitive actions to test the mirror, and (4) understand that the image reflected is their own [9]. In the tests used in this study, mice did not show social reactions or exploratory behaviours of reacting positively to mirrors as did chimpanzees, dogs, and fish in previous studies. The interest of the mice to the opaque board was comparable to that to the mirror. Previous reports show that the mirror slightly disgusted the mice, and that unlike with other animal species, mirrors are not environmentally enriching material for mice [22, 23]. For this reason, chambers composed of mirrors are used to test the effects of anxiolytic drugs in mice [24, 25]. The difference in our results may be due to the differences in the familiarity of the mice to the mirror, the reflective state of the plastic breeding home cages, single breeding versus mass breeding, and the illuminance of the experimental environment. Specular reflection provides only visual information, whereas live animals provide multiple sensory information. Therefore, live animals have richer stimuli than mirrors. Since mice are animals that prioritize olfaction rather than vision and hearing, it is considered reasonable that their interest in the mirror without smell quickly diminishes.
In this study, all the mice showed a stronger interest in the live stranger mouse than in the mirror. Previous studies have reported that mice are more interested in mirrors than stranger mice [26]. The difference in these results may have been because of accustoming the mice to the mirror, which may have affected the result. However, it is reasonable that the mice would show interest in live stranger mice that provide multiple sensory information rather than in specular reflections that provide only visual information. Moreover, even if mice do not understand the reflection in the mirror as the reflected image of itself, it is a natural reaction to ignore the mirror image as a harmless stimulus for themselves rather than to recognize it as a homogeneous individual to react with socially [27]. In fact, the mental process and cognitive ability of the mouse in response to the mirror are unknown, and further research is needed for elucidation. This study shows that mice are not interested in mirrors.
We applied small plastic tapes to the heads of mice for the tape on the head test. Mice with the tapes on their heads showed an increased interest in the mirror. Even after becoming accustomed to the mirror, the interest of the taped mouse to the mirrors remained high. However, the taped mice did not show behaviour suggestive of trying to eliminate the tape. During the mark test, the mark may not be perceived as abnormal by the animals, and they may not feel the impulse to touch it. Pigs have been reported to be able to recognize their movements in the mirror in a very short time, and to be self-conscious [28]. However, it is thought that since pigs are accustomed to the state of mud being attached to their bodies, even if they are marked during experiments, they do not mind this. This does not mean that they are not self-perceiving. Even if they can perceive themselves, if the motivation to remove dirt from their faces is small, they would not show action suggestive of trying to touch the mark. It may be possible that mice, like pigs, do not feel the necessity to remove foreign objects attached to their bodies. The mark test is a compound task in which the ability of the subject to use tools, recognize itself, and detect visual information is questioned. The mark test has been used as a method for confirming the presence or absence of self-awareness, but opinion is divided on its validity [16]. The mark test represents one aspect of self-recognition that has, in recent years, been considered to be different from the overall self-recognition that human beings experience. Moreover, it may not be so meaningful to target an animal that mainly uses a sense other than vision during the mirror test [29]. The results of the present study did not necessarily indicate the existence of self-recognition capability in mice. However, it showed that mice visually perceived unusual states through the mirror. 
Further research is needed to clarify factors that increase the interest of mice towards mirrors.
Other than the head, the throat is another location that can be used to attach the tape. A similar study analysed the behaviour of a magpie with a tape attached to its throat [10]. However, the throat is a motile part of the animal body, and a tape attached there would provide a tactile stimulus. The skin on the head, in contrast, is immobile, so a comparable tactile stimulus is implausible.
This study showed that, by spending time with a mirror in the home cage, mice changed their interest in photos of themselves over photos of cage-mates and stranger mice. This mouse behaviour indicated that by learning through the mirror, the mice recognized that the image in the mirror was different from the figure of their cage-mate mice. It should be noted that the results of this study are not evidence that mice recognize the images in the mirror as their own. However, some animals have shown that their self-perception of the mirror image can be enabled through experience [30, 31]. It has been reported that self-perception of the mirror image occurs naturally in rhesus macaques after training for accurate visual-specific receptor association to mirror images [32, 33]. It has also been reported that pigeons pass the mark test after thorough training by voluntary and mirror-based pecking [30, 34]. In recent years, it has been reported that the cleaner wrasse (Labroides dimidiatus) is able to self-recognize by learning [12]. In addition, it is indicated that the age of acquiring mirror image self-recognition in humans is related to the frequency of postnatally experiencing the mirror and to cultural differences [35]. Among infants in Africa, where opportunities to see mirrors are less frequent than in developed countries, it is reported that the age at which mirror images of the self can be clearly perceived is somewhat higher. The results of this study are consistent with previous reports, indicating the possibility that more animals show that there is a sense of “self” than we think. Since mirror image self-recognition increased as the mirror was experienced more frequently, it is considered that changing the cognitive evaluation of the mirror at the stage when the reflecting property of the mirror and the reflecting object are learnt becomes the turning point.
The mental processes of mice and other animal species, such as apes are unknown, and it is difficult to decipher the cognitive abilities of such animals. Our results show that the mouse is an animal that alters recognition to the mirror by learning. Further research is needed to clarify the mirror recognition of the self by mice. Having a mouse as an effective model for behavioural research, such as mirror self-recognition, opens doors to study aspects of this behaviour that would otherwise be impossible to study.
Patients who suffer from failure of brain function due to a stroke or dementia may show symptoms of being able to recognize the images of their family members and others in the mirror, while not being able to recognize their own images. This phenomenon is called mirror self-misidentification [36]. Mirror self-misidentification is also a symptom of dissociative disorder [37]. However, the mechanism by which this occurs is not clear. Furthermore, when the function of the upper part of the medial prefrontal cortex is temporarily stopped by the transcranial magnetic stimulation method, the person manifests symptoms of being unable to recognize themselves when looking at the mirror [38]. These reports suggest that specific neural circuits are involved in the perception of mirror images of oneself in humans. It is therefore useful to develop a method to clarify these neural mechanisms, to treat cranial nerve disease, and to further clarify the evolutionary basis of the cognitive ability of recognizing mirror images of oneself. New knowledge obtained from animal experiments focuses on whether or not mirror self-recognition is possible for specific species. Furthermore, many other questions on the neural infrastructure remain. This study shows the potential of using mice for elucidating neural circuits.
Greek Cultural Context: About half of men preferred several sexual partners, not minding lower mate value; about a third of women preferred several lifetime sexual partners, minding mate value
Desire for Sexual Variety in the Greek Cultural Context. Menelaos Apostolou. The Cyprus Review, Vol. 31 No. 1 (2019), Feb 4, 2020. http://cyprusreview.org/index.php/cr/article/view/628
Abstract: Human beings exhibit considerable variation in their approach towards the number of sexual partners they wish to have. One consistent predictor of this variation has been sex—men desire more sexual partners than women. The current study aims to examine whether this effect is present in the Greek cultural context. In particular, a sample of 1414 Greek and Greek-Cypriot participants were asked about their desired number of sexual partners at different stages in their lives. It was found that men preferred significantly more partners than women. It was further found that men were divided in their preferences, with about half preferring several, and about half preferring a few lifetime sexual partners. On the other hand, about two-thirds of women preferred a few lifetime sexual partners, with about one-third preferring several lifetime partners.
Results provided very limited evidence for the existence of opinion-based homogeneity on YouTube, even when the whole network was divided into sub-networks
Opinion-based Homogeneity on YouTube: Combining Sentiment and Social Network Analysis. Daniel Röchert, German Neubaum, Björn Ross, Florian Brachten, Stefan Stieglitz. Computational Communication Research, February 3, 2020. https://computationalcommunication.org/ccr/article/view/15
Abstract: The growing complexity of political communication online goes along with increasing methodological challenges to process communication data properly in order to investigate public concerns such as the existence of echo chambers. To cover the full range of political diversity in online communication, we argue that it is necessary to focus on specific political issues. This study proposes an innovative combination of computational methods, including natural language processing and social network analysis, that serves as a model for future research on the evolution of opinion climates in online networks. Data were gathered on YouTube, enabling the assessment of users’ expressed opinions on two political issues. Results provided very limited evidence for the existence of opinion-based homogeneity on YouTube. This was true even when the whole network was divided into sub-networks. Findings are discussed in light of current computational communication research and the vigorous debate on echo chambers in online networks.
Keywords: machine-learning, echo chamber, social network analysis, computational science
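Though the abstract does not give the paper's exact metric, opinion-based homogeneity in a network can be operationalized as an edge-homophily index: the observed share of ties connecting same-opinion users minus the share expected under random mixing. A hypothetical sketch (the node opinions, edge list, and measure are illustrative, not the authors' pipeline):

```python
from itertools import combinations
import random

def homophily_index(edges, opinion):
    """Observed share of same-opinion ties minus the share expected if
    ties ignored opinions; values near 0 suggest no echo chamber."""
    observed = sum(opinion[u] == opinion[v] for u, v in edges) / len(edges)
    pairs = list(combinations(opinion, 2))
    expected = sum(opinion[u] == opinion[v] for u, v in pairs) / len(pairs)
    return observed - expected

# Toy network: opinions and ties both assigned at random, so the
# index should sit near 0 (no opinion-based homogeneity).
random.seed(1)
opinion = {i: random.choice(["pro", "con"]) for i in range(200)}
edges = [tuple(random.sample(range(200), 2)) for _ in range(1000)]
print(round(homophily_index(edges, opinion), 3))
```

A strongly positive index on real data would indicate same-opinion clustering; the study's "very limited evidence" corresponds to values near zero even within sub-networks.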
Cities have a negative impact on navigation ability: evidence from 38 countries
Cities have a negative impact on navigation ability: evidence from 38 countries. Antoine Coutrot, Ed Manley, Demet Yesiltepe, Ruth C Dalton, Jan M Wiener, Christoph Holscher, Michael Hornberger, Hugo J Spiers. bioRxiv, Feb 5 2020. https://doi.org/10.1101/2020.01.23.917211
Abstract: Cultural and geographical properties of the environment have been shown to deeply influence cognition and mental health. However, how the environment experienced during early life impacts later cognitive abilities remains poorly understood. Here, we used a cognitive task embedded in a video game to measure non-verbal spatial navigation ability in 442,195 people from 38 countries across the world. We found that on average, people who reported having grown up in cities have worse navigation skills than those who grew up outside cities, even when controlling for age, gender, and level of education. The negative impact of cities was stronger in countries with low average Street Network Entropy, i.e. whose cities have a griddy layout. The effect was smaller in countries with more complex, organic cities. This demonstrates the impact of the environment on human cognition on a global scale, and highlights the importance of urban design for human cognition and brain function.
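Street Network Entropy follows the idea of orientation entropy: the Shannon entropy of the distribution of street-segment bearings. A grid concentrates bearings in a few compass directions (low entropy); an organic layout spreads them out (high entropy). A minimal sketch with synthetic bearings (the 36-sector binning is a common but illustrative choice, not necessarily the paper's):

```python
import math
import random

def street_network_entropy(bearings, bins=36):
    """Shannon entropy (nats) of street-segment bearings in degrees,
    binned into equal sectors; lower values indicate griddier layouts."""
    counts = [0] * bins
    for b in bearings:
        counts[int(b % 360 * bins / 360)] += 1
    total = sum(counts)
    return -sum(c / total * math.log(c / total) for c in counts if c)

random.seed(0)
grid = [random.choice([0, 90, 180, 270]) for _ in range(1000)]  # griddy city
organic = [random.uniform(0, 360) for _ in range(1000)]         # organic city
print(street_network_entropy(grid) < street_network_entropy(organic))  # True
```

The grid's entropy is bounded by ln(4) ≈ 1.39 (four directions), while a uniform spread approaches ln(36) ≈ 3.58, which is the sense in which low-entropy countries have "griddy" cities.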
Friday, February 7, 2020
Physiology predicting ideology: The relationships have not fully replicated in more recent, well-powered studies
Physiology predicts ideology. Or does it? The current state of political psychophysiology research. Kevin B Smith, Clarisse Warren. Current Opinion in Behavioral Sciences, Volume 34, August 2020, Pages 88-93. https://doi.org/10.1016/j.cobeha.2020.01.001
Political scientists are increasingly adopting psychophysiological research modalities to investigate the biomarkers of political attitudes and behavior. A good deal of this research focuses on the sympathetic branch of the autonomic nervous system. This makes a good deal of sense as psychophysiologists have long associated the sympathetic nervous system (SNS) with the sorts of implicit emotional-cognitive processing theorized to underpin a range of political attitudes. This review assesses the literature examining the relationship between political attitudes and individual-level variation in SNS activation, especially in response to disgust/threat stimuli where non-physiological research provides the basis for a strong a priori hypothesis for the existence of such a relationship. The empirical record for this relationship proves to be mixed, with a number of studies supporting the base theoretical expectations, but failed replications and questions about what is actually being measured also raising questions about the generalizability of the findings.
Introduction
A considerable research literature suggests that political attitudes and behaviors are genetically, which is to say biologically, influenced. Well-powered analyses using a variety of methodological approaches—including twin studies, adoption studies, and genome-wide association studies—converge on the inference that a non-trivial amount of variation in political beliefs and behaviors systematically maps onto genetic variation [1, 2, 3, 4]. While the evidence of biological influences on political phenotypes is persuasive, the specific downstream mechanisms explaining this link have only recently begun to be systematically investigated. Ideology is a complex social phenotype. Its heritable components are almost certainly polygenic in nature, and the biological mechanisms that presumably mediate between genes and political beliefs almost certainly interact with environmental influences in ways that are far from fully understood. How does biology actually influence political traits? Currently, the only honest answer to this question is that we are not completely sure.
While no comprehensive, universally accepted answer exists to this question, a rough theoretical model emerged over the past decade or so to guide investigations of the link between biology and ideology. Succinctly, this model assumes genetic variation leads to individual-level differences in the physiological systems that not only play a key role in extracting and processing information from the external environment, but also in generating automatic emotional and behavioral responses to a given environmental situation or stimuli. The essential idea is that genes build biological information processing systems, there is individual-level variation in those systems due to both genetic and environmental influences, these individual-level variations lead to differences in implicit emotional and cognitive responses to environmental stimuli, and those differences reflect physiologically instantiated predispositions that at least partially drive political preferences [5, 6, 7].
An obvious general hypothesis generated by this framework is that individual-level differences in physiological responses to particular stimuli should predict political beliefs and behavior. An initial tranche of mostly small-N studies focusing on the sympathetic branch of the autonomic nervous system (ANS) reported evidence of exactly such links, though the reported physiology/ideology relationships centered more on certain attitudes associated with social conservatism than on ideology more broadly conceived. Those relationships, however, have not fully replicated in more recent, well-powered studies. This raises questions not just about the specific relationships tested, but about the broader theoretical framework generating the hypotheses.
We review this research and, especially in light of recent replication failures, examine implications for future research. We suggest one way forward is to narrow both the theoretical and empirical approach, focusing on physiological responses to more narrowly targeted stimuli and how those responses do or do not predict more specific political attitudes.
The ANS as a Basis to Investigate Links Between Physiology and Ideology
The primary purpose of the ANS is to maintain homeostasis between external and internal environments through the regulation and coordination of bodily functions like digestion, respiration, and cardiac activity. These regulatory functions are largely automatic and implicit and occur outside of conscious awareness. The ANS has two primary branches, the sympathetic nervous system (SNS; colloquially known as the ‘fight or flight’ system) and the parasympathetic nervous system (PNS; the ‘rest and digest’ system).
Most research targeting the ANS to investigate the physiological underpinnings of ideology has focused on the SNS. This is in no small part because of the availability of a well-validated measure of SNS arousal, electrodermal activity (EDA), that can be obtained reliably and relatively cheaply using little more than a bioamplifier and sensors capable of capturing skin conductance.
The SNS represents a good target for testing links between physiology and ideology because psychophysiologists have long associated SNS activity with various aspects of automatic emotional-cognitive processing [8]. Some of those processes are, prima facie, good candidates to map onto political attitudes and behaviors. For example, recent literature reviews suggest the attitudinal and behavioral differences between liberals and conservatives are systematic, are anchored in traits such as negativity bias and in-group versus out-group bias, and almost certainly have a neurobiological basis [9]. That neurobiological basis plausibly encompasses the ANS, which is already associated with some of these traits. It is well established, for example, that negative stimuli evoke a greater SNS response than non-negative stimuli, including stimuli such as negative news stories that are relevant to politics [10, 11, 12, 13]. Variation in skin conductance thus seems to reliably capture (among other things) individual-level variation in negativity bias, and individual-level variation in negativity bias is widely hypothesized to co-vary systematically with political beliefs. This is all consistent with the hypothesis that SNS response to particular types of negatively valenced stimuli will predict political beliefs.
Exactly such arguments have already been made in relation to specific categories of aversive stimuli, especially threat and disgust. Numerous studies using non-physiological (self-report) measures have repeatedly found that disgust and threat sensitivity correlate with political attitudes [14, 15, 16, 17, 18, 19], and a meta-analytic review encompassing 134 samples from 16 countries concludes there is consistent evidence that both subjective perceptions and objective experiences of fearful or threatening stimuli correlate with conservatism [20]. Theoretically, this relationship is assumed to be anchored in evolved implicit processes [21]. In short, there is consistent and persuasive evidence that variation in threat and disgust responses as captured by self-report batteries is predictive of political attitudes. As there is little doubt such stimuli also evoke SNS arousal [17, 22], a clear physiology-based hypothesis follows: individual-level variation in EDA response to such stimuli should predict political beliefs. There now exists a fairly extensive literature testing these sorts of hypotheses, which, in effect, combine what is known about how responses to non-political stimuli map onto political beliefs with how the SNS is known to respond to similar sorts of stimuli.