Monday, September 23, 2019

Persistence of pain in humans and other mammals

Persistence of pain in humans and other mammals. Amanda C. de C. Williams. Philosophical Transactions of the Royal Society B, Volume 374, Issue 1785, September 23, 2019. https://doi.org/10.1098/rstb.2019.0276

Abstract: Evolutionary models of chronic pain are relatively undeveloped, but mainly concern dysregulation of an efficient acute defence, or false alarm. Here, a third possibility, mismatch with the modern environment, is examined. In ancestral human and free-living animal environments, survival needs urge a return to activity during recovery, despite pain, but modern environments allow humans and domesticated animals prolonged inactivity after injury. This review uses the research literature to compare humans and other mammals, who share pain neurophysiology, on risk factors for pain persistence, behaviours associated with pain, and responses of conspecifics to behaviours. The mammal populations studied are mainly laboratory rodents in pain research, and farm and companion animals in veterinary research, with observations of captive and free-living primates. Beyond farm animals and rodent models, there is virtually no evidence of chronic pain in other mammals. Since evidence is sparse, it is hard to conclude that it does not occur, but its apparent absence is compatible with the mismatch hypothesis.

Humans’ left cheek portrait bias extends to chimpanzees: Depictions of chimps on Instagram

Humans’ left cheek portrait bias extends to chimpanzees: Depictions of chimps on Instagram. Annukka K. Lindell. Laterality: Asymmetries of Body, Brain and Cognition, Sep 22 2019. https://doi.org/10.1080/1357650X.2019.1669631

ABSTRACT: When posing for portraits, humans favour the left cheek. This preference is argued to stem from the left cheek’s greater expressivity: as the left hemiface is predominantly controlled by the emotion-dominant right hemisphere, it expresses emotion more intensely than the right hemiface. Whether this left cheek bias extends to our closest primate relatives, chimpanzees, has yet to be determined. Given that humans and chimpanzees share the same oro-facial musculature and contralateral cortical innervation of the face, it appears probable that humans would also choose to depict chimps showing the more emotional left cheek. This paper thus examined portrait biases in images of chimpanzees. Two thousand photographs were sourced from Instagram’s “Most Recent” feed using the hashtag #chimpanzee, and coded for pose orientation (left, right) and portrait type (head and torso, full body). As anticipated, there were significantly more left cheek (57.2%) than right cheek images (42.8%), with the bias observed across both head and torso and full body portraits. Thus humans choose to depict chimpanzees just as we depict ourselves: offering the left cheek. As such, these data confirm that the left cheek bias is observed in both human and non-human primates, consistent with an emotion-based account of the orientation preference.

KEYWORDS: Left, right, emotion, photo, primate
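
A quick back-of-the-envelope check of the headline result (a sketch, not the authors' analysis; the paper reports its own significance test): with 2,000 coded photographs, a 57.2% left-cheek share lies far outside what a chance 50/50 split would produce. In Python:

    # Two-sided binomial test of the left-cheek share against chance (50%).
    # Illustrative only; the count is reconstructed from the reported 57.2%.
    from scipy.stats import binomtest

    n_photos = 2000                     # photographs coded in the study
    n_left = round(0.572 * n_photos)    # 1,144 left-cheek poses
    result = binomtest(n_left, n_photos, p=0.5)
    print(f"left share = {n_left / n_photos:.3f}, p = {result.pvalue:.1e}")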


In face perception, reducing visual input greatly increases perceived attractiveness; left/right half faces look far more attractive than bilaterally symmetric whole faces

Face perception loves a challenge: Less information sparks more attraction. Javid Sadr, Lauren Krowicki. Vision Research, Volume 157, April 2019, Pages 61-83. https://doi.org/10.1016/j.visres.2019.01.009

Highlights
•    In face perception, reducing visual input greatly increases perceived attractiveness.
•    This “partial information effect” occurs with blur, contrast reduction, and occlusion.
•    Left/right half faces look far more attractive than bilaterally symmetric whole faces.
•    There are no male/female differences in this “less is more” enhancement effect.

Abstract: Examining hedonic questions of processing fluency, objective stimulus clarity, and goodness-of-fit in face perception, across three experiments (blur, contrast, occlusion) in which subjects performed the simple, natural task of rank-ordering faces by attractiveness, we find a very consistent and powerful effect of reduced visual input increasing perceived attractiveness. As images of faces are blurred (i.e., as higher spatial frequencies are lost, mimicking at-a-distance, eccentric, or otherwise unaccommodated viewing, tested down to roughly 6 cycles across the face), reduced in contrast (linearly, down to 33% of the original image’s), and even half-occluded, the viewer’s impression of the faces’ attractiveness, relative to non- or less-degraded faces, is greatly enhanced. In this regard, the blur manipulation exhibits a classic exponential profile, while the contrast manipulation follows a simple linear trend. Given the far superior attractiveness of half-occluded faces, which have no symmetry whatsoever, we also see that it may be incorrect to claim that facial symmetry is attractive and perhaps more accurate to say that asymmetry may be unattractive. As tested with a total of 200 novel female faces over three experiments, we find absolutely no male/female differences in this “partial information effect” of enhanced subjective attraction, nor do we find differences across the repetition of the task through to a second block of trials in which the faces are re-encountered and no longer novel. Finally, whereas objective stimulus quality is reduced, we suggest a positive hedonic experience arises as a subjective phenomenological index of enhanced perceptual goodness-of-fit, counter-intuitively facilitated by what may be stimulus-distilling image-level manipulations.
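
The three degradations are simple image-level operations. A minimal sketch of how such stimuli could be generated (an illustration of the manipulations named in the abstract, not the authors' stimulus code; the blur sigma and the 33% contrast floor are illustrative readings of the text):

    # Sketch of the three degradations; parameter values are assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blur(img, sigma):
        """Low-pass filter: larger sigma discards higher spatial frequencies."""
        return gaussian_filter(img, sigma=sigma)

    def reduce_contrast(img, factor):
        """Linearly rescale deviations from the mean; factor=0.33 keeps 33%."""
        return img.mean() + factor * (img - img.mean())

    def occlude_half(img):
        """Replace the right half of the image with mean luminance."""
        out = img.copy()
        out[:, img.shape[1] // 2:] = img.mean()
        return out

    face = np.random.rand(256, 256)           # stand-in grayscale face image
    degraded = [blur(face, sigma=8.0),        # heavy low-pass filtering
                reduce_contrast(face, 0.33),  # down to 33% of original contrast
                occlude_half(face)]           # left/right half faces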

Humans have likely spent the vast majority of our history as a species in relatively egalitarian, small-scale societies; this does not mean humans are by nature egalitarian, but that ecological & demographic conditions suppressed dominance

Making and unmaking egalitarianism in small-scale human societies. Chris von Rueden. Current Opinion in Psychology, Volume 33, June 2020, Pages 167-171. https://doi.org/10.1016/j.copsyc.2019.07.037

Highlights
•    Modal political organization of ancestral human societies is egalitarianism.
•    Role of prestige in human hierarchy is a contributing factor to egalitarianism.
•    Historical shifts to greater inequality include coercive and non-coercive forces.

Abstract: Humans have likely spent the vast majority of our history as a species in relatively egalitarian, small-scale societies. This does not mean humans are by nature egalitarian. Rather, the ecological and demographic conditions common to small-scale societies favored the suppression of steep, dominance-based hierarchy and incentivized relatively shallow, prestige-based hierarchy. Shifts in ecological and demographic conditions, particularly with the spread of agriculture, weakened constraints on coercion.

Check also: Romanticizing the Hunter-Gatherer, Despite the Girl Infanticide, the Homicide Rate, etc.
Romanticizing the Hunter-Gatherer. William Buckner. Quillette, December 16, 2017. https://www.bipartisanalliance.com/2017/12/romanticizing-hunter-gatherer-despite.html

Zimmermann's World Resources And Industries, 1st edition, 1933

Zimmermann's World Resources And Industries, 1st edition, 1933
https://drive.google.com/open?id=10USDzBnuR0GxZS30wvuWBlZ7DyZ1oAiB

Erich Walter Zimmermann, resource economist, was born in Mainz, Germany, on July 31, 1888 and died in Austin, United States of America, on February 16, 1961. He was an economist at the University of North Carolina and later the University of Texas.
Zimmermann, of the Institutional school of economics, called his real-world theory the functional theory of mineral resources. His followers have coined the term resourceship to describe the theory. Unlike traditional descriptive inventories, Zimmermann's method offered a synthetic assessment of the human, cultural, and natural factors that determine resource availability.
Zimmermann rejected the assumption of fixity. Resources are not known, fixed things; they are what humans employ to service wants at a given time. To Zimmermann (1933, 3; 1951, 14), only human "appraisal" turns the "neutral stuff" of the earth into resources. What are resources today may not be tomorrow, and vice versa. According to Zimmermann, "resources are not, they become." On his definition, the word "resource" does not refer to a thing but to a function which a thing may perform, or to an operation in which it may take part, namely the function or operation of attaining a given end, such as satisfying a want.

Bibliography

  • “Resources of the South”, The South-Atlantic Quarterly (July 1933)
  • World Resources and Industries: A Functional Appraisal of the Availability of Agricultural and Industrial Resources (1933) New York: Harper & Brothers
  • World Resources and Industries, 2nd revised ed. (1951) New York: Harper & Brothers

Sunday, September 22, 2019

More than 40% of our participants experienced difficulties in starting or keeping an intimate relationship; poor flirting skills, poor mate signal-detection ability, and high shyness were associated with poor performance in mating

Mating Performance: Assessing Flirting Skills, Mate Signal-Detection Ability, and Shyness Effects. Menelaos Apostolou et al. Evolutionary Psychology, September 22, 2019. https://doi.org/10.1177/1474704919872416

Abstract: Several people today experience poor mating performance, that is, they face difficulties in starting and/or keeping an intimate relationship. On the basis of an evolutionary theoretical framework, it was hypothesized that poor mating performance would be predicted by poor flirting skills, poor mate signal-detection ability, and high shyness. By employing a sample of 587 Greek-speaking men and women, we found that more than 40% of our participants experienced difficulties in starting and/or keeping an intimate relationship. We also found that poor flirting skills, poor mate signal-detection ability, and high shyness were associated with poor performance in mating, especially with respect to starting an intimate relationship. The effect sizes and the odds ratios indicated that flirting skills had the largest effect on mating performance, followed by the mate signal-detection ability and shyness.

Keywords: mating performance, mating, mismatch, flirting, shyness
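
The reported effect sizes and odds ratios come from models predicting mating performance from the three traits. A minimal sketch of that kind of analysis (synthetic data; the data-generating coefficients are made up, not the paper's estimates):

    # Logistic regression with odds ratios per 1 SD of each predictor.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 587                                    # the paper's sample size
    flirting = rng.normal(size=n)              # standardized trait scores
    detection = rng.normal(size=n)             # mate signal-detection ability
    shyness = rng.normal(size=n)
    logit = -0.4 + 0.9*flirting + 0.5*detection - 0.4*shyness  # assumed effects
    good_mating = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([flirting, detection, shyness]))
    fit = sm.Logit(good_mating, X).fit(disp=False)
    print(np.exp(fit.params[1:]))              # odds ratios for the 3 traits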

Is There a Relationship Between Cyber-Dependent Crime, Autistic-Like Traits and Autism?

Is There a Relationship Between Cyber-Dependent Crime, Autistic-Like Traits and Autism? Katy-Louise Payne et al. Journal of Autism and Developmental Disorders, October 2019, Volume 49, Issue 10, pp 4159–4169. https://link.springer.com/article/10.1007/s10803-019-04119-5

Abstract: International law enforcement agencies have reported an apparent preponderance of autistic individuals amongst perpetrators of cyber-dependent crimes, such as hacking or spreading malware (Ledingham and Mills in Adv Autism 1:1–10, 2015). However, no empirical evidence exists to support such a relationship. This is the first study to empirically explore potential relationships between cyber-dependent crime and autism, autistic-like traits, explicit social cognition and perceived interpersonal support. Participants were 290 internet users, 23 of whom self-reported being autistic, who completed an anonymous online survey. Increased risk of committing cyber-dependent crime was associated with higher autistic-like traits. A diagnosis of autism was associated with a decreased risk of committing cyber-dependent crime. Around 40% of the association between autistic-like traits and cyber-dependent crime was mediated by advanced digital skills.

Keywords: cyber-dependent crime, digital skills, autism, autistic-like traits, explicit social cognition, interpersonal support

Ledingham and Mills (2015) define cybercrime as “The illegal use of computers and the internet, or crime committed by means of computers and the internet.” Within the legal context (e.g. in the USA, UK, Australia, New Zealand, Germany, the Netherlands and Denmark; Ledingham and Mills 2015), there are two distinct types of cybercrime: (1) cyber-dependent crime, which can only be committed using computers, computer networks or other forms of information communication technology (ICT). These include the creation and spread of malware for financial gain, hacking to steal important personal or industry data and distributed denial of service (DDoS) attacks to cause reputational damage; and (2) cyber-enabled crime such as fraud, which can be conducted online or offline, but online may take place at unprecedented scale and speed (McGuire and Dowling 2013; The National Crime Agency: NCA 2016). In England and Wales, all forms of cybercrime were included in the Office for National Statistics crime estimates for the first time in 2016, which resulted in a near doubling of the crime rate. Cyber-dependent crime specifically represented 20% of UK crime (Office for National Statistics 2017) and in England and Wales in 2018, 976,000 cyber-dependent computer misuse incidents were reported (computer viruses and unauthorised access, including hacking: Office for National Statistics 2019). Furnell et al. (2015) propose that it is more important to understand the factors leading to cyber-dependent incidents and how to prevent them, than to focus on metrics such as specific costs to the global economy. Having interviewed cyber-dependent criminals, the NCA’s intelligence assessment (2017) identified that perpetrators are likely to be teenage males who are unlikely to be involved in traditional crime and also that autism spectrum disorder (ASD, hereafter autism) appears to be more prevalent amongst cyber-dependent criminals than the general populace—though this remains unproven. No socio-demographic bias has yet been identified amongst cyber-dependent offenders or those on the periphery of criminality.

This apparent relationship between cyber-dependent crime and autism is echoed in a survey of six international law enforcement agencies’ (UK; USA; Australia; New Zealand; Germany; the Netherlands; Denmark) experiences and contact with autistic cybercriminals (Ledingham and Mills 2015), which indicated that some autistic individuals commit cyber-dependent offences. Offences committed included: hacking; creating coding to enable a crime to be committed; creating, deploying or managing a bot or bot-net; and malware (Ledingham and Mills 2015). This was a small-scale study, limiting the generalisability of findings, but it does indicate a presence of autistic offenders within cyber-dependent crime populations, although the link between autism and cyber-dependent crime remains largely speculative as cyber-dependent criminality may be evidenced within a wide range of populations. Further clarification of any relationship between autism and cyber-dependent crime is required before any conclusions can be inferred.

Studies in Asia, Europe, and North America have identified an average prevalence of autism of between 1% and 2% (CDC 2018). Autism is a long-term condition predominately diagnosed in males, characterised by persistent deficits in social communication and interaction coupled with restricted and repetitive patterns of behaviour, interests or activities (American Psychiatric Association 2013; CDC 2018). One possibility is that the anecdotal evidence of apparent autism-like behaviour in cyber-dependent criminals may actually be reflecting people with high levels of autistic-like traits who do not have a diagnosis of autism (Brosnan in press). Autistic-like traits refer to behavioural traits such as social imperviousness, directness in conversation, lack of imagination, affinity for solitude, and difficulty displaying emotions (Gernsbacher et al. 2017). Autistic-like traits are argued to vary continuously across the general population, with studies reporting that autistic groups typically have higher levels of autistic-like traits than non-autistic comparison groups (Baron-Cohen et al. 2001, 2006; Constantino and Todd 2003; Kanne et al. 2012; Plomin et al. 2009; Posserud et al. 2006; Skuse et al. 2009; see also Bölte et al. 2011; Gernsbacher et al. 2017; Ronald and Hoekstra 2011; Ruzich et al. 2015a for meta-analysis). Autistic-like traits are typically assessed through self-report measures such as the 50-item Autism Spectrum Quotient (AQ: Baron-Cohen et al. 2001; see also Baghdadli et al. 2017). Ruzich et al.’s (2015a) meta-analysis of responses to the AQ from almost 7000 non-autistic and 2000 autistic respondents identified that non-autistic males had significantly higher levels of autistic-like traits than non-autistic females, and that autistic people had significantly higher levels of autistic-like traits compared to the non-autistic males (with no sex difference within the autistic sample). A clinical cut-off of a score of 26 on the AQ has been proposed to be suggestive of autism (Woodbury-Smith et al. 2005b), and whilst there are similarities between those with and without a diagnosis of autism who score above the cut-off on the AQ, the AQ is not diagnostic. Importantly, there are also differences between those with and without a diagnosis of autism who scored above the cut-off (Ashwood et al. 2016; Bralten et al. 2018; Focquaert and Vanneste 2015; Lundqvist and Lindner 2017; see also Frith 2014).

With respect to cyber-dependent crime, some members of both autistic and high autistic-like trait groups will have developed advanced digital skills that are likely to be required to commit cyber-dependent crime. Indeed a specific relationship between ‘autism and the technical mind’ has been previously speculated by Baron-Cohen (2012; see also Wei et al. 2013). Moreover, computer science students and those employed in technology are two of the groups who typically possess higher levels of autistic-like traits (Baron-Cohen et al. 2001; Billington et al. 2007; Ruzich et al. 2015b). These relationships are potentially significant, as cyber-dependent criminal activity requires an advanced level of cyber-related skills (such as proficiency in programming in Java, C/C++, disassemblers, and assembly language and programming knowledge of scripting languages [PHP, Python, Perl, or Shell]; Insights 2018). Thus, there may be an association between autistic-like traits and the potential to develop the advanced digital skills required for cyber-dependent crime.

Assessing the relationship between autistic-like traits and cyber deviancy in a sample of college students, Seigfried-Spellar et al. (2015) found that of 296 university students, 179 (60%) engaged in some form of cyber-deviant behaviour (such as hacking, cyberbullying, identity theft, and virus writing) and the AQ distinguished between those who did and those who did not self-report cyber-deviant behaviour, with higher AQ scores among those reporting cyber-deviant behaviours. The authors also reported that if they used a cut-off score on the AQ of 26 to indicate high levels of autistic-like traits associated with autism, then 7% of the computer non-deviants and 6% of the computer deviants scored in this range. The authors concluded that ‘based on these findings alone, there is no evidence of a significant link between clinical levels of [autism] and computer deviance in the current sample. Nevertheless, the current study did find evidence for computer deviants reporting more autistic-like traits, according to the AQ, compared to computer non-deviants’. However, ‘cyber-deviant’ behaviour in Seigfried-Spellar et al.’s study included both cyber-enabled crimes such as cyberbullying and identity theft, as well as cyber-dependent crimes such as hacking and virus writing. This requires a more nuanced examination as there may be important differences in the relationship between autistic-like traits and cyber-dependent crime compared with cyber-enabled crime.

Cyber-enabled crime is an online variant of traditional crimes (such as fraud) and shares common motivations such as financial gain, whereas the motivations for cyber-dependent crime can be based around a sense of challenge in hacking into a system or enhanced reputation and credibility within hacker communities (NCA 2017). This may be pertinent for the relationship between cyber-dependent crime specifically and autism or autistic-like traits, since cyber-dependent criminals typically have not engaged in traditional crime (NCA 2017) and autism has been associated with generally being law abiding and low rates of criminality (Blackmore et al. 2017; Ghaziuddin et al. 1991; Heeramun et al. 2017; Howlin 2007; Murrie et al. 2002; Wing 1981; Woodbury-Smith et al. 2005a, 2006). In addition, several studies have suggested that autistic internet-users can demonstrate a preference for mediating social processes online, such as preferring to use social media over face-to-face interaction to share interests (Brosnan and Gavin 2015; Gillespie-Lynch et al. 2014; van der Aa et al. 2016). This may be significant, as it has been suggested that social relationships developed online are key to progressing into cyber-dependent crime, with forum interaction and reputation development being key drivers of cyber-dependent criminality (NCA 2017).

Finally, failing to appreciate the impact of crime upon others may be a relevant factor, as autism has been argued to reflect a diminished social cognition (e.g., theory of mind, Baron-Cohen et al. 1985). It has been suggested that there are two levels of social cognition; namely, a quicker and less conscious implicit social cognition, and a more conscious, slower and controlled explicit social cognition (Frith and Frith 2008; see also Heyes 2014). Autistic individuals are often not impaired in explicit social cognition, but are reportedly impaired on implicit social cognition (Callenmark et al. 2014; see also Dewey 1991; Frith and Happé 1999). This profile is also reflected in non-social cognition such as reasoning (Brosnan et al. 2016, 2017; Lewton et al. 2018) which may be better characterised as impaired processing of automatic, cognitively efficient heuristics (Brosnan and Ashwin 2018; Happé et al. 2017). Explicit social cognition is therefore a more pertinent measure of the potential to consider the impact of crime upon others.

The aim of the present study was to explore the apparent relationship identified by international law enforcement agencies between autistic-like traits and cyber-dependent crime. To do this, we conducted an online survey exploring autistic-like traits, cyber-related activities (legal and illegal) as well as perceived interpersonal support and explicit theory of mind. Our research question addressed whether higher autistic-like traits, lower explicit theory of mind and lower perceived interpersonal support would increase the risk of committing cyber-dependent crime. We also addressed whether autistic-like traits would be associated with cyber-dependent crime and whether this relationship would be mediated by advanced digital skills. Given the findings associating higher levels of law-abiding behaviour with autism, we also speculated that autism may represent a group of individuals with higher levels of autistic-like traits, but without a higher risk of committing cyber-dependent crime.


---

Discussion

International law enforcement agencies report an apparent relationship between autism and cyber-dependent crime, although any such link remains unproven (Ledingham and Mills 2015; NCA 2017). This was the first study to empirically explore whether autism, autistic-like traits, explicit social cognition, interpersonal support and digital skills were predictors of cyber-dependent criminality. Whilst higher levels of autistic-like traits were associated with a greater risk of committing cyber-dependent crime, a self-reported diagnosis of autism was associated with a decreased risk of committing cyber-dependent crime. Around 40% of the association between autistic-like traits and cyber-dependent crime was attributable to greater levels of advanced digital skills. Basic digital skills were also found to be a mediator between autistic-like traits and cyber-dependent crime, although they accounted for a smaller proportion of the association than advanced digital skills.
These findings are consistent with the proposal that the apparent association between autism and cyber-dependent crime identified by law enforcement agencies may be reflecting higher levels of autistic-like traits amongst cybercriminals but that this does not necessarily equate to autism being a risk factor for cybercrime. This confusion may well arise because typically, autistic people do report higher levels of autistic-like traits than the general population (Ruzich et al. 2015a). Cyber-dependent crime may therefore represent an area that distinguishes high autistic-trait non-autistic groups from autistic groups, consistent with the proposal that people with autism differ qualitatively from non-autistic people who are nevertheless high in autistic-like traits (see Ashwood et al. 2016; Frith 2014). The finding that autistic respondents were less likely to commit cyber-dependent crime is also consistent with literature suggesting that autistic people are generally as law abiding as the general population, if not more so, with lower levels of criminality shown at least for certain types of crime (Blackmore et al. 2017; Cheely et al.; Ghaziuddin et al. 1991; Heeramun et al. 2017; Howlin 2007; King and Murphy; Murrie et al. 2002; Wing 1981; Woodbury-Smith et al. 2005a, 2006; but see Rava et al.; Tint et al.).
Thus, there is evidence that higher AQ scores are associated with higher levels of cyber-dependent crime regardless of an autism diagnosis. As this association was independent of the autism diagnosis, there may be something about autistic-like traits beyond the diagnostic criteria for autism that relates to cyber-dependent criminal activity. The mediation analysis suggests that an association between autistic-like traits and advanced digital skills may represent a key factor. We cautiously state above that those reporting an autism diagnosis were less likely to report cyber-dependent crime. Caution is needed because this could reflect various factors beyond high AQ and autism being different things, including a diagnosis of autism conferring some protection (e.g., more support leading to less potential criminal behaviour; see Heeramun et al. 2017). Importantly, however, there are potential selection issues in relation to individuals who respond to an invitation to complete an online survey on this topic, so the possibility of selection bias cannot be ruled out. We do not know how many did not respond to the invitations (and therefore could not identify a response rate, for example), and the apparent protective effect could be a chance finding due to small numbers. Future research using larger samples can address such concerns; until then, the suggestion that autism may be protective should be considered speculative, especially as the data are self-reported and diagnostic status could not be independently verified in the present study.
Previous research has identified higher levels of autistic-like traits being present within scientific disciplines in which computer science students and employees are included (Baron-Cohen et al. 2001; Billington et al. 2007; Ruzich et al. 2015b). This study is the first to specify a direct relationship between higher levels of autistic-like traits and advanced digital skills. In addition to being a pre-requisite for committing cyber-dependent crimes, these skills are essential for the cyber security industry, which will have an estimated 3.5 million unfilled jobs by 2021 (Morgan). This study suggests that targeting groups high in autistic-like traits would be a beneficial strategy to meet this employment need. Given the employment difficulties that can be faced by members of the autistic community (Buescher et al.; Knapp et al.; see also Gotham et al.; Hendricks; Howlin; Levy and Perry; National Autistic Society; Taylor et al.; Shattuck et al.) and that around 46% of autistic adults who are employed are either over-educated or exceed the skill level needed for the roles they are in (Baldwin et al.), targeting the autistic community for cyber security employment may be particularly beneficial.
Notwithstanding the limitations described above, this may be particularly pertinent as this study found that a diagnosis of autism was associated with reduced cyber-dependent criminality. This would be consistent with perceptions of autistic strengths of honesty and loyalty (de Schipper et al.)—ideal attributes within employment settings. Importantly, this is not to suggest that all autistic people are good with technology, or that all autistic people should seek employment within cyber security industries (see Milton). Rather, this study highlights that in a particularly challenging employment context, some members of the autistic community may be ideally suited to such employment opportunities and emphasises the need for employers to ensure that their recruitment methods and working environments are autism-friendly and inclusive (see Hedley et al. for review).
The direct link between autistic-like traits and cyber-dependent crime is also consistent with previous research (Seigfried-Spellar et al. 2015) and may extend to a relationship with cyber-enabled crime (such as online fraud). Seigfried-Spellar et al. (2015) explored relationships between autistic-like traits and cyber-deviancy more broadly defined than cyber-dependent crime. Future research could explore whether the level of autistic-like traits, mediated by advanced digital skills, also relates to cyber-enabled crime, and whether there are any direct effects that are specific to cyber-dependent crime. Seigfried-Spellar et al. (2015) and the present study were both cross-sectional studies. The mediation of advanced digital skills between autistic-like traits and cyber-dependent crime has been assumed in the present study, but this could best be established in longitudinal research. A study exploring prison populations to identify whether ‘traditional’ crime was related to autistic-like traits found no differences between prisoners and the general population (Underwood et al.), which may suggest that autistic-like traits are associated with cybercrime specifically (that is, cyber-dependent crime and potentially cyber-enabled crime).
Sex, age, non-verbal IQ, explicit social cognition and perceived interpersonal support did not significantly relate to cyber-dependent criminal activity, which serves to highlight the salience of autistic-like traits. A potential limitation is that explicit social cognition was assessed, but not implicit social cognition. Based on the autism literature (Callenmark et al. 2014; Dewey 1991; Frith and Happé 1999), we would not necessarily expect difficulties with explicit social cognition in groups with high autistic-like traits. Implicit social cognition was also assessed by Callenmark et al. (2014) using interviews after the IToSK. Such interviews, however, do not readily extend to the online context and future research could explore any role of implicit social cognition in cyber-dependent crime. However, recent accounts of implicit social cognition have questioned whether such a system exists and findings from such measures can better be attributed to general attentional processes (Conway et al.; Heyes; Santiesteban et al.).
Future research should also focus on autistic communities as well as those convicted of cyber-dependent and cyber-enabled crimes to further develop our understanding of this area, an important aspect of which is the potential strengths some members of the autistic community can bring to cyber security employment.
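
As an aside for readers less familiar with mediation analysis: the "around 40% of the association was mediated by advanced digital skills" claim above follows the product-of-coefficients logic. A minimal sketch on synthetic data (continuous outcome for simplicity; the sample size matches the study but every coefficient is made up):

    # Proportion mediated = (a*b)/c, with effects planted so that roughly
    # 40% of the total association runs through the mediator. Illustrative
    # only; not the paper's variables, model, or estimates.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 290                                  # the survey's sample size
    aq = rng.normal(size=n)                  # autistic-like traits (X)
    skills = 0.5 * aq + rng.normal(size=n)   # mediator: digital skills (M)
    crime = 0.3 * aq + 0.4 * skills + rng.normal(size=n)  # outcome (Y)

    c = sm.OLS(crime, sm.add_constant(aq)).fit().params[1]   # total effect
    a = sm.OLS(skills, sm.add_constant(aq)).fit().params[1]  # X -> M
    b = sm.OLS(crime, sm.add_constant(np.column_stack([aq, skills]))).fit().params[2]  # M -> Y given X
    print(f"proportion mediated = {a * b / c:.2f}")          # ~0.40 in expectation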

Generalizable and Robust TV Advertising Effects: Substantially smaller advertising elasticities compared to the results documented in the literature

Shapiro, Bradley and Hitsch, Guenter J. and Tuchman, Anna, Generalizable and Robust TV Advertising Effects (September 17, 2019). SSRN: http://dx.doi.org/10.2139/ssrn.3273476

Abstract: We provide generalizable and robust results on the causal sales effect of TV advertising for a large number of products in many categories. Such generalizable results provide a prior distribution that can improve the advertising decisions made by firms and the analysis and recommendations of policy makers. A single case study cannot provide generalizable results, and hence the literature provides several meta-analyses based on published case studies of advertising effects. However, publication bias results if the research or review process systematically rejects estimates of small, statistically insignificant, or “unexpected” advertising elasticities. Consequently, if there is publication bias, the results of a meta-analysis will not reflect the true population distribution of advertising effects. To provide generalizable results, we base our analysis on a large number of products and clearly lay out the research protocol used to select the products. We characterize the distribution of all estimates, irrespective of sign, size, or statistical significance. To ensure generalizability, we document the robustness of the estimates. First, we examine the sensitivity of the results to the assumptions made when constructing the data used in estimation. Second, we document whether the estimated effects are sensitive to the identification strategies that we use to claim causality based on observational data. Our results reveal substantially smaller advertising elasticities compared to the results documented in the extant literature, as well as a sizable percentage of statistically insignificant or negative estimates. If we only select products with statistically significant and positive estimates, the mean and median of the advertising effect distribution increase by a factor of about five. The results are robust to various identifying assumptions, and are consistent with both publication bias and bias due to non-robust identification strategies to obtain causal estimates in the literature.

Keywords: Advertising, Publication Bias, Generalizability
JEL Classification: L00, L15, L81, M31, M37, B41, C55, C52, C81, C18
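
The elasticities in question are slopes in log-log sales models. A stripped-down sketch of the estimand (simulated single-product data with a deliberately small true elasticity; the paper's actual specifications add controls, fixed effects, and identification strategies omitted here):

    # Recovering an advertising elasticity as the slope of log(sales) on
    # log(ad exposure). Synthetic data; not the authors' code or results.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    weeks = 500
    adstock = np.exp(rng.normal(3.0, 0.5, weeks))   # weekly TV ad exposure
    true_elasticity = 0.01                          # small, as the paper finds
    log_sales = 1.0 + true_elasticity * np.log(adstock) + rng.normal(0, 0.1, weeks)

    fit = sm.OLS(log_sales, sm.add_constant(np.log(adstock))).fit()
    print(f"elasticity = {fit.params[1]:.3f} (SE {fit.bse[1]:.3f})")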

To prevent the business-cycle instabilities and ethical issues of laissez-faire, the mandarins in developing economies try to skip the laissez-faire stage, starting at a greater level of regulation

Premature Imitation and India’s Flailing State. Shruti Rajagopalan and Alexander Tabarrok. The Independent Review, v. 24, n. 2, Fall 2019, pp. 165–186. http://www.independent.org/pdf/tir/tir_24_2_01_rajagopalan.pdf

There is the same contrast even between people; between the few highly westernized, trousered, natives educated in western universities, speaking western languages, and glorifying in Beethoven, Mill, Marx or Einstein, and the great mass of their countrymen who live in quite other worlds. —W. Arthur Lewis, “Economic Development with Unlimited Supplies of Labour”

Lant Pritchett (2009) has called India a flailing state. A flailing state is what happens when the principal cannot control its agents. The flailing state cannot implement its own plans and may have its plans actively subverted when its agents work at cross purposes. The Indian state flails because it is simultaneously too large and too small: too large because the Indian government attempts to legislate and regulate every aspect of citizens’ lives and too small because it lacks the resources and personnel to rule according to its ambitions. To explain the mismatch between the Indian state’s ambitions and its abilities, we point to the premature demands by the Indian elite for policies more appropriate to a developed country. We illustrate with four case studies on maternity leave, housing policy, open defecation, and education policy. We then conclude by discussing how the problem of limited state capacity points to presumptive laissez-faire as a preferred governing and learning environment for developing countries.

Matt Andrews, Lant Pritchett, and Michael Woolcock (2017) point to one explanation for India’s flailing state. In order to satisfy external actors, the Indian state and other recipients of foreign funding often take on tasks that overwhelm state capacity, leading to premature load bearing. As these authors put it, “By starting off with unrealistic expectations of the range, complexity, scale, and speed with which organizational capability can be built, external actors set both themselves and (more importantly) the governments they are attempting to assist to fail” (62).

The expectations of external actors are only one source of imitation, however. Who people read, listen to, admire, learn from, and wish to emulate is also key. We argue that another factor driving inappropriate imitation is that the Indian intelligentsia—the top people involved in politics, the bureaucracy, universities, think tanks, foundations, and so forth—are closely connected with Anglo-American elites, sometimes even more closely than they are to the Indian populace. As a result, the Indian elite initiates and supports policies that appear to it to be normal even though such policies may have little relevance to the Indian population as a whole and may be wildly at odds with Indian state capacity. This kind of mimicry of what appear to be the best Western policies and practices is not necessarily ill intentioned. It might not be pursued to pacify external or internal actors, and it is not a deliberate attempt to exclude the majority of citizens from the democratic policy-making process. It is simply one by-product of the background within which the Indian intellectual class operates. The Indian elites are more likely, because of their background, to engage with global experts in policy dialogues that have little relevance to the commoner in India.

In the next sections, we discuss the flailing state and the demographics of the Indian elite. We then illustrate with case studies on maternity leave, housing policy, open defecation, and right-to-education policy how India passes laws and policies that make sense to the elite but are neither relevant nor beneficial to the vast majority of Indians. We conclude with a discussion of the optimal governing and learning environment when state capacity is limited.


Conclusion: Limited State Capacity Calls for Presumptive Laissez-Faire

The Indian state does not have enough capacity to implement all the rules and regulations that elites, trying to imitate the policies of developed economies, desire. The result is premature load bearing and a further breakdown in state capacity. It doesn’t follow that rule by non-elites would be better. It could be worse. Nevertheless, there are some lessons about what kinds of things can and cannot be done with limited state capacity. States with limited capacity have great difficulty implementing tasks with performance goals that are difficult to measure and contested. In any bureaucracy, the agents involved ask themselves whether to perform according to the bureaucracy’s goals or to their own. Incentives can ideally be structured so that goals align. But when states have limited capacity and performance goals are difficult to state or measure, it becomes easier for agents to act in their own interests.

At the broadest level, this suggests that states with limited capacity should rely more on markets even when markets are imperfect—presumptive laissez-faire. The market test isn’t perfect, but it is a test. Markets are the most salient alternative to state action, so when the cost of state action increases, markets should be used more often. Imagine, for example, that U.S. government spending had to be cut by a factor of ten. Would it make sense to cut all programs by 90 percent? Unlikely. Some programs and policies are of great value, but others should be undertaken only when state capacity and GDP per capita are higher. As Edward Glaeser quips, “A country that cannot provide clean water for its citizens should not be in the business of regulating film dialogue” (2011). A U.S. government funded at one-tenth the current level would optimally do many fewer things. So why doesn’t the Indian government do many fewer things? Indeed, when we look across time, we see governments providing more programs as average incomes rise. Over the past two hundred years, for example, the U.S. government has grown larger and taken on more tasks as U.S. average incomes have increased. But when we look across countries today, we do not see this pattern. Poor countries do not have notably smaller governments than rich countries. Indeed, poor countries often regulate more than rich countries (Djankov et al. 2002).

The differing patterns make sense from the perspective of the folk wisdom of much development economics. From this perspective, the fact that the developed economies might have started out more laissez-faire is an irrelevant historical observation. In fact, according to this view, because the developed economies have already evidently learned that laissez-faire led to inefficiencies, business-cycle instabilities, and environmental, distributional, and other ethical problems, it makes sense for the less-developed economies to skip the laissez-faire stage. Thus, the folk wisdom of development economics holds that what a developing economy learns from the history of developed economies is to avoid the mistakes of relative laissez-faire and begin with greater regulation.

In the alternative view put forward here, relative laissez-faire is a step to development, perhaps even a necessary step, even if the ultimate desired end point of development is a regulated, mixed economy. Presumptive laissez-faire is the optimal form of government for states with limited capacity and also the optimal learning environment for states to grow capacity. Under laissez-faire, wealth, education, trade, and trust can grow, which in turn will allow for greater regulation.

Moral exemplars are often held up as objects to be admired, which is thought to induce admirers to emulate virtuous conduct; but admiration induces passivity rather than an incentive to self-improvement

From 2018... The Vice of Admiration. Jan-Willem van der Rijt. Philosophy, Volume 93, Issue 1, January 2018 , pp. 69-90. https://doi.org/10.1017/S0031819117000353

Abstract: Moral exemplars are often held up as objects to be admired. Such admiration is thought beneficial to the admirer, inducing him or her to emulate virtuous conduct, and deemed flattering to the admired. This paper offers a critical examination of admiration from a broadly Kantian perspective, arguing that admiration – even of genuine moral exemplars – violates the duty of self-respect. It also provides an explanation for the fact that moral exemplars themselves typically shun admiration. Lastly, it questions the assumption that admiration leads to emulation on the basis of scientific findings that indicate that admiration induces passivity in the admirer rather than an incentive to self-improvement.

Australian banknotes: 15–35 per cent are used to facilitate legitimate transactions, 4–7 per cent are used for transactions in the shadow economy, 5–10 per cent are lost or collected; the rest are hoarded

Where's the Money An Investigation into the Whereabouts and Uses of Australian Banknotes. Richard Finlay  Andrew Staib  Max Wakefield. The Australian Economic Review, September 20 2019. https://doi.org/10.1111/1467-8462.12342

Abstract: The Reserve Bank of Australia is the sole issuer and redeemer of Australian banknotes, but between issuance and destruction there is little information about where banknotes go or what they are used for. We estimate the whereabouts and uses of Australian banknotes, and find that 15–35 per cent are used to facilitate legitimate transactions, 4–7 per cent are used for transactions in the shadow economy, while the remainder are non‐transactional. The vast majority of non‐transactional banknotes are likely to be hoarded, and we estimate that 5–10 per cent of outstanding banknotes are lost or collected.
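
The residual bookkeeping implied by the abstract's ranges (a rough sketch; the paper's estimates are constructed more carefully than this simple subtraction):

    # Shares of outstanding banknotes, per the abstract (per cent).
    transactional = (15, 35)     # legitimate transactions
    shadow = (4, 7)              # shadow-economy transactions
    lost = (5, 10)               # lost or collected
    # Whatever remains is attributed to hoarding.
    hoarded = (100 - transactional[1] - shadow[1] - lost[1],
               100 - transactional[0] - shadow[0] - lost[0])
    print(f"implied hoarded share: {hoarded[0]}-{hoarded[1]} per cent")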


The Association Between Repeated Romantic Rejection and Change in Ideal Standards (which are lowered, & more flexibility is introduced), & Lower Self-Perceived Mate Value

The Association Between Romantic Rejection and Change in Ideal Standards, Ideal Flexibility, and Self-Perceived Mate Value. Nicolyn H. Charlot, Rhonda N. Balzarini, and Lorne J. Campbell. Social Psychology, September 20, 2019. https://doi.org/10.1027/1864-9335/a000392

Abstract. Research has shown that ideal romantic standards predict future partner characteristics and influence existing relationships, but how standards develop and change among single individuals has yet to be explored. Guided by the Ideal Standards Model (ISM), the present study sought to determine whether repeated experiences of romantic rejection and acceptance over time were associated with change in ideal standards, ideal flexibility, and self-perceived mate value (N = 208). Results suggest repeated experiences of rejection correspond to decreases in ideal standards and self-perceived mate value and increases in ideal flexibility, though no effects emerged for acceptance. Given the predictive nature of ideal standards and the link rejection has with such, findings from this study contribute to a greater understanding of relationship formation processes.

Keywords: romantic relationships, ideal standards, ideal flexibility, self-perceived mate value, rejection, acceptance

Long-term, stable marriages of midlife Americans were characterized by a linear increase in relationship satisfaction over 20 years & a linear decline in sexual satisfaction in the same time frame

Relationship and sexual satisfaction: A developmental perspective on bidirectionality. Christopher Quinn-Nilas. Journal of Social and Personal Relationships, September 19, 2019. https://doi.org/10.1177/0265407519876018

Abstract: Researchers have investigated the directionality between relationship and sexual satisfaction; however, there remains no definitive conclusion. Previous longitudinal studies have not conceptualized relationship and sexual satisfaction as systematic developmental processes and have focused on predicting scores at later time points. Instead, researchers should be concerned with understanding how relationship and sexual satisfaction change together over time. The objective of this study was to use longitudinal data from midlife American marriages to test the directionality of the association between relationship satisfaction and sexual satisfaction. Multivariate latent growth curve modeling of 1,456 midlife Americans married for 20 years from the Midlife in the United States study was used to compare directionality models. Findings support that long-term, stable marriages of midlife Americans at the sample level were characterized by a linear increase in relationship satisfaction over 20 years and a linear decline in sexual satisfaction during the same time frame. A co-change model, wherein relationship and sexual satisfaction changed together over time, fit the data best. Trajectory correlations showed that changes in relationship and sexual satisfaction were strongly interconnected. High initial levels of sexual satisfaction protected against declines in relationship satisfaction over 20 years. Results support that relationship and sexual satisfaction change together over time and highlight that the longitudinal association between these outcomes is dynamic rather than static.

Keywords: Marriage, midlife Americans, MIDUS, multivariate growth curve, relationship satisfaction, sexual satisfaction
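
A minimal sketch of the growth-modeling idea behind the paper (a random-intercept, random-slope mixed model for one outcome over three waves, standing in for the multivariate latent growth curve model actually used; data are simulated, not MIDUS):

    # Linear growth in satisfaction across three waves, with per-person
    # random intercepts and slopes. Illustrative only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 1456                                   # couples in the analytic sample
    person = np.repeat(np.arange(n), 3)
    wave = np.tile([0.0, 1.0, 2.0], n)         # three waves spanning ~20 years
    intercept_i = rng.normal(8.0, 1.0, n)[person]
    slope_i = rng.normal(0.1, 0.2, n)[person]  # mild average increase (assumed)
    y = intercept_i + slope_i * wave + rng.normal(0, 0.5, 3 * n)

    df = pd.DataFrame({"person": person, "wave": wave, "satisfaction": y})
    fit = smf.mixedlm("satisfaction ~ wave", df, groups=df["person"],
                      re_formula="~wave").fit()
    print(fit.params[["Intercept", "wave"]])   # average level and per-wave trend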

---
Strengths
The MLGCM methodology is a key strength of this study because it allowed the examination of bidirectional associations between sexual and relationship satisfaction growth factors and allowed each outcome to be estimated as its own developmental process. Furthermore, this study is novel because it approached the question of directionality between relationship and sexual satisfaction by systematically testing competing unidirectional and bidirectional models using characteristics of change trajectories rather than static scores. These strengths are buttressed by the large sample, drawn from a nationally representative study, which meets standards for statistical power (Hertzog et al., 2006). The 20-year time horizon of the MIDUS study made possible inferences covering a substantial amount of time.

Limitations
Despite the noted strengths, several limitations of the study are noteworthy. This study used single-indicator measurement to assess relationship and sexual satisfaction. Additionally, several limitations are introduced by having only three time points. Having three time points does not allow for the thorough testing of nonlinear or more complex trends, which could be assessed using a daily diary format (e.g., Day et al., 2015). Even though the duration of the MIDUS study was long, which allowed for long-term inferences, this structure neglects the more idiosyncratic microdevelopments and microchanges that may occur on a smaller time scale (i.e., day to day, week to week, month to month, year to year).
The original MIDUS data were from a nationally representative sample obtained with random-digit dialing, but it should be noted that there may be a selection effect inherent to the current study’s inclusion criteria (participants who stayed married to the same person for the entire duration). In the current study, the regressive bidirectional model (Model 8) was discarded because the design of the MIDUS study did not contain a meaningful intercept point. Although this is true in the present analyses, in other study designs that contain substantive intercepts (i.e., studies that begin at the start of a marriage), the regressive model need not be discarded. In such a case, a more thorough assessment can be made between the correlated model and the regressive model. Another limitation concerns the wording of the sexual satisfaction question, which asked about sexual satisfaction broadly rather than specifically about the marital relationship.


Heritability of alcohol consumption in adulthood, Finnish twins: The youngest cohorts show greater non‐shared environmental & additive genetic influences than the older ones

Birth cohort effects on the quantity and heritability of alcohol consumption in adulthood: a Finnish longitudinal twin study. Suvi Virtanen, Jaakko Kaprio, Richard Viken, Richard J. Rose, Antti Latvala. Addiction, December 19, 2018. https://doi.org/10.1111/add.14533

Abstract
Aims: To estimate birth cohort effects on alcohol consumption and abstinence in Finland and to test differences between birth cohorts in genetic and environmental sources of variation in Finnish adult alcohol use.

Design: The Older Finnish Twin Cohort longitudinal survey study 1975–2011.

Setting: Finland.

Participants: A total of 26 121 same‐sex twins aged 18–95 years (full twin pairs at baseline n = 11 608).

Measurements: Outcome variables were the quantity of alcohol consumption (g/month) and abstinence (drinking zero g/month). Predictor variables were 10‐year birth cohort categories and socio‐demographic covariates. In quantitative genetic models, two larger cohorts (born 1901–20 and 1945–57) were compared.

Findings: Multi‐level models in both sexes indicated higher levels of alcohol consumption in more recent birth cohorts and lower levels in earlier cohorts, compared with twins born 1921–30 (all P < 0.003). Similarly, compared with twins born 1921–30, abstaining was more common in earlier and less common in more recent cohorts (all P < 0.05), with the exception of men born 1911–20. Birth cohort differences in the genetic and environmental variance components in alcohol consumption were found: heritability was 21% [95% confidence interval (CI) = 0–56%] in the earlier‐born cohort of women [mean age 62.8, standard deviation (SD) = 5.3] and 51% (95% CI = 36–56%) in a more recent cohort (mean age 60.2, SD = 3.7) at the age of 54–74. For men, heritability was 39% (95% CI = 27–45%) in both cohorts. In alcohol abstinence, environmental influences shared between co‐twins explained a large proportion of variation in the earlier‐born cohort (43%, 95% CI = 23–63%), whereas non‐shared environmental (54%, 95% CI = 39–72%) and additive genetic influences (40%, 95% CI = 13–61%) were more important among more recent cohorts of men and women.

Conclusion: The contribution of genetic and environmental variability to variability in alcohol consumption in the Finnish population appears to vary by birth cohort.
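
The heritability figures above come from quantitative genetic (ACE) models. Falconer's classic approximation conveys the core logic in a few lines (a sketch, not the authors' structural-equation models; the twin correlations below are hypothetical, chosen to reproduce the ~51% figure reported for the more recent cohort of women):

    # Falconer's formulas: variance shares from MZ and DZ twin correlations.
    def falconer(r_mz, r_dz):
        h2 = 2 * (r_mz - r_dz)   # A: additive genetic (heritability)
        c2 = 2 * r_dz - r_mz     # C: shared environment
        e2 = 1 - r_mz            # E: non-shared environment
        return {"A": h2, "C": c2, "E": e2}

    # Hypothetical correlations yielding A = 0.51, C = 0.12, E = 0.37.
    print(falconer(r_mz=0.63, r_dz=0.375))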

Saturday, September 21, 2019

Punish or Protect? How Close Relationships Shape Responses to Moral Violations

Punish or Protect? How Close Relationships Shape Responses to Moral Violations. Aaron C. Weidman et al. Personality and Social Psychology Bulletin, September 19, 2019. https://doi.org/10.1177/0146167219873485

Abstract: People have fundamental tendencies to punish immoral actors and treat close others altruistically. What happens when these tendencies collide—do people punish or protect close others who behave immorally? Across 10 studies (N = 2,847), we show that people consistently anticipate protecting close others who commit moral infractions, particularly highly severe acts of theft and sexual harassment. This tendency emerged regardless of gender, political orientation, moral foundations, and disgust sensitivity and was driven by concerns about self-interest, loyalty, and harm. We further find that people justify this tendency by planning to discipline close others on their own. We also identify a psychological mechanism that mitigates the tendency to protect close others who have committed severe (but not mild) moral infractions: self-distancing. These findings highlight the role that relational closeness plays in shaping people’s responses to moral violations, underscoring the need to consider relational closeness in future moral psychology work.

Keywords: moral psychology, close relationships, loyalty, harm, self-distancing

Classroom Size and the Prevalence of Bullying and Victimization: Smaller classrooms see more bullying

Classroom Size and the Prevalence of Bullying and Victimization: Testing Three Explanations for the Negative Association. Claire F. Garandeau, Takuya Yanagida, Marjolijn M. Vermande, Dagmar Strohmeier and Christina Salmivalli. Front. Psychol., September 20, 2019. https://doi.org/10.3389/fpsyg.2019.02125

Abstract: Classroom size - i.e., the number of students in the class - is a feature of the classroom environment often found to be negatively related to bullying or victimization. This study examines three possible explanations for this negative association: (a) it is due to measurement effects and therefore only found for peer-reports (Hypothesis 1), (b) bullying perpetrators are more popular and have more friends in smaller classrooms (Hypothesis 2), (c) targets of bullying are more popular and have more friends in larger classrooms (Hypothesis 3). Multilevel regression analyses were conducted on a sample from Austria (1,451 students; Mage = 12.31; 77 classes) and a sample from the Netherlands (1,460 students; Mage = 11.06; 59 classes). Results showed that classroom size was negatively associated with peer-reported bullying and victimization in both samples, and with self-reported bullying and victimization in the Dutch sample only, suggesting partial support for Hypothesis 1. Students high in bullying were found to be more popular in smaller than in larger classrooms in the Austrian sample. The negative link between victimization and popularity was found to be stronger in smaller classrooms than in larger classrooms in the Dutch sample. However, classroom size was not found to moderate links between bullying or victimization and friendship in either sample. Hypotheses 2 and 3 were supported, but only for popularity and in a single sample. Further research is needed to better understand the higher prevalence of bullying found in smaller classrooms in many studies.

Introduction
The prevalence of bullying and victimization in classrooms is not merely the result of individual characteristics of the bullying perpetrators and their targets but is influenced by features of the classroom environment (Saarento et al., 2015). These contextual characteristics include the anti-bullying attitudes and behaviors of peer bystanders (Salmivalli et al., 2011) and of teachers (Veenstra et al., 2014; Oldenburg et al., 2015), as well as aspects of the peer social network, such as the degree of status hierarchy in the classroom (Garandeau et al., 2014). Classroom size - i.e., the number of students in the class - is a structural feature that has often been investigated in relation to academic achievement (see Finn et al., 2003), with smaller classrooms often found to be beneficial for academic performance (Hoxby, 2000; Shin and Raudenbush, 2011) and even earnings later in life (Fredriksson et al., 2013). Intuitively, we would expect the same advantageous effects of small classrooms on bullying. Smaller classrooms should logically protect against bullying thanks to higher adult/child ratios, allowing a more effective monitoring of children’s negative behaviors by school personnel.
Classroom size has been investigated in many studies on victimization and bullying, often as a control variable rather than a main predictor of interest. Surprisingly, very few studies found evidence of a protective effect of smaller classroom networks on bullying or victimization (Whitney and Smith, 1993; Khoury-Kassabri et al., 2004). The large majority of studies examining the link between classroom size and bullying or victimization found them to be either negatively associated (e.g., Vervoort et al., 2010) or unrelated (e.g., Thornberg et al., 2017). However, the reason why bullying and victimization would be more prevalent in smaller classrooms remains unclear.
The present study aims to test three possible explanations for this negative association. First, the negative association may not reflect an actual social phenomenon but may instead result from a measurement effect related to the way peer-reported scores are computed. In this case, the prevalence-size link should be negative for peer-reported, but not for self-reported, bullying and victimization (Hypothesis 1). Second, it is possible that bullying perpetrators enjoy higher status and are more socially connected in smaller classrooms, which in turn facilitates their bullying behavior. Engaging in bullying may be associated with higher perceived popularity and more friendships in smaller than in larger classrooms (Hypothesis 2). Third, victims may have less social support and fewer opportunities for friendships in smaller classrooms, which in turn could contribute to the maintenance of their victimization. Being victimized may be associated with lower perceived popularity and fewer friendships in smaller than in larger classrooms (Hypothesis 3). These hypotheses will be tested with large samples from two countries, using both self-reports and peer-reports of bullying and victimization.
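Hypothesis 1 turns on simple arithmetic: peer-reported scores are typically computed as the proportion of classmates who nominate a student, so the same number of nominations yields a higher score in a smaller classroom. A minimal sketch of this mechanical effect, assuming the common proportion-score convention (this is not the authors' code):

```python
# Why peer-nomination scores can depend mechanically on classroom size,
# assuming each student is scored as the proportion of classmates who
# nominate them (a common convention; an assumption here, not the paper's code).

def peer_report_score(nominations_received: int, class_size: int) -> float:
    """Proportion score: nominations divided by the number of possible
    nominators (all classmates other than the student)."""
    return nominations_received / (class_size - 1)

# Two students showing identical behaviour, each nominated by 3 classmates:
small_class = peer_report_score(nominations_received=3, class_size=15)
large_class = peer_report_score(nominations_received=3, class_size=30)

print(f"Small classroom score: {small_class:.3f}")  # 3/14 ~ 0.214
print(f"Large classroom score: {large_class:.3f}")  # 3/29 ~ 0.103
# The same behaviour yields roughly twice the peer-reported score in the
# smaller classroom: a purely arithmetic route to a negative association
# between classroom size and peer-reported bullying or victimization.
```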

How Should We Measure City Size? Theory and Evidence Within and Across Rich and Poor Countries

How Should We Measure City Size? Theory and Evidence Within and Across Rich and Poor Countries. Remi Jedwab, Prakash Loungani, and Anthony Yezer. IMF Working Papers, WP/19/203. Sep 2019. https://www.imf.org/en/Publications/WP/Issues/2019/09/20/How-Should-We-Measure-City-Size-Theory-and-Evidence-Within-and-Across-Rich-and-Poor-Countries-48671

Abstract: It is obvious that, holding city population constant, differences among cities across the world are enormous. Urban giants in poor countries are not large using measures such as land area, interior space or value of output. These differences are easily reconciled mathematically, as population is the product of land area, structure space per unit land (i.e., heights), and population per unit interior space (i.e., crowding). The first two are far larger in the cities of developed countries, while the third is larger in the cities of developing countries. First, in order to study sources of diversity among cities with similar populations, we construct a version of the standard urban model (SUM) that yields the prediction that the elasticity of city size with respect to income could be similar within both developing and developed countries. However, differences in income and urban technology can explain the physical differences between the cities of developed and developing countries. Second, using a variety of newly merged data sets, the predictions of the SUM for similarities and differences of cities in developed and developing countries are tested. The findings suggest that population is a sufficient statistic to characterize differences among cities within the same country, but not across countries.
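The decomposition in the abstract can be written out as an accounting identity; the notation below is chosen here for illustration (N population, L land area, S total structure/interior space), not taken from the paper:

```latex
% Accounting identity behind the abstract's decomposition (notation ours):
% N = population, L = land area, S = total structure (interior) space.
\[
  N \;=\; L \times \underbrace{\frac{S}{L}}_{\text{height}} \times \underbrace{\frac{N}{S}}_{\text{crowding}}
\]
% Cities with identical N can therefore differ enormously: rich-country
% cities are large in L and S/L, poor-country cities are large in N/S.
```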

JEL Classification Numbers: R13; R14; R31; R41; R42; O18; O2; O33

We cannot know these things with certainty, but it seems nonhuman primates don't understand themselves to be playing roles with intentional coordination or division of labor

The role of roles in uniquely human cognition and sociality. Michael Tomasello. Journal for the Theory of Social Behaviour, August 16 2019. https://doi.org/10.1111/jtsb.12223

Abstract: To understand themselves as playing a social role, individuals must understand themselves to be contributing to a cooperative endeavor. Psychologically, the form of cooperation required is a specific type that only humans may possess, namely, one in which individuals form a joint or collective agency to pursue a common end. This begins ontogenetically not with the societal level but rather with more local collaboration between individuals. Participating in collaborative endeavors of this type leads young children, cognitively, to think in terms of different perspectives on a joint focus of attention ‐ including ultimately an objective perspective ‐ and to organize their experience in terms of a relational‐thematic‐narrative dimension. Socially, such participation leads young children to an understanding of self‐other equivalence with mutual respect among collaborative partners and, ultimately, to a normative (i.e. moral) stance toward “we” in the community within which one is forming a moral role or identity. The dual‐level structure of shared endeavors/realities with individual roles/perspectives is responsible for many aspects of the human species' most distinctive psychology.

---
2.1 Social roles in great apes and early humans?

Humans' nearest primate relatives, chimpanzees and bonobos, live in complex social groups. From an external (functionalist) perspective it is of course possible to speak of the various roles individuals are playing in the group. But does this notion have any meaning for them? Does it make sense, from their point of view, to say that the dominant male chimpanzee is playing the role of peacemaker in the group?
While we cannot know these things with certainty, the proposal here is that neither chimpanzees nor bonobos (nor any other nonhuman primates) understand themselves to be playing roles in anything. Although many, perhaps most, of their social interactions are competitive (even if bonobos are less aggressive), they also cooperate in some ways, and so the notion of role is at least potentially applicable. As a frequently occurring example, if one chimpanzee begins fighting with another, it often happens that the friends of each combatant join the fray on the side of their friend. It is unlikely that they see themselves as playing roles in this coalition. More likely, each individual is participating for her own individual goal, sometimes helping the other in that context. But they are basically just fighting side by side, without intentional coordination or division of labor toward a common goal. As another example, when chimpanzee or bonobo pairs are engaged in mutual grooming, we could say from the outside that one is in the groomer role and one in the groomee role. But again this interpretation may be entirely our own; they may just be searching for fleas and enjoying being cleaned, respectively. And, for whatever it is worth, both agonistic coalitions and grooming are social interactions that are performed by all kinds of other mammal species and even birds.
By far the most plausible candidate for an understanding of social roles in nonhuman primates is chimpanzee group hunting. What happens prototypically is that a small party of male chimpanzees spies a red colobus monkey somewhat separated from its group, which they then proceed to surround and capture. Normally, one individual begins the chase and others scramble to the monkey's possible escape routes. Boesch (2005) has claimed that there are roles involved here: the chaser, the blocker, and the ambusher, for instance. Other fieldworkers have not described the hunts in such terms, noting that during the process (which can last anywhere from a few minutes to half an hour) individuals seem to switch from chasing to blocking and to ambushing from minute to minute (Mitani, personal communication). In the end, one individual actually captures the monkey, and he obtains the most and best meat. But because he cannot dominate the carcass on his own, all participants (and many bystanders) usually get some meat (depending on their dominance and the vigor with which they harass the captor; Gilby, 2006). Tomasello, Carpenter, Call, Behne, and Moll (2005) thus propose a “lean” reading of this activity, based on the hypothesis that the participants do not have a joint goal of capturing the monkey together – and thus there are no individual roles toward that end. Instead, each individual is attempting to capture the monkey on its own (since captors get the most meat), and they take into account the behavior, and perhaps intentions, of the other chimpanzees as these affect their chances of capture. In general, it is not clear that the process is fundamentally different from the group hunting of other social mammals, such as lions and wolves and hyenas, either socially or cognitively. Experimental support for this interpretation will be presented below.
The evolutionary hypothesis is that at some point in human evolution, early humans began collaborating with one another in some new ways involving shared goals and individual roles. The cognitive and motivational structuring of such collaborative activities is best described by philosophers of action such as Bratman (2014), Searle (2010), and Gilbert (2014), in terms of human skills and motivations of shared intentionality. The basic idea is that humans are able to form with others a shared agent ‘we’, which then can have various kinds of we‐intentions. In Bratman's formulation, for example, two individuals engage in what he calls a shared cooperative activity when they each have the goal that they do something together and they both know together in common conceptual ground that they have this shared goal. This generates roles, that is, what “we” expect each of “you” and “me” to do in order for us to reach our shared goal. Gilbert (2014) highlights the normative dimension of such roles. When two participants make a joint commitment to cooperate, for example, each pledges to the other that she will play her role faithfully until they have reached their shared goal. If either of them shirks her role they will together, as a shared agent, chastise her – a kind of collaborative self‐regulation of the shared agency. This special form of cooperative organization scales up to much larger social structures and institutions such as governments or universities, in which there are cooperative goals and well‐defined roles that individuals must play to maintain the institution's cooperative functioning.
Tomasello (2014, 2016) provides a speculative evolutionary account of how humans came to engage with one another in acts of shared intentionality. There were two steps. The first step came with early humans (i.e., beginning with the genus Homo, from some 2 million years ago to approximately 0.4 million years ago). Due to a change in their feeding ecology ‐ perhaps driven by more intense competition from other species for their normal foods ‐ early humans were forced to collaborate with one another to obtain new kinds of resources not available to their competitors (e.g., large game and also plant resources requiring multiple individuals for harvesting). In these early collaborative activities, early human individuals understood their interdependence ‐ that each needed the other ‐ and this led them to structure their collaborative activities via skills and motivations of joint intentionality: the formation of a joint agency to pursue joint goals via individual roles. As partners were collaborating toward a joint goal, they were jointly attending to things relevant to their joint goal – with each retaining her own individual perspective (and monitoring the other's perspective) at the same time. Such joint attention means not only that individuals are attending to the same situation, but that each knows the other is attending to her partner's attention to the relevant situation, and so on: there is recursive perspective‐taking. When individuals experienced things in joint attention, those experiences entered their common ground as joint experience or knowledge, so that in the future they both knew that they both knew certain things.
The second step came with modern humans (i.e., beginning with Homo sapiens sapiens some 200,000 years ago). Due to increasing group sizes and competition with other groups, humans began organizing themselves into distinctive cultures. In this context, a cultural group may be thought of as one big collaborative activity aimed at group survival, as all individuals in the group were dependent on one another for many necessities, including group defense. To coordinate with others, including in‐group strangers, it was necessary to conform to the cultural practices established for just such coordination. Knowledge of these cultural practices was not just in the personal common ground of two individuals who had interacted in the appropriate circumstances previously, as with early humans, but rather such knowledge was in the cultural common ground of the group: each individual knew that all other members of the group knew these things and knew that they knew them as well even if they had never before met. Making such cultural practices formal and explicit in the public space turned them into full‐blown cultural institutions, with well‐defined roles (from professional roles to the most basic role of simply being a group member in good standing) that must be played for their maintenance. The new cognitive skills and motivations underlying the shift to truly cultural lifeways were thus not between individuals but between the individual and the group – involving a kind of collective agency ‐ and so may be referred to as collective intentionality.
The proposal is thus that the notion of social role, as understood by participants in a social or cultural interaction, came into existence in human evolution with the emergence of shared intentionality, as the psychological infrastructure for engaging in especially rich forms of collaborative, even cultural, activities. The notion of social role is thus indissociable, psychologically speaking, from cooperation. The evolutionary precursor to the notion of a societal role, as typically conceived by sociologists and social psychologists, is thus the notion of an individual role in a small‐scale collaborative activity; societal roles in larger‐scale cultural institutions build on this psychological foundation.

A million loyalty card transactions: Disconnect between predicting pro-environmental attitudes (against plastic bags) & a specific ecological behaviour measured objectively in the real world

Lavelle-Hill, Rosa E., Gavin Smith, Peter Bibby, David Clarke, and James Goulding. 2019. “Psychological and Demographic Predictors of Plastic Bag Consumption in Transaction Data.” PsyArXiv. September 20. doi:10.31234/osf.io/nv57c
Abstract: Despite the success of plastic bag charges in the UK, there are still around a billion single-use plastic bags bought each year in England alone, and the government have made plans to increase the levy from 5 to 10 pence. Previous research has identified motivations for bringing personal bags to the supermarket, but little is known about the individuals who continue to purchase single-use plastic bags frequently after the levy. In this study, we harnessed over a million loyalty card transaction records from a high-street health and beauty retailer linked to 12,968 questionnaire responses measuring demographics, shopping motivations, and individual differences. We utilised an exploratory machine learning approach to expose the demographic and psychological predictors of frequent plastic bag consumption. In the transaction data we identified 2,326 frequent single-use plastic bag buyers, which we matched randomly to infrequent buyers to create the balanced sub-sample we used for modelling (N=4,652). Frequent bag buyers spent more money in store, were younger, more likely to be male, less frugal, open to new experiences, and displeased with their appearance compared with infrequent bag buyers. Statistically significant regional differences also emerged. Interestingly, environmental concerns did not predict plastic bag consumption, highlighting the disconnect between predicting pro-environmental attitudes and a specific ecological behaviour measured objectively in the real world.
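The matching step described in the abstract is straightforward to reproduce in outline. Below is a minimal sketch, not the authors' pipeline, of building the balanced sub-sample and ranking predictors; the column names are hypothetical, and the random forest merely stands in for whatever exploratory learner the authors used, which the abstract does not specify:

```python
# Sketch of a balanced sub-sample plus an exploratory classifier.
# All field names ("frequent_buyer", the feature list) are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def balanced_subsample(df: pd.DataFrame, label: str, seed: int = 0) -> pd.DataFrame:
    """Pair every positive case with a randomly drawn negative case."""
    positives = df[df[label] == 1]
    negatives = df[df[label] == 0].sample(n=len(positives), random_state=seed)
    return pd.concat([positives, negatives])

# df: one row per questionnaire respondent, linked to transaction history.
# sample = balanced_subsample(df, label="frequent_buyer")  # 2 x 2,326 = 4,652 rows
# features = ["age", "gender", "frugality", "openness", "env_concern", "spend"]
# model = RandomForestClassifier(random_state=0)
# model.fit(sample[features], sample["frequent_buyer"])
# ranked = sorted(zip(features, model.feature_importances_), key=lambda t: -t[1])
# Inspecting `ranked` shows which demographic and psychological variables carry
# predictive weight; in the paper, environmental concern carried essentially none.
```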



Friday, September 20, 2019

New Coal Plants Totaling over 579 GW in the Pipeline, Say NGOs

NGOs Release New Global Coal Exit List for Finance Industry. Urgewald, Sep 20 2019. https://urgewald.org/medien/ngos-release-new-global-coal-exit-list-finance-industry

Berlin, September 19, 2019
Companies Driving the World’s Coal Expansion Revealed

One day before the Global Climate Strike, Urgewald and 30 partner NGOs have released a new update of the “Global Coal Exit List” (GCEL), the world’s most comprehensive database of companies operating along the thermal coal value chain.

“Our 2019 data shows that the time for patient engagement with the coal industry has definitely run out,” says Heffa Schuecking, director of Urgewald. While the world’s leading climate scientists and the United Nations have long warned that coal-based energy production must be rapidly phased out, over 400 of the 746 companies on the Global Coal Exit List are still planning to expand their coal operations. “It is high time for banks, insurers, pension funds and other investors to take their money out of the coal industry,” says Schuecking.

The Global Coal Exit List (GCEL) was first launched in November 2017 and has played an influential role in shaping the coal divestment actions of many large investors, especially in Europe. Over 200 financial institutions are now registered users of the database and investors representing close to US$ 10 trillion in assets are using one or more of the GCEL’s divestment criteria to screen coal companies out of their portfolios.

The database covers the largest coal plant operators and coal producers; companies that generate over 30% of their revenues or power from coal; and all companies that are planning to expand coal mining, coal power or coal infrastructure.
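In code terms, the screen is a simple disjunction of the criteria just listed. A toy sketch of flagging portfolio companies against the GCEL criteria (the field names are hypothetical, and this is an illustration, not Urgewald's implementation):

```python
# Illustrative portfolio screen against the three GCEL criteria above.
# Field names are assumptions; real screening uses Urgewald's database.

def meets_gcel_criteria(company: dict) -> bool:
    """Flag a company if it trips any of the three GCEL criteria."""
    share_threshold = 0.30  # >30% of revenue or power generation from coal
    return (
        company.get("coal_revenue_share", 0) > share_threshold
        or company.get("coal_power_share", 0) > share_threshold
        or company.get("is_top_producer_or_operator", False)
        or company.get("plans_coal_expansion", False)
    )

portfolio = [
    {"name": "UtilityCo", "coal_power_share": 0.45, "plans_coal_expansion": False},
    {"name": "DiversifiedCo", "coal_revenue_share": 0.05, "plans_coal_expansion": True},
]
flagged = [c["name"] for c in portfolio if meets_gcel_criteria(c)]
print(flagged)  # both trip a criterion: ['UtilityCo', 'DiversifiedCo']
```

Note that the expansion criterion catches companies like DiversifiedCo even when coal is a tiny share of revenue, which is exactly how the GCEL picks up the non-utility plant developers discussed below.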

According to ET Index research: “The Global Coal Exit List produced by Urgewald is an excellent tool for understanding asset stranding and energy transition risks. The tool provides one of the most comprehensive and in-depth databases for coal generation and expansion.”

And the insurer Zurich says: “The GCEL is a valuable input to implement our coal policy on the insurance side as it is the only data source that also assesses private companies.”


Overview of the 2019 GCEL

The database provides key statistics on 746 companies and over 1,400 subsidiaries, whose activities range from coal exploration and mining, coal trading and transport, to coal power generation and manufacturing of equipment for coal plants. Most of the information in the database is drawn from original company sources such as annual reports, investor presentations and company websites. All in all, the companies listed in the GCEL represent 89% of the world’s thermal coal production and almost 87% of the world’s installed coal-fired capacity.


New Coal Plants Totaling over 579 GW in the Pipeline

While the global coal plant pipeline shrank by over 50% in the past 3 years, new coal plants are still planned or under development in 60 countries around the world. If built, these projects would add over 579 GW to the global coal plant fleet, an increase of almost 29%.[1] The 2019 GCEL identifies 259 coal plant developers. Over half of these companies are not traditional coal-based utilities, and are therefore often missed by financial institutions’ coal exclusion policies. Typical examples are companies like the Hong Kong-based textiles producer Texhong – which plans to build a 2,100 MW coal power station in Vietnam – or very diversified companies like Japan’s Sumitomo Corporation, which is developing new coal plants in Bangladesh, Vietnam and Indonesia. As Kimiko Hirata from the Japanese NGO Kiko Network notes, “All five of Japan’s largest trading houses are on the Global Coal Exit List as they are still building new coal-fired capacity either at home or abroad.”
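The “almost 29%” figure follows directly from the installed base given in footnote [1]:

```latex
\[
  \frac{579\ \text{GW planned}}{2{,}026\ \text{GW installed}} \approx 0.286 \approx 29\%
\]
```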

One of the most unexpected coal plant developers is the Australian company “Shine Energy”, which describes its mission as “helping Australia transition to a renewable future.” At the same time, Shine Energy is planning to develop a 1,000 MW coal power station “to return one of North Queensland’s oldest coal mining towns to its former glory,” as an Australian news outlet puts it.[2]

Coal Mining and Coal Infrastructure Expansion

Over 200 companies on the GCEL are still expanding their coal mining activities, often in the face of enormous resistance from local communities. While some large coal miners such as South32 have begun offloading their thermal coal assets, most of the world’s largest coal producers are still in expansion mode. Out of the 30 companies which account for half of the world’s thermal coal production, 24 are still pursuing plans to increase their coal production. Glencore, the world’s 8th largest coal producer, was recently applauded by climate-concerned investors for agreeing to cap its annual coal production at 150 million tons. In fact, this still leaves plenty of room for a production increase: Glencore’s 2018 coal production was 129 million tons.

In many regions of the world, the development of new coal mines depends on the development of new coal transport infrastructure such as railways or coal port terminals. 34 companies on the GCEL are identified as coal infrastructure expansionists. These include the Indian company Essar, which is building a coal export terminal in Mozambique, and Russia’s VostokCoal, which is building two coal port terminals on the fragile Taymyr peninsula in order to begin mining one of the world’s largest hard coal deposits in the Russian Arctic.

The fact that Adani Ports & Special Economic Zones – a subsidiary of the coal-heavy Adani Group – was recently able to raise US$ 750 million through a bond issue shows that financial institutions still have a blind spot regarding the role logistics and transport companies play in the expansion of the coal industry.[3] “Our research shows that the expansion of coal mining, coal transport and coal power all go hand in hand. If we want to avoid throwing more fuel into the fire, the finance industry needs to follow the example set by Crédit Agricole,” says Schuecking. In its new policy from June 2019, the French bank announced that it will end its relationship with all clients that are planning to expand thermal coal mining, coal-fired power capacity, coal transport infrastructure or coal trading.[4]


Conclusion

From Poland to the Philippines and from Mozambique to Myanmar, local communities are challenging new coal projects in the courts and on the streets. And on September 20th, millions of climate strikers from around the world are calling for an end to coal and other fossil fuels. The Global Coal Exit List shows that the problem of dealing with coal is finite: there are 746 companies that the finance world needs to leave behind to make the Paris goals achievable.


Interesting Facts & Stats around the Global Coal Industry

Companies:

Out of the 746 parent companies listed on the GCEL, 361 are either running or developing coal power plants, 237 are involved in coal mining, and 148 are primarily service companies active in areas such as coal trading, coal processing, coal transport and the provision of specialized equipment for the coal industry.[5]

The 4 countries with the most coal companies are China (164), India (87), the United States (82) and Australia (51).

The world’s largest thermal coal producer is Coal India Limited. Last year, the company produced 534 million tons, accounting for 8% of world thermal coal production.[6] The second largest coal producer is China Energy Investment Corporation with 510 million tons.

27 companies account for half of the world’s installed coal-fired capacity. The largest coal plant operator worldwide is China Energy Investment Corporation with 175,000 MW installed coal-fired capacity.

The world’s largest coal plant developer is India’s National Thermal Power Corporation (NTPC). It plans to build 30,541 MW of new coal-fired capacity.

One of the largest equipment providers for the coal plant pipeline is General Electric. It is involved in the construction of new coal plants in 18 countries, half of which are frontier countries with little or no coal-fired capacity as of yet. Another important manufacturer and EPC[7] contractor is Doosan Heavy Industries & Construction from South Korea. Doosan Heavy is involved in at least 8 coal power plants totalling 10.9 GW under construction in South Korea, Indonesia, Vietnam and Botswana.


Countries:

Plans for new coal plants threaten to push 26 “frontier countries” into a cycle of coal-dependency.[8]

The 10 countries with the largest coal plant pipelines are: China (226,229 MW), India (91,540 MW), Turkey (34,436 MW), Vietnam (33,935 MW), Indonesia (29,416 MW), Bangladesh (22,933 MW), Japan (13,105 MW), South Africa (12,744 MW), the Philippines (12,014 MW) and Egypt (8,640 MW).

In the European Union, the country with the largest coal power expansion plans is Poland (6,870 MW). Of the 23 companies with coal mining expansion plans in Europe, 7 are expanding in Poland and 9 in Turkey. In both countries, opposition to new coal projects is strong. Villagers in Turkey’s Muğla region are protesting against the plans of Bereket, IC Holding and Limak Energy to extend the lifetimes of polluting coal plants and expand the area’s lignite mines.

Japan is the country with the highest percentage of coal power expansion projects overseas. Out of 30 GW of coal-fired capacity planned by Japanese companies, 51% are being developed abroad.

The country with the largest absolute coal power expansion plans abroad is, however, China. Chinese companies are planning to build new coal plants totalling 54 GW in 20 countries. These account for 24% of the total coal power capacity being developed by Chinese companies.

Out of 99 coal companies operating in Indonesia, 63 are headquartered overseas.


Financial Institutions:

In August 2019, Indonesian citizens filed a court case in South Korea, asking for an injunction to prevent South Korea’s financial institutions from financing the construction of two new coal power plants near Jakarta.

In spite of its new “climate speak”, BlackRock is the world’s largest institutional investor in coal plant developers. In December 2018, it held shares and bonds worth over US$ 11 billion in these companies.

In June 2019, the Norwegian Government Pension Fund Global took a further step down the coal divestment path by excluding all companies which operate over 10 GW of coal-fired capacity or produce over 20 million tons of coal annually. The Pension Fund now applies two of the three GCEL criteria to its portfolio. Its total coal divestment since 2015 is estimated at around € 9 billion.

26 commercial banks have committed to no longer participate in project finance deals for new coal plants. 9 major banks have also committed to ending corporate finance for clients whose coal share of revenue or coal share of power generation is above a designated threshold.[9]

Philippine NGOs have just launched a campaign to move the Bank of the Philippine Islands (BPI) to also stop financing coal.

16 insurers have ended or severely restricted underwriting for coal projects. Reinsurers representing 45% of the global reinsurance market have taken significant coal divestment steps.[10]

The first financial institutions have begun announcing dates for a complete phase-out of coal from their investments. Among them are, for example, KLP, Storebrand, Nationale Nederlanden, Allianz, Commonwealth Bank of Australia and Crédit Agricole.

________
[1] The world’s installed coal-fired capacity is currently 2,026 GW.
[2] https://www.abc.net.au/news/2019-06-18/billion-dollar-indigenous-led-power-station-revive-qld-coal-town/11194306
[3] https://www.financeasia.com/article/adani-ports-extends-indias-us-dollar-bond-spree/452775
[4] https://www.gtreview.com/news/sustainability/84497/
[5] The numbers are based on companies’ primary role in the coal industry. Many companies on the GCEL are active in two or even all three of these categories.
[6] According to the IEA, world thermal coal production was 6,780 Mt in 2018.
[7] Engineering, Procurement and Construction
[8] Out of these 26 countries, 15 have no coal-fired capacity as of yet and 11 have 600 MW or less.
[9] https://www.banktrack.org/campaign/coal_banks_policies
[10] https://unfriendcoal.com/wp-content/uploads/2018/11/Scorecard-2018-report-final-web-version.pdf

Oversight can make things worse: In the US, 42% of public infrastructure projects report delays or cost overruns; oversight increases delays by 6.1%–13.8% and overruns by 1.4%–1.6%

Oversight and Efficiency in Public Projects: A Regression Discontinuity Analysis. Eduard Calvo, Ruomeng Cui, Juan Camilo Serpa. Management Science, Sep 10 2019. https://doi.org/10.1287/mnsc.2018.3202

Abstract: In the United States, 42% of public infrastructure projects report delays or cost overruns. To mitigate this problem, regulators scrutinize project operations. We study the effect of oversight on delays and overruns with 262,857 projects spanning 71 federal agencies and 54,739 contractors. We identify our results using a federal bylaw: if the project’s budget is above a cutoff, procurement officers actively oversee the contractor’s operations; otherwise, most operational checks are waived. We find that oversight increases delays by 6.1%–13.8% and overruns by 1.4%–1.6%. We also show that oversight is most obstructive when the contractor has no experience in public projects, is paid with a fixed-fee contract with performance-based incentives, or performs a labor-intensive task. Oversight is least obstructive—or even beneficial—when the contractor is experienced, paid with a time-and-materials contract, or conducts a machine-intensive task.
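The identification strategy lends itself to a compact illustration: projects with budgets just above the federal cutoff receive active oversight, those just below mostly do not, so the jump in outcomes at the threshold estimates the effect of oversight. Below is a minimal local-linear regression discontinuity sketch, not the authors' specification; CUTOFF, the bandwidth, and the column names are placeholders, since the abstract does not report the dollar threshold:

```python
# Sharp RD sketch around a budget cutoff. CUTOFF, bandwidth, and column
# names are assumptions for illustration, not values from the paper.
import numpy as np
import statsmodels.api as sm

def rd_estimate(budget: np.ndarray, outcome: np.ndarray,
                cutoff: float, bandwidth: float) -> float:
    """Local linear RD: the jump in the outcome (e.g., percentage delay)
    for projects crossing the oversight threshold, estimated within a
    window of +/- bandwidth around the cutoff."""
    x = budget - cutoff
    keep = np.abs(x) <= bandwidth
    x, y = x[keep], outcome[keep]
    treated = (x >= 0).astype(float)  # above cutoff -> actively overseen
    # Separate slopes on each side of the cutoff; the coefficient on
    # `treated` is the discontinuity at the threshold.
    X = sm.add_constant(np.column_stack([treated, x, treated * x]))
    fit = sm.OLS(y, X).fit()
    return fit.params[1]

# effect = rd_estimate(budgets, pct_delays, cutoff=CUTOFF, bandwidth=BW)
# A positive `effect` would mirror the paper's finding that oversight
# increases delays for projects just above the threshold.
```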