Monday, October 23, 2017

Death and failure: A cautionary tale of death anxiety and alternate causality

Death and failure: A cautionary tale of death anxiety and alternate causality. Christopher Michael Jackson. A thesis submitted for the degree of Doctor of Philosophy at the University of Otago, Dunedin, New Zealand. September, 2017. https://otago.ourarchive.ac.nz/bitstream/handle/10523/7590/JacksonChristopherM2017PhD.pdf?sequence=1&isAllowed=y

Abstract

Many believe that the fear of death is central to the human experience. Theoretically, this fear stems from the human cognitive capacity to project ourselves into the future and contemplate the world without us in it. Awareness—either conscious or unconscious—of our mortality is the central cause of what researchers call death anxiety, which we manage on a day-to-day basis by protecting our cultural worldviews. These views (which range in diversity from a belief in God to the belief that America is the greatest country on earth) act as a crutch to lean on when confronted with terrifying reminders of our mortality.

The data on the fear of death and death anxiety are inconsistent. Some data suggest that we are afraid of death, but the majority of data suggest that death anxiety is low. The leading thanatocentric theory, Terror Management Theory (TMT), claims that we do not show death anxiety because we are well practised at suppressing the terrifying thoughts of death; however, this claim is non-falsifiable.

The present research does its best to test these claims against the competing theory, the Meaning Maintenance Model (MMM), which stipulates that thoughts of our mortality threaten our meaning framework. We know how the world works and reminders of death make us question that certainty, although death is only one example of a thing that makes us question ourselves. This thesis uses the inconsistent data as a starting point and asks, "Are we actually afraid of death?" in two parts. Part one (which includes Studies 1, 2, and 3) proposes the question philosophically and empirically. Study 1 directly asked participants what they were afraid of. 'Death' was listed by approximately 27% of the respondents ('One's own death' was listed by approximately 21%) and death anxiety scores were moderate. 'Failure' was the most prevalent fear: it was listed by approximately 61% of the participants. Study 2, more indirectly, analysed written reflections on their mortality. When asked how their own death made them feel, participants wrote more negative emotional words than positive emotional words. Both positive and negative emotional words were more prevalent when writing about death than when writing about neutral controls. Study 3 had participants speak about their own deaths—or about a neutral television condition—in front of a camera. Facial recognition software was unable to detect any meaningful emotional differences between those two conditions. These studies looked for (and failed to find) direct signs of death anxiety. Some indirect signs of death anxiety were found (e.g., increased negative emotional word usage), but nothing that suggests a ubiquitous and universal fear of death.
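For readers unfamiliar with the word-count approach used in Study 2, here is a minimal sketch of how positive and negative emotion words in a written reflection might be tallied, LIWC-style. The word lists and the sample text are illustrative assumptions for the sketch, not the dictionaries or data used in the thesis.

```python
import re

# Illustrative emotion dictionaries (assumed for this sketch, not the thesis's own lists).
POSITIVE = {"calm", "peace", "peaceful", "hope", "relief", "happy", "love"}
NEGATIVE = {"afraid", "fear", "scared", "anxious", "sad", "dread", "worry"}

def emotion_counts(text: str) -> dict:
    """Tally positive and negative emotion words in a written reflection."""
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = len(words) or 1
    return {"positive": pos, "negative": neg,
            "positive_pct": 100 * pos / total,
            "negative_pct": 100 * neg / total}

# Hypothetical reflection on one's own death.
sample = "Thinking about my own death makes me anxious and a little sad, though I hope for peace."
print(emotion_counts(sample))
```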

Part two, which includes Studies 4 and 5, explores an alternate cause of death anxiety from Study 1: failure. The final two studies explore the mediated relationship between personal failure, the need for closure, and death anxiety. Closure is a construct that links TMT and the MMM. Study 4 asked participants to think about personal life successes or personal life failures and then complete need for closure and death anxiety scales. Need for Closure (NFC) mediated the relationship: participants that thought about life failures showed an increased need for closure, which subsequently led to an increase in death anxiety. Study 5 tested the relationship between death and failure by adding a mortality salience condition to the previous study. This final study failed to replicate the findings of Study 4. It did, however, find a link between NFC and death anxiety.
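As a rough illustration of the mediation logic in Study 4 (failure prime increasing NFC, which in turn increases death anxiety), here is a minimal bootstrap of an indirect effect on simulated data. The variable names, effect sizes, and simulated dataset are assumptions for the sketch only; they are not the thesis's data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (purely illustrative): failure prime (0/1), need for closure, death anxiety.
n = 200
failure = rng.integers(0, 2, n).astype(float)
nfc = 0.5 * failure + rng.normal(0, 1, n)                   # path a
anxiety = 0.4 * nfc + 0.0 * failure + rng.normal(0, 1, n)   # path b, near-zero direct effect

def slopes(y, *predictors):
    """OLS slope coefficients (intercept fitted, then dropped)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Bootstrap the indirect effect a*b.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a = slopes(nfc[idx], failure[idx])[0]
    b = slopes(anxiety[idx], nfc[idx], failure[idx])[0]
    boot.append(a * b)

ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b ~ {np.mean(boot):.3f}, 95% CI [{ci_lo:.3f}, {ci_hi:.3f}]")
```

If the bootstrap confidence interval for a*b excludes zero, the simulated data are consistent with mediation; a failure to replicate, as in Study 5, would correspond to an interval that includes zero.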

Taken together, these studies reiterate that the terror from TMT seems to be missing. Failure was the most commonly cited fear, though it is unclear whether death and failure are related. The relationship between NFC and death anxiety is the most promising finding. The implications of these relationships as they relate to existing theories on death and dying are discussed.

Perceived Shared Condemnation Intensifies Punitive Moral Emotions

Perceived Shared Condemnation Intensifies Punitive Moral Emotions. Naoki Konishi et al. Sci Rep. 2017; 7: 7289. Published online 2017 Aug 4. doi:  10.1038/s41598-017-07916-z

Abstract: Punishment facilitates large-scale cooperation among humans, but how punishers, who incur an extra cost of punishment, can successfully compete with non-punishers, who free-ride on the punisher’s policing, poses an evolutionary puzzle. One answer is by coordinating punishment to minimise its cost. Notice, however, that in order to effectively coordinate their punishment, potential punishers must know in advance whether others would also be willing to punish a particular norm violator. Such knowledge might hinder coordination by tempting potential punishers to free-ride on other punishers. Previous research suggests that moral emotions, such as moral outrage and moral disgust, serve as a commitment device and drive people to carry out the costly act of punishment. Accordingly, we tested whether the perception of socially shared condemnation (i.e., knowledge that others also condemn a particular violator) would amplify moral outrage and moral disgust, and diminish empathy for the violator. Study 1 (scenario-based study) revealed that perceived shared condemnation was correlated positively with moral outrage and moral disgust, and negatively with empathy. Study 2 experimentally demonstrated that information indicating that others also condemn a particular norm violation amplified moral outrage. Lastly, Study 3 (autobiographical recall study) confirmed the external validity of the finding.

Limiting Consumer Choice, Expanding Costly Litigation: An Analysis of the CFPB Arbitration Rule

US Treasury Dept
Limiting Consumer Choice, Expanding Costly Litigation: An Analysis of the CFPB Arbitration Rule
Oct 23 2017
https://www.treasury.gov/press-center/press-releases/Pages/sm0186.aspx

WASHINGTON – The U.S. Treasury Department today released a report that examines the Consumer Financial Protection Bureau’s (CFPB) arbitration rule. The Treasury report delves into the analysis CFPB used to prohibit mandatory arbitration clauses.  It outlines important limitations to the data behind CFPB’s rule and explains that CFPB did not appropriately consider whether prohibiting arbitration clauses would advance consumer protection or serve the public interest.

The Treasury report found that:
  • The CFPB’s rule will impose extraordinary costs—generating more than 3,000 additional class action lawsuits over the next five years, imposing more than $500 million in additional legal defense fees, and transferring $330 million to plaintiffs’ lawyers;
  • The CFPB’s data show that the vast majority of class action lawsuits deliver no relief to the class—and that consumers very rarely claim relief available to them;
  • The CFPB did not show that its rule will achieve a necessary increase in compliance with the federal consumer financial laws, despite the rule’s high costs; and
  • The CFPB failed to consider less onerous alternatives to its ban on mandatory arbitration clauses across market sectors.


---
Nearly a century ago, Congress made private agreements to resolve disputes through arbitration “valid, irrevocable, and enforceable” under the Federal Arbitration Act. This longstanding federal policy in favor of private dispute resolution serves the twin purposes of economic efficiency and freedom of contract. In the Dodd-Frank Act, Congress authorized the Consumer Financial Protection Bureau to limit or ban the use of arbitration agreements in consumer financial contracts only if the Bureau concludes that its restrictions are “in the public interest and for the protection of consumers.”

Against this background, in July 2017, the Bureau issued its final rule (the “Rule”) prohibiting consumers and providers of financial products and services from agreeing to resolve future disputes through arbitration rather than class-action litigation. The Rule follows the Bureau’s study of arbitration, summarized in a 2015 report to Congress. The Arbitration Study attempted an empirical analysis of both the arbitral awards and class action settlements that consumers obtained for a variety of claims. But the data the Bureau considered were limited in ways that raise serious questions about its conclusions and undermine the foundation of the Rule itself. More fundamentally, the Bureau failed to meaningfully evaluate whether prohibiting mandatory arbitration clauses in consumer financial contracts would serve either consumer protection or the public interest—its two statutory mandates. Neither the Study nor the Rule makes that requisite showing. Instead, on closer inspection, the Study and the Rule demonstrate that:

The Rule will impose extraordinary costs—based on the Bureau’s own incomplete estimates. The Bureau projects that the Rule will generate more than 3,000 additional class action lawsuits over the next five years. Meanwhile, affected businesses will spend more than $500 million in additional legal defense fees, $330 million in payments to plaintiffs’ lawyers, and $1.7 billion in additional settlements. Remarkably, the Bureau’s estimates do not account for expected increases in state court litigation. Affected businesses are unlikely to simply absorb these new financial burdens. The Office of the Comptroller of the Currency recently reported that the Bureau’s own data show that the Rule’s costs will very likely be passed through to consumers in the form of higher borrowing costs for credit card users, among other burdens.

The vast majority of consumer class actions deliver zero relief to the putative members of the class. According to the Bureau’s own data, only 13% of consumer class action lawsuits filed result in class-wide recovery—meaning that in 87% of cases, either no plaintiffs or only named plaintiffs receive relief of any kind. The Bureau projects that, out of the 3,000 additional class actions the Rule will generate, four in five cases will yield no recovery for the putative class of consumers. In the fraction of class actions that generate class-wide relief, few affected consumers demonstrate interest in recovery. On average, only 4% of plaintiffs entitled to claim class settlement funds actually do so. This suggests that consumers value class action litigation far less than the Bureau believes they should. This is not surprising given that plaintiffs who do claim funds from class action settlements receive, on average, $32.35 per person.

The Rule will effect a large wealth transfer to plaintiffs’ attorneys. On average, plaintiff-side attorneys’ fees account for approximately 31% of the payments that plaintiffs receive from class action settlements—and in many types of cases, much more. In an average case, plaintiffs’ attorneys collect more than $1 million; actual plaintiffs receive $32 each. The Bureau’s data indicate that the Rule will transfer an additional $330 million over five years from affected businesses to the plaintiffs’ bar.

The Bureau failed reasonably to consider whether improved disclosures regarding arbitration would serve consumer interests better than its regulatory ban. The Bureau’s own data show that the financial marketplace offers choices to consumers regarding arbitration; the vast majority of contracts in the major market segments do not contain mandatory arbitration clauses. If the Bureau is concerned that consumers are unaware of arbitration clauses, more prominent disclosure of such clauses would be a lower cost, choice-preserving means to advance consumer protection.

The Bureau did not adequately assess the share of class actions that are without merit. Courts and commentators have long recognized that defendants settle even meritless lawsuits. As Justice Ruth Bader Ginsburg has explained, the class mechanism “places pressure on the defendant to settle even unmeritorious claims.” The Bureau overlooked the force of this argument and failed to assess the costs of meritless litigation that the Rule will generate.

The Bureau offered no foundation for its assumption that the Rule will improve compliance with federal consumer financial laws. The Bureau “assumes that the current level of compliance in consumer finance markets is generally sub-optimal” and insists that the Rule will protect consumers by remedying that assumed compliance gap. But after years of study, the Bureau has identified no evidence indicating that firms that do not use arbitration clauses treat their customers better or have higher levels of compliance with the law. As a result, the Bureau cannot credibly claim that the Rule would yield more efficient levels of compliance.

In view of these defects, it is clear that the Rule does not satisfy the statutory prerequisites for banning the use of arbitration agreements under the Dodd-Frank Act. The Bureau has not made a reasoned showing that increased consumer class action litigation will result in a net benefit to consumers or to the public as a whole. Based on the Bureau’s own data, it is far more likely that the Rule will generate massive economic costs—borne by businesses and consumers alike—that dwarf the speculative benefits of the Bureau’s theorized increase in compliance.
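The report's headline numbers can be sanity-checked with simple arithmetic. The short sketch below recomputes the implied figures from the numbers quoted above; it is a back-of-the-envelope illustration, not the Bureau's or Treasury's own model.

```python
# Back-of-the-envelope check of the figures quoted in the Treasury summary above.
new_class_actions = 3_000           # additional suits over five years (Bureau projection)
defense_fees      = 500_000_000     # additional legal defense fees ($)
plaintiffs_bar    = 330_000_000     # projected transfer to plaintiffs' attorneys ($)
settlements       = 1_700_000_000   # additional settlements ($)

no_recovery_share = 4 / 5           # "four in five" new cases projected to yield no class-wide recovery
claim_rate        = 0.04            # share of eligible class members who actually claim funds
avg_claim         = 32.35           # average payment per claiming class member ($)

print(f"New cases projected to yield no class-wide recovery: {new_class_actions * no_recovery_share:.0f}")
print(f"Total quantified new cost over five years: "
      f"${(defense_fees + plaintiffs_bar + settlements) / 1e9:.2f} billion")
# Expected payout to a typical eligible class member, given the 4% claim rate:
print(f"Expected recovery per eligible class member: ${claim_rate * avg_claim:.2f}")
```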

Does trait trustingness affect the effect of fishy (vs. unpleasant and neutral) odors on suspicion, creative reasoning, and perceptions of others' trustworthiness?

In the nose, not in the beholder: Embodied cognition effects override individual differences. Prem Sebastian, Leah Kaufmann, and Xochitl de la Piedad Garcia. https://www.researchgate.net/publication/313997207_In_the_nose_not_in_the_beholder_Embodied_cognition_effects_override_individual_differences


Lee and Schwarz (2012) found a relationship between fishy odor and suspicion.
• Study 2: Students in a hallway sprayed with fish smell invested significantly less in an economic game relying on trust compared to students in either a fart spray or control condition.

Lee, Kim and Schwarz (2015) also found that a fishy odor affected critical reasoning via suspicion.
• Study 2: Participants exposed to an incidental fishy odor were more likely to utilise negative hypothesis testing, and to avoid confirmation bias, than those in a control condition, as demonstrated by performance on the Wason task.

These studies demonstrated the effect of the embodied metaphor of fishiness.

To date, embodied cognition research has focussed on effects observed in moment-to-moment bodily states. However, it seems likely that individual differences affect the extent to which people are influenced by these bodily states. For example, is the degree to which fishy odor motivates suspicion a function of a participant’s own trustingness?

The interaction between individual differences and embodied effects has yet to be considered; demonstrating one would provide the first evidence of an interaction between metaphorical effects and individual differences.

The aim of the current study was to examine whether trait trustingness affected the effect of fishy (vs. unpleasant and neutral) odors on suspicion, creative reasoning, and perceptions of others' trustworthiness.
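A moderation question like this (does trait trustingness change the effect of odor condition on suspicion?) is usually tested with an odor-by-trustingness interaction term. The sketch below shows that design on simulated data; the condition coding, variable names, and effect sizes are illustrative assumptions, not the authors' materials or analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Simulated design: odor condition (1 = fishy, 0 = other) and standardised trait trustingness.
fishy = rng.integers(0, 2, n).astype(float)
trusting = rng.normal(0, 1, n)

# Illustrative data-generating model: fishy odor raises suspicion,
# less so for highly trusting participants (a negative interaction).
suspicion = 0.6 * fishy - 0.2 * trusting - 0.3 * fishy * trusting + rng.normal(0, 1, n)

# Moderated regression: suspicion ~ fishy + trusting + fishy:trusting
X = np.column_stack([np.ones(n), fishy, trusting, fishy * trusting])
coefs, *_ = np.linalg.lstsq(X, suspicion, rcond=None)
for name, b in zip(["intercept", "fishy", "trusting", "fishy x trusting"], coefs):
    print(f"{name:>18}: {b: .3f}")
```

A reliably non-zero interaction coefficient would indicate that the embodied (fishy) effect depends on the individual difference, which is the question the poster raises.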

The extent and causes of academic text recycling or ‘self-plagiarism’

The extent and causes of academic text recycling or ‘self-plagiarism’. S.P.J.M. Horbach and W. Halffman. Research Policy, https://doi.org/10.1016/j.respol.2017.09.004

Highlights
•    Text recycling is a common form of dubious behaviour in journal publications.
•    The extent of text recycling varies considerably between research fields.
•    The extent of text recycling is positively related to an author’s productivity.
•    Problematic text recycling occurs more often in articles with few co-authors.
•    Existence of editorial policy statements reduces the extent of text recycling.

Abstract: Among the various forms of academic misconduct, text recycling or ‘self-plagiarism’ holds a particularly contentious position as a new way to game the reward system of science. A recent case of alleged ‘self-plagiarism’ by the prominent Dutch economist Peter Nijkamp has attracted much public and regulatory attention in the Netherlands. During the Nijkamp controversy, it became evident that many questions around text recycling have only partly been answered and that much uncertainty still exists. While the conditions of fair text reuse have been specified more clearly in the wake of this case, the extent and causes of problematic text recycling remain unclear. In this study, we investigated the extent of problematic text recycling in order to obtain understanding of its occurrence in four research areas: biochemistry & molecular biology, economics, history and psychology. We also investigated some potential reasons and motives for authors to recycle their text, by testing current hypotheses in scholarly literature regarding the causes of text recycling. To this end, an analysis was performed on 922 journal articles, using the Turnitin plagiarism detection software, followed by close manual interpretation of the results. We observed considerable levels of problematic text recycling, particularly in economics and psychology, while it became clear that the extent of text recycling varies substantially between research fields. In addition, we found evidence that more productive authors are more likely to recycle their papers. In addition, the analysis provides insight into the influence of the number of authors and the existence of editorial policies on the occurrence of problematic text recycling.
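The study relied on Turnitin followed by close manual interpretation. For readers curious how text reuse between two papers can be flagged at all, here is a minimal word n-gram overlap sketch; it is a generic illustration, not the Turnitin algorithm or the authors' protocol, and the snippets are hypothetical.

```python
def ngrams(text: str, n: int = 5) -> set:
    """Return the set of word n-grams in a text (a crude fingerprint)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str, n: int = 5) -> float:
    """Jaccard similarity of the two texts' n-gram sets."""
    A, B = ngrams(a, n), ngrams(b, n)
    return len(A & B) / len(A | B) if A | B else 0.0

# Hypothetical snippets from two papers by the same author.
paper_1 = "We collected survey data from 500 participants using a stratified sampling design."
paper_2 = "We collected survey data from 480 participants using a stratified sampling design."
print(f"5-gram overlap: {overlap(paper_1, paper_2):.2f}")
```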

Sunday, October 22, 2017

Study of birds shows that reductions in air pollution started before 1950, as we already knew but couldn't measure

Bird specimens track 135 years of atmospheric black carbon and environmental policy. Shane G. DuBay and Carl C. Fuldner. Proceedings of the National Academy of Sciences, Early Edition, doi: 10.1073/pnas.1710239114



Significance: Emission inventories of major climate-forcing agents like black carbon suffer high uncertainty for the early industrial era, thereby limiting their utility for extracting past climate sensitivity to atmospheric pollutants. We identify bird specimens as incidental records of atmospheric black carbon, filling a major historical sampling gap. We find that prevailing emission inventories underestimate black carbon levels in the United States through the first decades of the 20th century, suggesting that black carbon’s contribution to past climate forcing may also be underestimated. This study builds toward a robust, spatially dynamic inventory of atmospheric black carbon, highlighting the value of natural history collections as a resource for addressing present-day environmental challenges.

Abstract: Atmospheric black carbon has long been recognized as a public health and environmental concern. More recently, black carbon has been identified as a major, ongoing contributor to anthropogenic climate change, thus making historical emission inventories of black carbon an essential tool for assessing past climate sensitivity and modeling future climate scenarios. Current estimates of black carbon emissions for the early industrial era have high uncertainty, however, because direct environmental sampling is sparse before the mid-1950s. Using photometric reflectance data of >1,300 bird specimens drawn from natural history collections, we track relative ambient concentrations of atmospheric black carbon between 1880 and 2015 within the US Manufacturing Belt, a region historically reliant on coal and dense with industry. Our data show that black carbon levels within the region peaked during the first decade of the 20th century. Following this peak, black carbon levels were positively correlated with coal consumption through midcentury, after which they decoupled, with black carbon concentrations declining as consumption continued to rise. The precipitous drop in atmospheric black carbon at midcentury reflects policies promoting burning efficiency and fuel transitions rather than regulating emissions alone. Our findings suggest that current emission inventories based on predictive modeling underestimate levels of atmospheric black carbon for the early industrial era, suggesting that the contribution of black carbon to past climate forcing may also be underestimated. These findings build toward a spatially dynamic emission inventory of black carbon based on direct environmental sampling.
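As a loose illustration of the kind of time series the authors extract, the sketch below fits a smooth trend to hypothetical specimen reflectance measurements over collection year (lower reflectance meaning sootier plumage). Both the data and the cubic smoother are assumptions for the sketch; the paper's actual measurements and statistical model differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical specimens: collection year and breast-feather reflectance (%).
years = rng.integers(1880, 2016, 300).astype(float)
# Illustrative pattern: reflectance dips around 1910 (peak soot) and recovers after mid-century.
signal = 45 - 20 * np.exp(-((years - 1910) / 30.0) ** 2)
reflectance = signal + rng.normal(0, 3, years.size)

# Fit a simple cubic trend as a stand-in for the paper's more careful modelling.
trend = np.poly1d(np.polyfit(years, reflectance, deg=3))
for y in (1880, 1910, 1950, 2015):
    print(f"{y}: fitted reflectance ~ {trend(y):.1f}%")
```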