Saturday, March 23, 2019

How to define and measure the extent to which human cognition is rational

Cognitive Success: A Consequentialist Account of Rationality in Cognition. Gerhard Schurz, Ralph Hertwig. Topics in Cognitive Science, January 2019. https://doi.org/10.1111/tops.12410

Abstract: One of the most discussed issues in psychology—presently and in the past—is how to define and measure the extent to which human cognition is rational. The rationality of human cognition is often evaluated in terms of normative standards based on a priori intuitions. Yet this approach has been challenged by two recent developments in psychology that we review in this article: ecological rationality and descriptivism. Going beyond these contributions, we consider it a good moment for psychologists and philosophers to join forces and work toward a new foundation for the definition of rational cognition. We take a first step in this direction by proposing that the rationality of both cognitive and normative systems can be measured in terms of their cognitive success. Cognitive success can be defined and gauged in terms of two factors: ecological validity (the system's validity in conditions in which it is applicable) and the system's applicability (the scope of conditions under which it can be applied). As we show, prominent systems of reasoning—deductive reasoning, Bayesian reasoning, uncertain conditionals, and prediction and choice—perform rather differently on these two factors. Furthermore, we demonstrate that conceptualizing rationality according to its cognitive success offers a new perspective on the time‐honored relationship between the descriptive (“is”) and the normative (“ought”) in psychology and philosophy.

1 How psychologists measure rational cognition

For a number of decades, psychologists have typically employed only one experimental method to study whether human cognition is rational (Lopes, 1991). Their approach—devising two or more alternative hypotheses and a crucial experiment with alternative possible outcomes, each excluding one or more of the hypotheses—has been interpreted as enabling a strong inference (Platt, 1964). In research on the rationality of human cognition this means that the experimental set‐up has been designed such that the data, people's cognitive behavior (reasoning, inference, judgment, or choice), support one of two possible results: Either individuals behave in accord with the chosen benchmark of rationality, or their cognitive behavior, measured against the benchmark, is irrational (and sometimes either deviation from the benchmark has been treated as a sign of irrationality as, for instance, in the case of the conjectures that people neglect base rates or pay too much attention to them or that people suffer from both the gambler's fallacy and the hot‐hand fallacy; see Hertwig & Todd, 2000). Crucially, the benchmarks against which these evaluations are made—and human cognition is found to be rational or not—are commonly assumed to be incontrovertible. That is, the benchmarks are understood to be relatively universal, purpose invariant, content free, and domain general. Their claim to legitimacy often rests on “a priori intuitions”—a notion to which we return later. One of these seemingly incontrovertible benchmarks is the canon of classical logic. Take, for illustration, Wason's influential work on human reasoning (e.g., Wason, 1959, 1960). Far from mincing their words, Wason and Johnson–Laird argued that

    a fallacious inference, in fact, is in some ways like both an optical illusion and a pathological delusion. … and like most pathological delusions, we have encountered cases in which the subjects seem to reveal a stubborn resistance to enlightenment. (Wason & Johnson‐Laird, 1972, p. 6)

This damning verdict is especially notable because long before Wason, other psychologists concerned with the investigation of reasoning processes had strongly opposed the use of logic to define rational thought. An example is Wilhelm Wundt, who argued no less unequivocally that

    at first it was thought that the surest way would be to take as a foundation for the psychological analysis of the thought‐processes the laws of logical thinking, as they had been laid down from the time of Aristotle… . These norms … only apply to a small part of the thought‐processes. Any attempt to explain, out of these norms, thought … can only lead to an entanglement of the real facts in a net of logical reflections. (1912/1973, pp. 148–149)

Wundt doubted that classical logic could serve as the bedrock for descriptive theories of reasoning beyond a “small part” of cognition. By extension, he rejected logic's normative claim for the bulk of cognition. But even the greats could not agree. Jean Piaget, for instance, brushed Wundt's view aside. Inhelder and Piaget (1958) proposed that the mental structures required to process experience develop in a stage‐like progression from infancy to adolescence. Once children reach the highest stage, they possess “Euclid's understanding of geometry, Newton's … understanding of space, time, and causality, and Kant's understanding of logic” (Flanagan, 1991, p. 145). For these developmental psychologists, logic was a key descriptive and normative foundation of the mind's highest stage of reasoning. Moreover, cognitive psychologists and scientists, from Bruner (Bruner, Goodnow, & Austin, 1956) to Fodor (1975) to Evans (1982), took theory testing based on deductive logic, thus following Popper (1959/2005), as the key to human learning. When Wason (1969) examined adults’ reasoning and found discrepancies from the rules of logical deduction, he and other contemporary cognitive psychologists did not challenge the normativity of logic but inferred that something in his carefully constructed selection task “predisposes people to regress temporarily to less sophisticated modes of cognitive functioning” (p. 478).

Yet Wundt's (1912/1973) rejection of logic as the foundation of cognition experienced a renaissance in psychology, although on the basis of very different arguments. Specifically, the normativity of logic came under attack in two ways in the late 1980s and 1990s. According to Cosmides (1989), natural selection did not produce general‐purpose cognitive algorithms but rather cognitive algorithms that succeed in solving recurrent adaptive problems, such as the threat of being cheated in a social exchange. From this perspective, reasoning obeys a Darwinian and not a formal deductive logic. The second challenge arose in terms of a probabilistic approach being taken to purportedly logical reasoning tasks (Oaksford & Chater, 1994). According to this view, the conclusion that humans reason irrationally results from comparing "apparently irrational behavior … with an inappropriate logical standard" (Oaksford & Chater, 2001, p. 349). Specifically, people's reasoning in the Wason selection task is better understood in terms of a process of inductive hypothesis testing (and a Bayesian model of optimal data selection) than in terms of an "outmoded falsificationist philosophy of science" (Oaksford & Chater, 1994, p. 608). Consequently, probability theory rather than logic should be the normative benchmark. It is, of course, not without irony here that human reasoning has also been famously observed to deviate from the norms of probability theory (Barbey & Sloman, 2007; Kahneman, 2011; Kahneman & Tversky, 1972). Yet, as in research on reasoning, both the evidence for people's proneness to errors in statistical reasoning (Peterson & Beach, 1967) and the appropriateness of the invoked probabilistic norms for human rationality (Gigerenzer, 1996) have been hotly debated among psychologists.

There has thus been a history of opposing views on whether classical logic should serve as a universal benchmark for human rationality. Similar arguments have been raised with regard to probability, coherence, and other benchmarks of rationality (see Arkes, Gigerenzer, & Hertwig, 2016; Hertwig & Volz, 2013). We believe that now is a good moment for psychologists and philosophers to join forces and work toward a new foundation for the definition of rational cognition. This article represents a first step in this direction, with one author being a philosopher and one a psychologist. We begin by briefly outlining two recent developments in psychology—ecological rationality and descriptivism—that contribute to the ongoing debate about appropriate frameworks of rational cognition.


2 Ecological rationality and descriptivism

One development is the concept of "ecological rationality" (Arkes et al., 2016; Gigerenzer, Todd, & the ABC Research Group, 1999; Hertwig, Hoffrage, & the ABC Research Group, 2013; Hertwig, Pleskac, Pachur, & the Center for Adaptive Rationality, in press; A. Kozyreva & R. Hertwig, unpublished data; Todd, Gigerenzer, & the ABC Research Group, 2012). This view endorses the premise that rationality is evaluated against some benchmark but argues that, contrary to a frequent assumption in psychology, there are no universal benchmarks. What are treated as universal benchmarks—for instance, consistency, coherence‐based rules, modus ponens, or Bayes's rule in probability theory—do not suffice to evaluate behavior as rational. Instead, rationality should be measured in terms of the organism's success—accurate predictions or competitive decisions—in the world. Ecological rationality thus aims to shift the researcher's methodological strategy from the a priori imposition of content‐free norms to studying the organism's goals and achievements within the context of specific environmental structures as well as the mind's undeniable cognitive constraints. Researchers would thus ask: Under what environmental structure is a given cognitive strategy (e.g., heuristic, rule, routine) for the task at hand more accurate than competing strategies that need more information and computation, and under what structure is it not? A strategy is ecologically rational to the degree that it is adapted, in the context of the task, to the informational and statistical structure of an environment. This also means that no strategy is good or bad, rational or irrational, per se; rather, it is or is not adapted to the specific task and environment. In addition, it means that a strategy is commonly tested against other strategies that may or may not be even better adapted to the specific task and environment (e.g., Gigerenzer & Brighton, 2009; Spiliopoulos & Hertwig, in press).

Although Elqayam and Evans (2011) classified the concept of ecological rationality among nonnormativist positions, they criticized it for being in danger of committing the dubious inference from "is" to "ought" (see also Elqayam & Over, 2016, p. 46). The position they advocate, descriptivism (Elqayam & Evans, 2011; Elqayam & Over, 2016), is meant to escape the "is–ought" inference trap. The escape is realized by completely eschewing normative concerns. Elqayam and Over proposed that

    the psychology of reasoning and decision making would be better off letting go of normative concerns altogether. Instead of measuring rationality by normative standards, the descriptivist position is that rationality should be measured by the achievement of personal goals. (Elqayam & Over, 2016, p. 7, emphasis added)

To this end, Evans and Over (1996) proposed a distinction between rationality1, measured in terms of achieving one's goals, and rationality2, measured against a priori normative standards, such as classical logic or probability theory. Rationality1 is postulated to be personal and contextual, resulting in instrumental rationality, meaning that an individual behaves in such a way as to achieve his or her personal goals (see also Elqayam, 2012).

In our view, the opposition between descriptivism (rationality1) and normativism (rationality2) that Elqayam and Evans (2011) invoked is misleading because the character of “instrumental rationality” is ambiguous (for other critical objections see Hertwig, Ortmann, & Gigerenzer, 1997). Ordinarily, the assertion that an action is instrumentally rational means that it is rational because it is the appropriate means for some end that, in turn, is assumed to be of value. Thus understood, instrumental rationality does involve a normative dimension insofar as it shifts the normative weight from the end to the means (i.e., to the action; Schurz, 1997, sect. 6.1). There is a second, purely descriptive understanding of instrumental rationality according to which the proposition that an action is instrumentally rational for a given end simply means that the action is appropriate for reaching this end, even if this end is bizarre from a commonsense or intuitive viewpoint. For example, it would sound odd to describe “heavy smoking” as instrumentally rational in regard to the goal of increasing the chances of developing lung cancer or frequent casual sex as instrumentally rational in regard to the goal of contracting a sexually transmitted disease. Yet such descriptive statements would be perfectly fine in this second understanding.

Notwithstanding this criticism, the notions of ecological and instrumental rationality and descriptivism have one thing in common: They object to the reduction of rationality to allegedly universal normative systems, which are, in turn, founded on a priori intuitions that are inaccessible to further justification in terms of their cognitive functionality or success in the world. Next, we turn to the difficulties such intuitions face. To this end, let us dip our toes in some philosophical waters.

3 The problems of justifying rational cognition from a priori norms or intuitions

To appreciate the full force of the foundational issues in question, it helps to briefly consider normative ethics and more specifically the classical distinction between deontological and consequentialist justifications of ethical norms (see Broad, 1967; Frankena, 1963). In deontological frameworks, the justification of norms is rooted in general a priori intuitions about values and duty principles that are assumed to be “good in themselves” (e.g., Kant, 1785/2012). These principles are obligatory, irrespective of the consequences that might follow from our actions. In consequentialist frameworks, in contrast, how correct our moral conduct is will be determined solely by a cost–benefit analysis of the action’s consequences. One example of such a framework is (act) utilitarianism, according to which an action is morally justified if the action’s total good consequences outweigh its total bad consequences (e.g., Mill, 1863/1998). Let us employ the distinction between deontological and consequentialist justification in the context of rational cognition (see Goldman, 1986, p. 97). As in deontological theories of ethics, in apriorist accounts of rational cognition (note that the term deontological is reserved for the domain of ethics), norms are justified by reference to a priori intuitions. Such foundational intuitions could be, for instance, necessity, consistency, or coherence. Generally speaking, a norm or an intuition concerning the rationality of a given cognitive strategy can be described as a priori if either it is considered evident without further justification, or its justification is based on other intuitions that are independent of the consequences of this strategy in a given environment. In contrast, consequentialist accounts of rational cognition justify their benchmarks in terms of what one could call “cognitive success.” This means that these benchmarks acquire “normative legitimacy” through the success of their consequences and not through agreement with some norm such as coherence that is imposed a priori (e.g., transitivity, property alpha, procedural invariance; see table 1 in Arkes et al., 2016).
3.1 Equilibrium justifications and the problem of circularity

In our view, it is highly problematic to enlist a priori intuitions as the foundation for the justification of rational norms. Let us explain our concern. After five centuries of failed attempts in the history of rationalist philosophy, including Kant's (1781/1998), to justify principles a priori, there is wide consensus in contemporary epistemology: It is impossible to justify cognitive principles from nothing, which was Kant's understanding of "a priori." Thus, contemporary philosophers in the rationalist tradition have put the coherence of intuitions at the basis of rationalist justifications that are considered a priori in the sense explained. The method of justifying a priori intuitions by their coherence with other intuitions has been called, perhaps somewhat euphemistically and sidestepping the term intuition, the "method of reflective equilibrium" (Cohen, 1981; Goodman, 1955/1983; Rawls, 1971):

    The key idea underlying this view of justification is that we “test” various parts of our system of beliefs against the other beliefs we hold, looking for ways in which some of these beliefs support others, seeking coherence among the widest set of beliefs, and revising and refining them at all levels when challenges to some arise from others. For example, a moral principle or moral judgment about a particular case (or, alternatively, a rule of inductive or deductive inference or a particular inference) would be justified if it cohered with the rest of our beliefs about right action (or correct inferences) on due reflection and after appropriate revisions throughout our system of beliefs. By extension of this account, a person who holds a principle or judgment in reflective equilibrium with other relevant beliefs can be said to be justified in believing that principle or judgment. (Daniels, 2018, sect. 1)

From a consequentialist viewpoint, however, there is a vigorous objection to such "equilibrium justifications": They are circular. In reply to this objection, several philosophers have argued that even circular justifications may have epistemic value (e.g., Goldman, 1999, p. 85; Psillos, 1999, p. 82). However, there are strong counterarguments showing that such hopes are in vain. Before turning to one, let us clarify that we do not deny that certain justification structures that have been called circular in the literature can have epistemic value (see Hahn, 2011); yet these are of a different sort from the circles involved in equilibrium justifications.

3.2 Circular justifications and the problem of contradictory intuitions

One key counterargument to the view that circular justifications have epistemic value demonstrates that contradictory rules can be pseudojustified by the same circular argument structure. For example, the circular inductive justification of induction goes as follows: Inductions were successful in the past, whence, by induction, they will be successful in the future. If one accepts this justification, then—to avoid inconsistency—one must equally accept a counterinductive justification of counterinduction that runs as follows: Counterinductions were not successful in the past, whence, by counterinduction, they will be successful in the future (see Douven, 2011, sect. 3; Salmon, 1957; Schurz, 2018). Finally, a circular justification may also be given for fundamentalist doctrines, such as the "rule of blind trust in God's voice," which a person may hold in reflective equilibrium with the intuition that "God's voice in me tells me that I should blindly trust his voice."

The fact that equilibrium justifications can easily support contradictory intuitions demonstrates that circular justifications are highly problematic. Because of their circularity, these contradictory intuitions cannot be meaningfully correlated with the world but are rather inescapably subjective in nature. A striking example of an intuition‐based account of rationality in psychology and cognitive science is Cohen's (1981) article "Can Human Irrationality Be Experimentally Demonstrated?" According to Google Scholar, this article has been cited a total of 1,414 times (June 23, 2018). The philosopher Cohen vehemently argued against the bleak implications for human rationality implied especially by the research in psychology on probabilistic reasoning (Kahneman & Tversky's heuristics‐and‐biases program; Kahneman, 2011) and deductive reasoning. For Cohen, rules of logical and probabilistic reasoning such as modus ponens, modus tollens, and Bayes's theorem are based on intuitions about correct reasoning. He put it as follows: "The presence of fallacies in reasoning is evaluated by referring to normative criteria which ultimately derive their own credentials from a systematization of the intuitions that agree with them" (1981, p. 317). From this it follows, Cohen argued, that if people's reasoning deviates from such rules, this merely means that they have different intuitions about correct reasoning than logicians or probability theorists do, and therefore experimenters "risk imputing fallacies where none exist" (1981, p. 330).

The subjective nature of intuition‐based justifications raises the problem of how to arbitrate between competing normative systems. Some have diagnosed this arbitration problem as essentially unsolvable because cognitive norms, goes the argument, will necessarily be based on intuitions, without external standards of cognitive success (Elqayam & Evans, 2011). Consequently, an intuition‐based justification of rationality is doomed to result in a strong form of cognitive relativism (“anything goes”)—a position whose consequences the philosopher Stich (1990) worked out.

3.3 Why a consequentialist account of rational cognition is indispensable

What follows from this discussion? First, we do not deny that intuitions are needed in some areas, for example, in ethics, where one inevitably must define what counts as intrinsically valuable. However, the appropriateness of cognitive systems should be evaluated not by intuitions but, so we argue, by demonstrations that these systems have successful consequences in the real world. Cognitive success is thus a concept that brings a consequentialist perspective to the justification of norms for rational cognition. Recall that consequentialism (as used in ethics) means that the moral rightness of an act depends only on the consequences of that act. By analogy, an act of a cognitive system is rational insofar as its consequences are successful. For instance, the validity of the rule of modus ponens is established not "by intuition" but by the semantic proof of its strictly truth‐preserving nature: If "p" and "p implies q" are true, then "q" will be true as well, no matter the environment one is in. This justification of modus ponens is consequentialist in nature. Cohen (1981, p. 319) objected that the "if–then" of classical logic deviates from the if–then of natural language. Therefore, according to Cohen (1981, p. 319), intuitions need to be invoked to determine the "right" meaning of the conditional. To this argument, the consequentialist reply is that assuming there is an "objectively right" meaning is a "rationalistic illusion"—there are only more or less cognitively successful meanings, and these can change across contexts (see also Hertwig, Benz, & Krauss, 2008; Hertwig & Gigerenzer, 1999). It is well known that the if–then of natural language has a number of different semantic interpretations (cf. Bennett, 2003). The question of which is cognitively most appropriate should be answered not by reference to intuition but by replacing the ambiguous if–then of natural language with semantically well‐defined conditionals (e.g., strict, uncertain, indicative, counterfactual) and investigating their cognitive properties. In later sections we investigate the cognitive success of different systems of strict and uncertain conditional reasoning, with surprising results. Investigations of this sort are impossible as long as these systems are merely evaluated and justified on the basis of intuition, particularly since intuitions have increasingly come to diverge in the area of conditional reasoning (see Pelletier & Elio, 1997).
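The semantic argument for modus ponens can be made concrete in a few lines of code. The following minimal sketch (ours; the article itself contains no code) enumerates all truth-value assignments under classical two-valued semantics and confirms that the rule never leads from true premises to a false conclusion:

```python
from itertools import product

def material_conditional(p: bool, q: bool) -> bool:
    # Classical material conditional: "if p then q" is equivalent to "not-p or q".
    return (not p) or q

# Check every truth-value assignment: whenever both premises of modus ponens
# ("p" and "p implies q") are true, the conclusion "q" is true as well.
for p, q in product([True, False], repeat=2):
    if p and material_conditional(p, q):
        assert q, "modus ponens yielded a false conclusion from true premises"

print("Modus ponens is strictly truth-preserving under classical semantics.")
```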

In sum, taking intuitions as sacrosanct would hinder empirical research and rational criticism. We suggest that the better justification of norms of rational cognition is consequentialist in nature. Within a consequentialist account, the severe problems of arbitration, cognitive relativism, and the indeterminate correspondence of intuitions and the world are either removed or less grave—at least so we claim. The reason lies in the promise that all normative systems of reasoning can be measured on a commensurable metric that we call cognitive success. What is it and how can it be measured?

4 What is cognitive success?

Next, we propose a consequentialist account of rational cognition. Our account is in line with Quine's naturalized epistemology (1960) but goes beyond it in its explication and applications of the notion of cognitive success, as well as in its new understanding of the interplay between its descriptive and normative components. What distinguishes the present proposal from all naive sorts of pragmatism is that cognitive systems are evaluated in terms of cognitive rather than practical success indices (such as moneymaking). What is measured by cognitive success is the “cognitive part” of rationality. Cognitive rationality is a precondition for practical rationality, but unlike practical rationality, it abstracts from the question of what ends are normatively right or intrinsically good. In contrast, practical rationality, in the philosophical understanding of this concept, attempts to answer this question. For example, knowing the optimal temperature for roasting meat is “cognitively rational,” but the assessment of the practical rationality of roasting meat depends on one's ethical attitude toward a vegetarian versus nonvegetarian diet.

A consequentialist approach to the definition of rational cognition faces two main challenges. First, how can the value of cognitive success be justified without again presupposing normative intuitions, thus inheriting all the problems outlined above? According to philosophical arguments harking back to Hume (1739/40) and Moore (1903), it is impossible to derive norms solely from the “is,” that is, from empirical facts (Schurz, 1997). Consequently, every instrumental justification of particular norms must assume, besides factual information, more general norms. For example, inferring that calisthenics is good from the fact that it improves fitness assumes that fitness is a general norm. Does this argument then not thwart any attempt to ground the notion of cognitive success in anything but, again, normative intuitions?

Although this objection—every instrumental justification of particular norms must assume more general norms—is logically correct and has useful applications in ethics (Schurz, 2014), we argue that it does not apply to psychology and cognitive science for the following reason: Cognitive success is instrumental for all—or at least most—purposes. Every real‐world decision problem involves, as a part of it, a ubiquitous cognitive task, namely, predicting which of the available actions will have the maximum expected payoff in light of a given reward function. Greater success in this cognitive task will, by and large, lead to greater success in one's actions, independently of the goals pursued (Schurz, 2014). Is the premise that cognitive success is instrumental for almost all purposes really sufficient for the normative justification of cognitive success? Logically speaking, no, because this premise is descriptive and (as explained above) no "ought" can follow from an "is" by rules of logic alone. However, the missing normative premise that fills the logical gap is relatively harmless: We assume that it is by and large good to help people attain their personal goals. This is indeed a fundamental and widely shared intuition, though not a cognitive but a moral one.

Moreover, the insight that cognitive success is instrumental for almost all practical purposes helps solve the problem of the apparent relativity of instrumental rationality to one's assumed purposes, which for many authors constitutes a fundamental obstacle to the objectivity of this notion (e.g., Stich, 1990, p. 131). We suggest that the purpose‐invariant core of all forms of instrumental rationality is precisely their cognitive rationality (Kornblith, 2002, p. 158). Thus, there are no separate forms of instrumental rationality for cooks, clerks, and pilots, or for right‐wing and left‐wing politicians. What is common to all these applications of instrumental rationality is their cognitive success. This means that cognitive success is not to be mistaken for moral rightness.

This brings us to the second challenge to a consequentialist approach to defining rational cognition: the meaning of cognitive success. The details will depend on the cognitive task at hand. Yet there must be a core meaning of “cognitive success” that is common to all competing systems of rational reasoning; otherwise, it would be impossible to compare them using the same currency. Above, we argued that every real‐world decision problem involves—or can be reformulated in terms of—some kind of prediction problem. On the basis of this premise, we suggest the following definition:

    The core meaning of the cognitive success of a system (including algorithms, heuristics, rules) is defined in terms of successful predictions, assuming a comprehensive meaning of prediction that includes, besides the predictions of events or effects, predictions of possible causes (explanatory abductions) and in particular predictions of the utilities of actions (decision problems).

Characterizing a decision problem in terms of a prediction task might seem narrow. Yet much of what people do is predicated on implicit or explicit forecasts about how the future will unfold. Choosing a job, getting married, having children and investing in their education, purchasing an apartment, voting for a party, saving for old age, choosing a medical treatment—all these decisions and many others are reached on the basis of predictions about what the future holds. Moreover, focusing on predictions by no means implies that important cognitive processes are ignored. Since reliable predictions are based on an inductive inference from sufficiently informed premises, they engage various nonpredictive subprocesses such as search, memory retrieval, and language processing. Importantly, the major purpose of the predictive reformulation of decision tasks is to measure their cognitive success on a commensurable scale. For example, consider the decision problem of buying the "best" car (relative to the buyer's preferences), where the buyer encounters two websites offering competing decision methods, M1 and M2. Then the claim that method M1 is more appropriate for a certain group of car buyers (e.g., males between the ages of 20 and 30) amounts to the testable prediction that the degree of future satisfaction of car buyers in this group who used method M1 is significantly higher than that of car buyers who used method M2.
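To make the predictive reformulation tangible, here is a minimal sketch of how the claim about M1 and M2 could be operationalized; the satisfaction ratings below are invented purely for illustration:

```python
from statistics import mean

# Hypothetical follow-up satisfaction ratings (scale 0-10) from two groups of
# buyers in the target group, one having used method M1, the other method M2.
# All numbers are invented for illustration.
m1_ratings = [8, 7, 9, 6, 8, 7, 9, 8]
m2_ratings = [6, 5, 7, 6, 5, 7, 6, 5]

# The claim "M1 is more appropriate for this group" amounts to the testable
# prediction that mean satisfaction is higher among M1 users than among M2 users.
print(f"mean satisfaction: M1 = {mean(m1_ratings):.2f}, M2 = {mean(m2_ratings):.2f}")
print("prediction borne out:", mean(m1_ratings) > mean(m2_ratings))
```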

Upon closer inspection, the predictive success of a cognitive system or (more generally) a cognitive method depends on two components that are commonly in competition and whose optimization thus involves a trade‐off. In the psychological literature, this trade‐off is reflected in the distinction between the ecological validity of a prediction method (Brunswik, 1952; Gigerenzer et al., 1999) and its applicability. More precisely, a method's cognitive success can be factorized into the product of these two components as follows:

cognitive success = ecological validity × applicability,

where applicability is the proportion of targets, among all intended targets of prediction, for which the method renders a prediction, and ecological validity is the sum of scores divided by the number of all predictions rendered, with

score (per prediction) = max − loss,

where max is the maximal score that a perfectly accurate prediction can obtain and loss is a monotonically increasing function of the distance between the predicted and the actually observed value of the event variable. From this it follows that

cognitive success = (sum of scores) / (number of all intended targets of prediction).


Ecological validity and applicability of a cognitive method are in competition. One can increase the ecological validity of a method by having it apply only to those few target domains for which the method's predictions are known to be accurate because, for instance, the method has been fitted to this domain. Likewise, one can increase the applicability of a method by applying it also to target domains for which its error rate is unknown or even known to be high, or by permitting the method to make a random guess in cases where the algorithm does not reach a decision (i.e., in this sense is not applicable). Also note that the definition of applicability refers to all intended target domains, not to all possible target domains. Thus, a method's cognitive success cannot be deemed low because it does not apply to domains that were never intended to be part of the class of target domains. Consider, for illustration, the analogy of a hammer—its "success" is not diminished by the fact that the hammer is not suitable for drilling holes. We also emphasize that a method's class of intended target domains is not an invitation to propose arbitrary reference classes but rather is empirically inferred in terms of the method's purposes across all users. Thus, a method's cognitive success cannot be arbitrarily boosted by winnowing down its intended targets to "easy ones."

The score that a method earns for each prediction is its maximally achievable score (max) minus its distance to the observed value (loss). The type of loss function and max are specified by the type and context of the given task. Often max is identified with the greatest possible loss; this entails that min, that is, the minimal score, is zero. If loss is identified with the absolute distance function, max is given as the width of the observable value range. For example, if the task consisted in forecasting the next day's mean temperature, with values lying in the range between −20°C and +40°C and the loss function given as the absolute difference between predicted and actual mean temperature, then max is 60°C. If the task is the prediction of probabilities, such as the probability that it will rain tomorrow, then, according to a famous result of Brier (1950), the appropriate loss function is not the absolute but the squared distance between the predicted probability and the truth value of the predicted event; thus, max = 1 (true) and min = 0 (false). In the example of people intending to buy a car, a natural loss function might be the absolute difference between the mean degree of satisfaction (in an unbiased sample) with the car type recommended by method M1 and that recommended by method M2, with degree of satisfaction measured on a scale ranging, say, from min = 0 to max = 10.
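The definitions above can be expressed in a few lines of code. The sketch below (ours) uses the temperature example with the absolute-distance loss and max = 60; the forecast data and the step of normalizing scores by max, so that validity and success fall in [0, 1], are our own illustrative assumptions:

```python
# Illustrative forecasts: (predicted, observed) mean temperatures in degrees C;
# None marks intended targets for which the method rendered no prediction.
forecasts = [(12.0, 10.0), (15.0, 18.0), None, (20.0, 20.0), None]
MAX = 60.0  # width of the observable value range, -20 to +40 degrees C

def score(predicted: float, observed: float) -> float:
    loss = abs(predicted - observed)  # absolute-distance loss function
    return MAX - loss                 # score (per prediction) = max - loss

rendered = [f for f in forecasts if f is not None]
applicability = len(rendered) / len(forecasts)
ecological_validity = sum(score(p, o) for p, o in rendered) / (MAX * len(rendered))
cognitive_success = ecological_validity * applicability

# Equivalent formulation: sum of (normalized) scores over all intended targets.
direct = sum(score(p, o) for p, o in rendered) / (MAX * len(forecasts))
assert abs(cognitive_success - direct) < 1e-12

print(f"applicability = {applicability:.2f}")        # 0.60
print(f"ecological validity = {ecological_validity:.3f}")
print(f"cognitive success = {cognitive_success:.3f}")
```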

4.1 Some possible objections to cognitive success

Let us freely admit that intuitions can play a role in determining the details of the scoring function. However, robust results should be largely invariant to changes of the scoring functions (see the section on uncertain conditionals below). Another objection to the concept of cognitive success is that it downplays the role of explanations relative to predictions. This challenge can serve as a further test case for our account. Salmon (1984) argued that what distinguishes explanations from predictions is that, whereas predictions can be based on noncausal correlations, explanations must spell out the causes of the event to be explained. Although we agree, we emphasize that causality can easily be embedded into the concept of cognitive success. What distinguishes a causal from a noncausal correlation between a variable X and another one Y is that the effect of an intervention on X will be transmitted to Y only if X is a cause of Y (this is a consequence of the causal Markov condition; see Pearl, 2009). Thus, the cognitive success of causal information resides in its capacity to predict the consequences of (human) actions.

Another account identifies good explanations with argument patterns that unify many empirical phenomena (Kitcher, 1981). However, it can be shown that empirical unification correlates with empirical confirmation and this, in turn, correlates with predictive success (Schurz & Lambert, 1994). The only notions of explanation that are not and should not be covered by our account are those that make the quality of an explanation dependent on its coherence with “intuitions of understanding” and that are inexplicable in terms of causal or unificatory concerns.

The two core components of our notion of cognitive success, ecological validity and applicability, are related to a number of further important evaluative dimensions:

    A method with high ecological validity has a high truth rate in those situations where it is applicable; thus, high ecological validity is connected with low risk of error.
    A method with high ecological validity may nevertheless have low cognitive success if it can rarely be recruited due to low applicability.
    A method with high applicability renders predictions possible across many predictive contexts. High applicability therefore suggests that the method has a high information output.
    On the other hand, a method's applicability is inversely related to its cognitive costs, measured in terms of the information input needed and the effort required to process it. The higher the cognitive cost of a method, the more often it will be inapplicable because it exceeds the upper bound of agents’ cognitive resources (see also Payne, Bettman, & Johnson, 1993).

The threefold tension between risk of error, information output, and cognitive costs creates a fitness landscape that can explain many facets of the pros and cons of competing systems of rational reasoning. How these cognitive fitness factors interact in concrete cognitive tasks will be discussed next. In particular, the tension between these factors explains why cognitive science needs not a monism but a pluralism of cognitive methods, and why the evaluation of those methods' advantages and weaknesses should rely not on intuition but on careful comparison of their respective success. Next, we illustrate this point by applying the notion of cognitive success in the domains of classical material conditionals, uncertain conditionals, Bayesian probabilities, and prediction and choice.

4.2 Cognitive success and deductive reasoning

Let us return to classical logic, our introductory example of what many psychologists considered a universal norm of rational cognition in the 20th century. Deductive inferences are, by definition, completely valid—that is, they have maximum ecological validity (1.0): In all situations in which all premises are true, the derived conclusion will invariably be true. Yet this ideal validity of deductive inferences stands in stark contrast to their very low applicability, as emphasized by Wundt (1912/1973; see above). That is, the prevalence of deductive inferences with nontrivial conclusions is low. As an example, consider inferences of propositional logic involving the classical (material) conditional "If P, then Q" (semantically equivalent to "not‐P or Q"). Such an inference can have a nontrivial conclusion insofar as it is possible to confirm each premise by observations that do not already contain the conclusion. This will be the case if the following condition is satisfied: The verification of the conditional premise "If P, then Q" is based not on the observation of "not‐P" or of "Q," but rather on an inductively supported belief that expresses (at least implicitly) a strict (exceptionless) generality of the form "For all x in a given domain: If P(x), then Q(x)" (see Schurz, 2014, sect. 5.1). Exceptionless regularities (i.e., conditional probabilities of 1.0) are known to be rare in empirical (nonmathematical) domains. What does this mean? It simply means that inferences of propositional logic with nontrivial conclusions are rare. Therefore, their overall cognitive success will be low in these domains, notwithstanding their maximum validity. Only if one could demonstrate that in a specific environment the applicability of deductive reasoning is high could one argue in favor of this system's high cognitive success in this environment. One such environment may be cheater detection, where people can be, under specific circumstances, remarkably successful when measured in terms of modus tollens reasoning (Cosmides & Tooby, 1992). Another domain may be consistency checks in legal reasoning (Arkes et al., 2016).

4.3 Cognitive success and reasoning with uncertain conditionals

Uncertain conditionals are conditionals of the form “If A, then normally B.” They are epistemically acceptable if the associated conditional probability pr(B|A) is “sufficiently” high, that is, higher than a contextually determined threshold α > .5. Systems of probability logic infer further conditionals from sets of uncertain conditionals. There are four well‐known systems of reasoning with uncertain conditionals: O, P, Z, and QC. System O (Hawthorne & Makinson, 2007) is the only system that preserves epistemic acceptability from premises to conclusion for any chosen acceptability threshold. System P is the famous system of probability logic developed by Adams (1975). It guarantees to preserve epistemic acceptability only if the sum of the premises’ conditional uncertainties is smaller than 1.0 minus the acceptability threshold (where uncertainty is defined as 1.0 minus probability; Oaksford & Chater, 2007, p. 111). System Z goes back to Pearl (1990) and makes additional default assumptions that, roughly speaking, maximize the entropy of the distribution under the high‐probability constraints dictated by the premise conditionals (Hill & Paris, 2003). System QC (for “quasi‐classical reasoning”) reasons with uncertain conditionals as if they were exceptionless conditionals of classical logic.

For illustration, assume a small world with only four predicates: “being a bird” (B), “being able to fly” (F), “having wings” (W), and “being male” (M). The known premises are the two uncertain conditionals (a) B ⇒ F (birds can fly) and (b) B ⇒ W (birds have wings), with associated probabilities pr(F|B) = pr(W|B) = .95. System O draws only trivial inferences such as B&(M∨¬M) ⇒ F from Premise a, meaning birds that are either male or not male can fly, with an associated probability of .95. In addition to the previous inference, system P draws the inference B&W ⇒ F from Premises a + b, meaning, birds having wings can fly, with an associated probability of .9. System P does so by applying the law of “cautious monotonicity” and the uncertainty sum rule. In addition to the previous inferences, System Z draws the inferences B&M ⇒ F and B&¬M ⇒ F from Premise a, meaning male birds as well as nonmale birds can fly, with an associated probability of .95. It does so by making the default assumption that the predicates “male” and “being able to fly” are statistically independent (likewise in application to Premise b). Finally, in addition to all previous inferences, System QC draws the “risky” inference of contraposition ¬F ⇒ ¬B, meaning nonflying objects are not birds. This follows from Premise a with an associated probability of .95 (similarly in application to Premise b).
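As a small illustration of where the .9 in the System P example comes from, here is a sketch of the uncertainty sum rule as described above (ours; the function name is hypothetical, and System P itself comprises more than this single rule):

```python
def system_p_lower_bound(premise_probs):
    # Uncertainty sum rule: the conclusion's uncertainty (1 - probability) is at
    # most the sum of the premises' uncertainties, so its probability is at least
    # 1 minus that sum (floored at 0).
    total_uncertainty = sum(1.0 - p for p in premise_probs)
    return max(0.0, 1.0 - total_uncertainty)

# Premises a and b with pr(F|B) = pr(W|B) = .95:
print(f"{system_p_lower_bound([0.95, 0.95]):.2f}")  # 0.90 -> bound for B&W => F
print(f"{system_p_lower_bound([0.95]):.2f}")        # 0.95 -> single-premise inference
```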

These four systems differ significantly in their predictive power. They become increasingly powerful and, at the same time, more risky and error prone. That is, the applicability (number of derived conclusions) and error probability (number of mistakes made) increase from O to P to Z to QC. From a consequentialist viewpoint, the question is not which of these systems is the right or true one, but which is superior with regard to cognitive success. Schurz and Thorn (2012) performed a cognitive‐success analysis of the systems O, P, Z, and QC. In their computer simulation, an environment with four binary variables a, b, c, d and a randomly generated probability distribution was repeatedly simulated. The possible cases (predictive targets) consisted of all 464 conditionals with conjunctions of one, two, or three unnegated or negated variables in their antecedent or consequent. The task on which the four systems were compared was the derivation of conditionals from four randomly selected base conditionals with conditional probabilities ≥ .7, together with a prediction of their associated conditional probabilities. Thus, there were at most 460 conditional probabilities to be predicted. Four different scoring rules for cognitive success were compared. Table 1 presents the results for the ACG (advantage compared to guessing) score, which is based on the absolute difference between the predicted and the actual conditional probability for each of the derived conditionals. Although the ordering of the four systems according to their ecological validity is O > P > Z > QC, their applicability ordering is precisely the inverse, QC > Z > P > O. The resulting cognitive success ordering is Z > QC > P > O.

Table 1. Cognitive success analysis of four systems of reasoning with uncertain conditionals

System   Applicability (% of 460 intended predictions)   Sum of Scores (ACG score)(a)   Ecological Validity (range [0, 1])   Cognitive Success (range [0, 1])
O         1.0     4.6    0.92   0.009
P         1.4     5.2    0.82   0.011
Z        10.5    22.5    0.47   0.049
QC       22.9     8.5    0.08   0.018

Note
    (a) For normalization purposes, the ACG scores in table 2 of Schurz and Thorn (2012) were multiplied by 3.

In light of these results, Schurz and Thorn (2012) concluded that System Z achieves the optimal balance in the trade‐off between deriving true and informative conclusions and avoiding false or uninformative ones. Schurz and Thorn (2012) and Thorn and Schurz (2014) investigated three additional scoring rules: PIR (price is right), sPIR (subtle price is right), and EU (expected utility). The qualitative orderings of the ecological validity, applicability, and cognitive success of the four systems were the same across all four success measures, demonstrating the robustness of the results.

4.4 Cognitive success and Bayesian probabilities

Bayesian probabilities are internally coherent degrees of subjective belief. Following arguments by Ramsey (1926/1990) and De Finetti (1937/1964), coherence is usually justified as follows: If one interprets coherent degrees of belief as fair betting quotients, one is guaranteed never to accept a system of bets that incurs a logically guaranteed loss, that is, a "Dutch book." The Dutch book argument is thus indeed a consequentialist justification, as it ties the consequences of a person's subjective probabilities back to monetary outcomes. However, what is thus justified is merely the coherence of probabilities, and this means only that they need to satisfy the basic (Kolmogorovian) probability axioms. This is indeed a necessary constraint on rational degrees of belief, but by itself it is not sufficient for rational degrees of belief to yield cognitive success. The condition of a coherent fair betting quotient depends solely on the gambler's subjective beliefs and preferences. It does not involve any adaptation to the environment, that is, to the true frequency or statistical probability (frequency limit) of the events bet on. Consider, for example, a subjectivist who repeatedly offers betting odds of 1:1 that she will roll a 6 with an unbiased die. She considers this bet to be fair and is equally willing to accept the opposite bet that she will not roll a 6. She is coherent and will remain coherent even after she has lost her entire fortune. She will be puzzled that while everybody readily accepted her first bet, nobody accepted the opposite bet, even though both are equally fair in her view. Thus, if she ignores the frequentistic chances of the events bet on, she will be unable to explain why she lost everything and others won.
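The die example can be simulated directly. In the following sketch (ours, with illustrative stakes), opponents accept only the side of the 1:1 bet that favors them, and the coherent subjectivist loses steadily:

```python
import random

random.seed(0)

# The subjectivist offers 1:1 odds that she rolls a 6 with a fair die and regards
# both sides of the bet as fair. Opponents, tracking the true chance of 1/6,
# accept only the bet against the 6. Stakes: 1 unit per round (invented numbers).
rounds = 10_000
balance = 0.0
for _ in range(rounds):
    roll = random.randint(1, 6)
    balance += 1.0 if roll == 6 else -1.0

print(f"her average result per round: {balance / rounds:+.3f}")
# Expected value: 1/6 - 5/6 = -2/3 per round. Her beliefs are coherent but,
# detached from the objective frequencies, ecologically invalid and costly.
```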

As this, admittedly engineered, example illustrates, the problem of subjective degrees of belief is not their low applicability (an individual's beliefs could discriminate between many states of the world) but their potentially low ecological validity. The Bayesian coherence requirement is too weak to exclude cognitively unsuccessful behavior if one's degrees of belief are not connected with objective truth‐chances (i.e., statistical probabilities; Knight, 1921). There are pertinent methods in Bayesian statistics of establishing this connection (less well‐known than the Dutch book arguments), such as Lewis's “principal” principle (1980/1986) or De Finetti's (1937/1964) equivalent “exchangeability” principle. These principles demand that a person's rational degree of belief (Pr) in an event (E) should have the value r, given that all that the person knows is that the statistical probability (pr) of the corresponding event type E is r, more formally, Pr(E | pr(E) = r) = r (for arbitrary r ∈ [0, 1]). One can prove that the satisfaction of this principal principle is equivalent to the assumption of Bayesian statistics that degrees of belief can be represented as weighted averages of statistical probabilities. Subjective probabilities that satisfy this condition are known to converge toward the true statistical frequencies when the evidence increases infinitely, independently of the assumed prior distributions (Gillies, 2000, p. 71ff; Howson & Urbach, 1993, chapter 14; Schurz, 2013, pp. 165, 236f). It is only if this connection between subjective and objective probabilities is established that Bayesian reasoning can be cognitively successful and decisions based on maximization of subjectively expected utility can maximize one's average utility.
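The convergence claim at the end of this paragraph can be illustrated with a toy Bayesian learner whose degrees of belief are a mixture over statistical hypotheses (here, a Beta prior over the chance of an event); the priors and the true chance below are arbitrary choices of ours:

```python
import random

random.seed(1)

TRUE_CHANCE = 1 / 6  # the objective statistical probability of the event
N = 10_000

# With a Beta(a, b) prior, the posterior mean after n observations containing
# s successes is (a + s) / (a + b + n), which approaches the true frequency
# as evidence grows, whatever the prior.
for a, b in [(1, 1), (9, 1), (1, 9)]:
    successes = sum(random.random() < TRUE_CHANCE for _ in range(N))
    posterior_mean = (a + successes) / (a + b + N)
    print(f"prior Beta({a}, {b}): posterior degree of belief = {posterior_mean:.3f}")
# All three posteriors land near 1/6, illustrating prior-independent convergence.
```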

4.5 Cognitive success in prediction and choice

Perhaps more than in any other research area in psychology, the tension between apriorist and consequentialist accounts of rational cognition has unfolded in the debate about the meaning of bounded rationality in general and the role of heuristics in particular. The heuristics‐and‐biases research program (Kahneman, 2011), possibly the most influential research program in psychology of the last five decades, has consistently invoked the rules of probability theory and statistics as a priori norms for human rationality. Deviations from these norms in people's reasoning were taken as manifestations of irrationality. In Kahneman's (2003) portrayal of the program's research, it "attempted to obtain a map of bounded rationality, by exploring the systematic biases that separate the beliefs that people have and the choices they make from the optimal beliefs and choices assumed in rational‐agent models" (p. 1449). Many of the systematic biases were attributed to the operation of heuristics (e.g., availability, representativeness, and anchoring‐and‐adjustment) that, although "quite useful," sometimes "lead to severe and systematic errors" (Tversky & Kahneman, 1974, p. 1124). On this view, a heuristic's rationality is evaluated exclusively on the basis of its conformity to the norms and not in terms of its potential cognitive success.

This changed with the arrival of the ecological rationality research program, which has redefined the normative study of heuristics; by extension, it interprets bounded rationality in terms of the match between a heuristic and an environment, the two blades in Simon's (1990, p. 7) scissors metaphor. On this view, this match determines the performance and thus the cognitive success of a heuristic. In order to measure cognitive success, researchers of heuristics' ecological rationality have conducted a wide range of tournaments between simple heuristics and complex strategies commonly considered to be normative. These computer simulations encompass, for instance, the analysis of heuristic inferences about real‐world quantities (e.g., which of two cities has a larger population size; Gigerenzer & Brighton, 2009; Gigerenzer & Goldstein, 1996; Katsikopoulos, Schooler, & Hertwig, 2010) and, more recently, the analysis of choices between uncertain lottery options (Hertwig, Woike, Pachur, & Brandstätter, in press) and of choices in strategic games (Spiliopoulos & Hertwig, in press). For illustration, consider the tournament involving choice strategies selecting between uncertain lottery options (Hertwig et al., in press). The simulations implemented 20 choice environments (defined by different payoff and probability distributions) and randomly generated 6,000 choice problems per environment. The innovation in this simulation was that all strategies (with the exception of the omniscient expected value model) learned about the properties of each problem by sequentially taking one draw at a time from each of the options per problem. The strategies then chose what they inferred to be the best option after each sample (learning stopped after 50 rounds).

Table 2 presents the cognitive success of each of the six choice strategies. The normative benchmark for human beings is either the omniscient expected value theory or, more realistically, the sampling‐based expected value theory. In light of the cognitive success measure, Hertwig et al. (in press) concluded that under uncertainty (when all strategies have incomplete knowledge and need to sample the environment), some simple choice heuristics nearly match the performance of the sampling‐based expected value theory—even though they may ignore entire swaths of information. The well‐performing equiprobable heuristic, for instance, ignores all probabilities and merely calculates the mean of all outcomes within each option, then chooses the option with the highest mean. Indeed, the research on ecological rationality has repeatedly demonstrated that simple heuristics, which curtail the search for information and reach decisions without complex calculations, can lead to surprisingly good inferences and predictions relative to complex algorithms based on the principles of logic, probability theory, and maximization.

Table 2. Cognitive success analysis of choice strategies in choice environments requiring learning of the properties of the choice options

Strategy(a)                               Applicability (in %)(b)   Cognitive Success (range [0%, 100%])(c)
                                                                    N = 5      N = 20     N = 50
Equiprobable                              100                       93.1       94.6       93.7
Probable                                  100                       86.4       92.3       93.5
Lexicographic                             100                       86.4       87.9       88.0
Least‐likely                              100                       54.2       61.5       64.3
Sampling‐based expected value theory(d)   100                       94.0       98.3       99.3
Omniscient expected value theory          100                       100        100        100

Notes
    (a) All strategies are described in detail in Hertwig et al. (in press).
    (b) In this analysis all strategies were always applicable because they could either select the options or choose randomly.
    (c) Average performance across all 20 choice environments and for N = 5, 20, and 50 samples taken per option (two options with two, four, and eight outcomes) from the environment; the cognitive success metric is normalized such that 100% means that a strategy always selected the option with the higher expected value (as did the omniscient expected value model) and 0% means that a strategy always selected the option with the lower expected value.
    (d) The sampling‐based expected value theory can also be implemented in terms of a simple heuristic (i.e., the natural‐mean heuristic; see Hertwig et al., in press).
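To make the equiprobable heuristic and its sampling-based competitor concrete, here is a minimal sketch; the two-option lottery and the way the heuristic is applied to sampled experience (averaging the distinct outcomes observed, each weighted equally) are our own illustrative assumptions:

```python
import random

random.seed(2)

def equiprobable(samples_per_option):
    # Ignore probabilities: average the distinct outcomes observed per option,
    # weighting each equally, and pick the option with the highest mean.
    means = [sum(set(draws)) / len(set(draws)) for draws in samples_per_option]
    return means.index(max(means))

def natural_mean(samples_per_option):
    # Sampling-based expected value (natural-mean heuristic, cf. note (d) above):
    # average all draws, so observed frequencies act as probability estimates.
    means = [sum(draws) / len(draws) for draws in samples_per_option]
    return means.index(max(means))

# Option A pays 10 with probability .1 (else 0); option B pays 2 for sure.
# Expected values: A = 1.0, B = 2.0. Each strategy sees N = 20 draws per option.
samples = [
    [10.0 if random.random() < 0.1 else 0.0 for _ in range(20)],  # option A
    [2.0] * 20,                                                   # option B
]
print("equiprobable chooses option", "AB"[equiprobable(samples)])
print("natural mean chooses option", "AB"[natural_mean(samples)])
```

If the rare outcome of 10 shows up in the sample, the equiprobable heuristic rates option A at (10 + 0)/2 = 5 and picks it, whereas the natural-mean strategy weights outcomes by their observed frequencies and picks B.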

The strategies in Table 2 selected an option randomly in cases where their policy and the information available did not render a choice, that is, where they were not applicable. For this reason, their applicability is always 100% and their ecological validity and cognitive success are identical. In other tournaments measuring cognitive success, some competing methods have low applicability, whereas others are always applicable. This is particularly the case in tournaments including meta‐inductive selection strategies. The account of meta‐induction (Schurz, in press; Schurz & Thorn, 2016) is in an important sense complementary to the research program of ecological rationality: Meta‐induction is a general meta‐cognitive strategy designed to choose, in each situation in which it is applicable, a locally optimal method from a given toolbox of candidate methods. Two important meta‐inductive strategies are take‐the‐best and success‐weighting. Take‐the‐best applies in each round of the tournament. It selects the prediction method that is applicable (i.e., renders a prediction) and that has the best success record in the past. Success‐weighting predicts a weighted average of the predictions of those methods that rendered a prediction in the given round of the tournament, with weights reflecting the methods' past successes. Table 3 presents the results of applying take‐the‐best and success‐weighting to the results of the Monash University footy tipping competition (MUFTC). The predictive target was forecasting the 3‐valued results (1, 0, or tie) of matches of the Australian Football League. The tournament included the predictions of 1,071 human participants (Table 3 reports the five human forecasters with the highest success rates, Forecasters 1–5) as well as the predictions of the different meta‐induction strategies, including take‐the‐best and success‐weighting.

Table 3. Cognitive success analysis of the Monash University footy tipping competition (after 1,514 rounds) (a)

Predictor             Applicability (in %)    Sum of Scores    Ecological Validity (range [0, 1])    Cognitive Success (range [0, 1])
Success‐weighting     100                     877              0.579                                  0.579
Take‐the‐best         100                     873              0.577                                  0.577
Forecaster 1          39                      839              0.640                                  0.554
Forecaster 2          27                      811              0.637                                  0.536
Forecaster 3          13                      789              0.666                                  0.521
Forecaster 4          12                      789              0.676                                  0.521
Forecaster 5          13                      787              0.658                                  0.520

Notes
    (a) The target was forecasting the results of 1,514 matches of the Australian Football League over eight seasons, from 2005 to 2012. The tournament included the predictions of 1,071 human participants and the predictions of various meta‐induction strategies, including take‐the‐best and success‐weighting.

The five best human forecasters displayed high performance only in certain rounds and refrained from making predictions in other rounds. The meta‐inductive strategies utilized the predictions of the best human forecaster in each round, with the result that their applicability was 100% and their cognitive success surpassed that of the best human forecasters (with a slight advantage of success‐weighting over the simpler take‐the‐best strategy).
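
For concreteness, the two meta-inductive strategies can be sketched as follows. This is our reading of the verbal description above, not the authors' code; we assume each candidate method returns a numeric prediction or None when it abstains in the current round, and that past_success maps each method to its cumulative score so far.

    # Meta-inductive selection strategies (illustrative sketch under the
    # assumptions stated above; at least one method must be applicable).

    def take_the_best(predictions, past_success):
        """Follow the applicable method with the best success record so far."""
        applicable = [m for m, p in predictions.items() if p is not None]
        best = max(applicable, key=lambda m: past_success[m])
        return predictions[best]

    def success_weighting(predictions, past_success):
        """Predict the success-weighted average of all applicable methods."""
        pairs = [(p, past_success[m]) for m, p in predictions.items() if p is not None]
        total = sum(w for _, w in pairs)
        if total == 0:  # no track record yet: fall back to a plain average
            return sum(p for p, _ in pairs) / len(pairs)
        return sum(p * w for p, w in pairs) / total

Note how the Table 3 figures for the meta-inductive strategies fit the definition of cognitive success as ecological validity times applicability: for success-weighting, 0.579 × 1.00 = 877/1,514 ≈ 0.579.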

4.6 What does cognitive success mean for the is–ought relationship?

With its focus on the consequences of rational cognition, the notion of cognitive success suggests a new view of the relationship between the normative (“ought”) and the descriptive (“is”) dimensions of theories of reasoning. According to the traditional division of labor, it is the task of armchair philosophy to address normative issues and that of empirical psychology to answer descriptive questions. From the consequentialist perspective of cognitive success, however, empirical results can become normatively relevant, and normative innovations can suggest new empirical questions (see Corner & Hahn, 2013).
The relationship between the normative and the descriptive, as conceptualized in different accounts of rationality in reasoning—such as apriorism, descriptivism, or ecological rationality—emerges most clearly from the answers they give to the following question: What should one infer from a conflict between descriptively observed and normatively recommended behavior in the context of a cognitive task?

In general, the answers depend on the theoretical positions adopted. Thus, how scholars respond to the gap between “is” and “ought” is diagnostic with regard to the justification of the rationality norms they endorse. Let us assume that an experiment reveals a divergence between how people reason and how they ought to reason according to some standard rational benchmark such as norms of probability theory, logic, or axioms of rational choice. If a scholar adopts an intuition‐based justification of rational cognition, the normatively recommended behavior is not defined in terms of its cognitive success but by reference to a priori intuitions. Thus, the cognitive behavior observed will be judged to be irrational. Alternatively, a scholar may endorse a strong relativism of different systems of intuition. In this case, however, no strong rationality inferences can be drawn (Cohen, 1981; Shier, 2000, p. 78).

In consequentialist accounts, in contrast, both the empirically observed reasoning and the “reasoning” of the normative systems will be evaluated with regard to cognitive success. For example, if logistic regression were regarded as the normative standard for predicting the value of a criterion based on a set of cues, then the cognitive success of this normative standard, assuming some statistical knowledge base, could be measured against the cognitive success of people's predictive inferences from the same input (see also Gigerenzer & Brighton, 2009). This opens up a new option for responding to conflicts between empirical observations and normative recommendations. The cognitive success of observed reasoning is not necessarily worse than that of the normative system; observed reasoning may in fact outperform normative recommendations. As mentioned before, evidence for the latter has been compiled in research on bounded and ecological rationality (Gigerenzer & Brighton, 2009; Gigerenzer, Hertwig, & Pachur, 2011; Hertwig et al., 2013, in press; Todd et al., 2012). If the observed cognitive behavior outperforms the normative system, the consequentialist would need to conclude that the assumed “normative system” is second best and, thus, can no longer be invoked to derive normative recommendations.
If, however, observed behavior scored lower on cognitive success than the normative system, the consequentialist's conclusion would depend on a second theoretical choice that is open to consequentialists but not to intuition‐based accounts, namely, attitudes toward cognitive adaptationism. This position assumes that human cognition is near‐optimally adapted to its relevant environments. Therefore, a consequentialist proponent of cognitive adaptationism may be inclined to argue that the assumed measure of cognitive success is inappropriate and in need of revision. In contrast, a nonadaptationist consequentialist would conclude, faced with evidence that observed behavior's cognitive success is surpassed by that of the normative system, that human cognition is below par.

Let us explain the relationship between cognitive consequentialism and cognitive adaptationism in more detail. One might think that cognitive consequentialism entails cognitive adaptationism because the former evaluates cognitive systems by their cognitive success and cognitive success entails being well adapted. This reasoning, however, conflates “is” and “ought.” Cognitive consequentialism makes the normative claim that cognitive systems should be evaluated in terms of their cognitive success. This implies the normative claim that, ceteris paribus, cognitive systems should be well adapted to their environment. Cognitive adaptationism, by contrast, is not a normative requirement but an empirical thesis, stating that because humans are the product of evolutionary selection processes, they will be cognitively well adapted. This may or may not be the case—an issue to which we return below. For the present discussion, however, it is important to note that cognitive adaptationism is not entailed by the normative requirement of cognitive consequentialism. Consequently, a cognitive consequentialist can be more or less inclined to assume cognitive adaptationism. This choice, in turn, will determine the response to an instance in which actual cognition scores lower on cognitive success than the normative system.

4.7 Cognitive consequentialism and the issue of adaptationism

Let us finally discuss cognitive adaptationism as found in Anderson's (1990, 1991a, b) work in more detail because, prima facie, his rational analysis bears significant resemblance to our account of cognitive consequentialism. Anderson's method consists of five iterative steps (Anderson, 1991a, p. 473):

    1. Specify the goals of the cognitive system.
    2. Develop a model of the environment to which the system is adapted.
    3. Make minimal assumptions about computational limitations, such as memory storage and computation time.
    4. Derive the optimal behavior given (1)–(3) above.
    5. Finally, test empirically whether the predictions of the optimal behavior derived in (4) are confirmed by human cognitive performance; if not, the task–environment model developed in (1) and (2) has to be revised.

Anderson has applied rational analysis to domains such as memory, categorization, causal inference, and problem solving. Regarding Step 1, in these domains Anderson identifies the goal of the cognitive system as some kind of predictive inference. Thus, there is agreement between Anderson's analysis of cognitive goals and our definition of cognitive success. Steps 2 and 3 are also consistent with a cognitive success analysis, except that in Step 3 we would argue that “minimal” assumptions should be replaced by realistic assumptions about computational limitations (see Simon, 1990). The first crucial difference from our account appears in Step 4. The consequentialist account does not intend to “derive” the optimal method from the description of the task and environment, because apart from very simple cases this is impossible. In the area of prediction methods, the nonexistence of a universally optimal method is the content of Wolpert's (1996) famous no-free-lunch theorem (cf. Schurz, 2017). Simon (1991) demonstrated that what does the real work in Anderson's derivations are specific auxiliary assumptions about the cognitive system and its environment. We mostly agree with Simon's critique of Anderson's optimal adaptation hypothesis: All that rational analysis can do is consider all available—though not all possible—competing methods for a given task and investigate their cognitive success. This is what the consequentialist account suggests.

Cognitive success, unlike optimal adaptation, thus behaves similarly to how Simon portrayed natural selection:

    The theory of natural selection is not an optimizing theory for two reasons. First, it can, at best, produce only local optima, because it works by hill‐climbing up the nearest slope. It has no mechanism for jumping from peak to peak… . Second, it selects only among the alternatives that are available to it. (1991, p. 29)

This brings us to Anderson's Step 5. This step presupposes the adaptationist thesis that human cognitive behavior is nearly optimally adapted (Anderson, 1991a, table 1, p. 473). As a general claim, such “evolutionary optimism” is difficult to defend, for several reasons. First, evolutionary selection sometimes produces suboptimal and even dysfunctional adaptations (Ridley, 1993, p. 343f). Second, although genetic evolution optimizes the biological reproduction rate, it is less clear how this process relates to cognition. Anderson acknowledged that evolutionary selection does not find a global optimum but merely a local one. However, there is a world of difference between a global and a local maximum: It can be as large as the difference between a sand hill and Mount Everest. All the constraints on cognitive processes dictated by the biological architecture of the human brain (see Jones & Love, 2011) are concealed in this difference.

In contrast to rational analysis, cognitive consequentialism does not imply cognitive adaptationism, though it is compatible with it. Obviously, the human brain is well adapted in many respects, but not in all. Therefore, we suggest that the view of rational cognition that appears most defensible and productive for contemporary cognitive science is that of a cognitive consequentialism that is not bound to strong adaptationist assumptions. In conclusion, the consequentialist account proposes to modify Anderson's Steps 4 and 5 as follows:

    4'. Derive the consequences of the available competing cognitive methods [given the output of (1)–(3)] and test their cognitive success.
    5'. Compare the locally optimal method [i.e., the output of (4')] with actual human behavior.
        5'.1 If they agree, recommend the locally optimal method and infer that human cognition is well adapted.
        5'.2 If they disagree, two cases are possible:
            5'.2.1 If human behavior outperforms the locally optimal method, search for better cognitive methods (and thus eventually explain human behavior): Backtrack to (4') and iterate.
            5'.2.2 If human performance is worse than the locally optimal method, search for local constraints on the mind's cognitive mechanisms that can explain the disagreement. Backtrack to (3), add these constraints, and iterate. At the same time, recommend the locally optimal method as a rational improvement on intuitive human cognition that can be learned through cognitive training.

In other words, at this point cognitive consequentialism potentially has educational implications.
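
Read as pseudocode, the modified Steps 4' and 5' amount to an iterative loop. The sketch below is our schematic rendering (names, interfaces, and the stopping rule are ours, not the authors'): candidate methods are scored by a supplied cognitive-success function and compared against observed human performance.

    # Schematic rendering of Steps 4' and 5' (our sketch, not the authors' code).

    def consequentialist_analysis(methods, success, human_success,
                                  find_better_methods, add_constraints,
                                  tolerance=0.01, max_iterations=100):
        """methods: available candidate methods; success: method -> score in [0, 1];
        human_success: observed score of actual human behavior."""
        best = None
        for _ in range(max_iterations):
            best = max(methods, key=success)        # Step 4': locally optimal method
            gap = human_success - success(best)
            if abs(gap) <= tolerance:               # Step 5'.1: agreement
                return best                         # human cognition looks well adapted
            if gap > 0:                             # Step 5'.2.1: humans outperform
                methods = find_better_methods(methods)   # backtrack to (4') and iterate
            else:                                   # Step 5'.2.2: humans underperform
                success = add_constraints(success)  # add local constraints (Step 3)
                # ...while recommending `best` as a trainable improvement
        return best

The loop terminates either in agreement (Step 5'.1) or after a fixed number of refinement rounds, mirroring the backtrack-and-iterate structure of the list above.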

5 Conclusion: New questions about rational cognition

The study of human cognition and its rationality seems inseparable from the question of how successful it is. In psychology and economics, the rationality of human cognition has often been equated with coherence, that is, with rules of internal consistency, often defined by propositional logic and probability theory (see Arkes et al., 2016). However, for decades, psychologists have disagreed over how well coherence‐based normative systems describe human cognition and which coherence‐based systems (logic, probability theory, or decision theory) should be granted the status of normative benchmarks for cognition (see our introduction). We have discussed the problems that arise when normative systems are justified by reference to a priori intuitions, as is typically the case. As an alternative and a potential response to some of these problems, we propose a consequentialist account of normative systems. The major tenets of this account are as follows:

    Traditional normativism fails because a priori intuitions are inadequate as justifications of norms of rational cognition.

    Traditional descriptivism fails because norms of rational cognition are inevitably needed as benchmarks for successful reasoning.

    Norms of rational cognition are better justified from a consequentialist perspective, that is, in terms of their cognitive success.

    The concept of cognitive success of a cognitive method assumes that all decision tasks can be reformulated as prediction tasks.

    Cognitive success is defined as the product of ecological validity and the method's applicability. Determining the available methods' cognitive success permits one to compare them on the same scale.

    Cognitive consequentialism is related to ecological rationality to the extent that cognitive success depends on the given cognitive task and a specific environment.

We hope and believe that this approach (or a similar consequentialist concept) offers a way to overcome the trite division of labor between the empirical study of the mind and philosophy. This approach raises new and interesting questions. For instance, what does cognitive success imply for research that has drawn strong conclusions about the (ir)rationality of human cognition (e.g., the heuristics‐and‐biases research program; Kahneman, 2011)? Or, assuming that the success of different cognitive methods depends on specific environments and that no method succeeds in all environments, will there be metamethods that are able to select the best method for the environment in question (see “meta‐induction”; Schurz, in press)? Finally, to what extent is cognitive success adaptive in an evolutionary sense, and how does the answer to this question influence the understanding of the relation between is (observed cognitive behavior) and ought (normatively recommended behavior)? We do not have answers to these and other questions, but we hope that we have convinced the reader that it is timely to ask these questions, thus leaving some skirmishes of the “rationality wars” behind us (Samuels, Stich, & Bishop, 2012) and turning the question of what rational cognition is into a less dogmatic and a more empirical one.

Voters frequently evaluate objective conditions through a perceptual screen, seeing a stronger economy & a more peaceful world when their party rules; these assessments are increasingly polarized

Partisanship, Political Awareness, and Retrospective Evaluations, 1956–2016. Philip Edward Jones. Political Behavior, March 23 2019. https://link.springer.com/article/10.1007/s11109-019-09543-y

Abstract: A long line of research shows that voters frequently evaluate objective conditions through a perceptual screen, seeing a stronger economy and more peaceful world when their party is in power. We know less about how and why these partisan perceptual differences have changed over recent history, however. This paper combines ANES measures of retrospective evaluations from 1956 to 2016 and shows that partisan differences (1) have increased significantly over the past few decades across all types of assessments; (2) are greatest, and have changed the most, amongst the most politically aware; and (3) closely track changes in elite polarization over this time period. The extent of partisan disagreement in retrospective evaluations is thus not constant, but rather contingent on attributes of the voter and the political context. Greater political awareness and more polarized politicians result in larger partisan perceptual differences, as the most engaged citizens are the most likely to receive and internalize cues about the state of the world from their party’s elites.

Keywords: Retrospective evaluations; Partisanship; Political awareness

I find that sharing a gender identity not only fails to unite women who are partisan rivals but, in fact, further highlights their rivalry

When Common Identities Decrease Trust: An Experimental Study of Partisan Women. Samara Klar. American Journal of Political Science, June 6 2018. https://doi.org/10.1111/ajps.12366

Abstract: How does sharing a common gender identity affect the relationship between Democratic and Republican women? Social psychological work suggests that common ingroup identities unite competing factions. After closely examining the conditions upon which the common ingroup identity model depends, I argue that opposing partisans who share the superordinate identity of being a woman will not reduce their intergroup biases. Instead, I predict that raising the salience of their gender will increase cross‐party biases. I support my hypotheses with a nationally representative survey of 3,000 adult women and two survey experiments, each with over 1,000 adult women. These findings have direct implications for how women evaluate one another in contentious political settings and, more broadly, for our understanding of when we can and cannot rely upon common identities to bridge the partisan divide.

Once more: We are aggressive, but that is nothing compared to many other species. Interview with Anthropologist Richard Wrangham

Interview with Anthropologist Richard Wrangham. Johann Grolle. Der Spiegel, March 22, 2019. http://www.spiegel.de/international/interview-with-anthropologist-richard-wrangham-a-1259252.html

'Those Who Obeyed the Rules Were Favored by Evolution'

British anthropologist Richard Wrangham believes our humanity began with the murder of a tyrant. In an interview with DER SPIEGEL, he explains why homo sapiens are so murderous, while also being among the most peaceful species.

DER SPIEGEL: Professor Wrangham, you're interested in aggression. Why did you go to the Ugandan jungle to research it?

Wrangham: The jungle is full of aggression. One of my favorite things is to go for a walk in the forest, close my eyes and just listen. I hear the trills of birds, the chirping of insects, the calls of monkeys, sometimes even the sound of an elephant. And what is it that is so soothing and pleasant for us? Most of it is produced by males shouting out their maleness, their dominance. In other words, it is the language of aggression.

DER SPIEGEL: In your studies of chimpanzees, did you always focus on aggression?

Wrangham: No. First, I studied how these animals feed -- which is a completely different subject. But when you follow a chimpanzee from dawn till dusk, the importance of aggression just hits you in the face. Compared to us humans, the rate of their aggression is enormously higher.

DER SPIEGEL: Can you give an example?

Wrangham: Take the behavior of males on the threshold of adulthood: They almost ritually attack one female after the other until each one shows signs of submission. Only when a male has achieved physical dominance over every single female is he able to enter the male hierarchy and compete there for the highest possible rank.

DER SPIEGEL: And does this brutality enable them to succeed?

Wrangham: Absolutely. Once a male is fully adult, he continues to attack females even when they give submissive signals to him. The male who beats a particular female the most is the most likely to be the father of her next offspring.

DER SPIEGEL: How do you feel when you watch something like this?

Wrangham: It upsets me. I do love chimpanzees, and I am fascinated by them. I appreciate the wonderful moments I experience with them. But they also have nasty sides. Watching a male beating up a female is horrendous -- just as it is horrendous watching these animals tearing the guts out of a monkey that is still alive. The essential violence of chimpanzees' carnivory is thoroughly off-putting. It is why I haven't eaten meat for 40 years.

DER SPIEGEL: Is it fair for a scientist to describe this behavior as "nasty"? Is aggression evil?

Wrangham: It would seem inhuman not to recognize that some of the chimpanzees' behaviors are deeply unpleasant. And is aggression evil? Yes, I think so, at least when it involves physical violence that inflicts pain. Violence is the opposite of virtue. I think that a major object of human endeavor and societal ambition should be to reduce violence.

DER SPIEGEL: In this sense, at least, we humans seem to have taken a step away from chimpanzees.

Wrangham: Yes. If you trapped 300 chimpanzees who did not know each other in a plane for eight hours, many of them probably wouldn't leave it alive. People, on the other hand, barely touch each other on a long-haul flight. The level of violence has been monitored in a group of hunters and gatherers in Australia. These Aborigines suffer from severe social upheaval, and alcohol abuse is high. Still, even under these conditions, the frequency of violence among these people is 500 to 1,000 times lower than among chimpanzees.

DER SPIEGEL: Though people are also capable of torturing each other, and our history is replete with war and genocide. Given this, how can you speak of the placidity of human nature?

Wrangham: You're right, it seems paradoxical. But it is important to understand that there are two different types of aggressiveness. Violence in war is mostly planned, deliberate and cold-blooded. The everyday aggression of chimpanzees, on the other hand, is spontaneous, short-tempered and born directly from the moment. Here we have one of the great mysteries of human nature: Why do we treat each other so peacefully in everyday life when we are capable of such a degree of deliberate cruelty?

DER SPIEGEL: Is this deliberate, planned form of aggression a peculiarity of humans?

Wrangham: Not at all. You find it among chimpanzees as well. Konrad Lorenz believed that animals do not kill each other. But he was proven wrong. I was part of Jane Goodall's team, which first observed how chimpanzees attacked members of neighboring groups to kill them deliberately. The discovery of warlike patterns of aggression among chimpanzees was one of the big surprises of our research.

DER SPIEGEL: In other words, we have inherited the brutal part of our nature from our ancestors, whereas the peaceful part distinguishes us from them?

Wrangham: Yes, that's one way of putting it.

DER SPIEGEL: You write that humans owe their placidity to their own domestication. What do you mean by that?

Wrangham: We humans exhibit a number of biological characteristics that are more typical of pets than of wild animals, including a very low rate of face-to-face aggression. The reason I attribute our peaceableness to our having been domesticated is that we share with our pets and farm animals some of these other characteristics, which we now call a domestication syndrome. Charles Darwin was already fascinated by this phenomenon. He studied domestic animals, and he noticed that they share a multitude of peculiarities not found in wild animals.

DER SPIEGEL: Like what?

Wrangham: Many pets have white spots in their fur. They often have floppy ears, a short face or a curved tail. All these traits are rare among wild animals, but common among pets. And the interesting thing is that humans didn't select their pets specifically for these traits.

DER SPIEGEL: How do you know? Maybe Stone Age farmers were fond of pigs with floppy ears or cows with spots.

Wrangham: We know it thanks to the ingenious experiments of the Russian geneticist Dmitri Belyaev. He bred silver foxes with only one trait in mind: He selected the tamest and most peaceful individuals from each generation. And behold, many of the other typical characteristics of pets emerged on their own.

DER SPIEGEL: Cuddly foxes sound cute. Have you seen them yourself?

Wrangham: No, unfortunately I haven't. But my student Brian Hare went to Novosibirsk, where Belyaev's team continues to work. He wondered why dogs understand human signals better than wolves. He assumed that it might be a result of targeted selection by humans. But then he examined Belyaev's foxes and found that they also know how to interpret human signals. So the ability to understand human signals seems to be another trait, like white patches of fur, that emerges as a side effect of selection for less aggression.

DER SPIEGEL: You claim humans are also domesticated. What makes you think that? We don't have white spots, or floppy ears, or a curly tail.

Wrangham: You're right. We have no tail, so it can't bend. But if you look at our skeleton, you will find a lot of peculiarities that are characteristic of pets. Four of them stand out compared to our ancestors: a shorter face; smaller teeth; reduced sex differences, with males becoming more female-like; and, finally, a smaller brain. This last development is particularly fascinating. Human evolution had otherwise been characterized by a continuous increase in brain size. But it turns out this trend has reversed in the last 30,000 years.

DER SPIEGEL: How could a package of traits like that develop when it was not under any selection pressure?

Wrangham: We are still not sure what biological mechanisms produce the domestication syndrome. But we have circumstantial evidence. It is noticeable, for example, that many of the domestication traits are typical for young animals ...

DER SPIEGEL: ... in other words, dogs resemble wolf pups, just as we resemble Neanderthals who never reached adulthood?

Wrangham: Yes. Young animals are usually characterized by a lower level of reactive aggression. One way nature might evolve reduced aggressiveness is by allowing creatures to reach adulthood while still being emotionally juvenile. All the other juvenile traits are then nothing but side effects of the reduction of aggression.

DER SPIEGEL: You said earlier that our brain began shrinking 30,000 years ago. Is this when humans started getting tamed?

Wrangham: No. We can follow the process of domestication pretty thoroughly in the fossil record. According to that, the development started about 300,000 years ago. Brain size only began to decrease at the very end.

DER SPIEGEL: Obviously, we domesticated dogs, horses and cats, but who domesticated us?

Wrangham: The word "domestication" is somewhat misleading. It implies a relationship with humans. But Belyaev's fox experiments show us that only the selection of non-aggressive behavior is important. Whether this selection happens in captivity or the wild doesn't matter. While some species have been domesticated by humans, others have been domesticated, in the sense of reducing their aggressiveness, on their own. We are one of the species that domesticated ourselves.

DER SPIEGEL: What kind of domesticated animals are there in the wild?

Wrangham: The best example can be found among our closest relatives: the bonobos. They look very similar to chimpanzees, but their skulls show the characteristics of domestication: a shorter face, smaller teeth, a smaller brain, and reduced differences between the sexes.

DER SPIEGEL: And their behavior is more peaceful?

Wrangham: Dramatically so. When a bonobo male attacks a female, she will call for help, and within minutes the male will face an alliance of females who put him in his place.

DER SPIEGEL: Female bonobos domesticated the males?

Wrangham: Yes, probably. The bonobos live in a habitat that allows females to travel together all the time, unlike chimpanzees. This has favored social alliances among the females.

DER SPIEGEL: Are bonobos the better chimpanzees?

Wrangham: They're much nicer to each other, that's true. But of course bonobos also have some dark sides. There was this guy in France who started a commune based on the principle of living the bonobo way. He ended up in prison for pedophilia.

DER SPIEGEL: What about humans? Did women civilize us men as well?

Wrangham: That seems unlikely. There are many mythological memories of an era in which power was in the hands of women, but today there is no such thing as matriarchy anywhere in the world, and we have no evidence that there ever was.

DER SPIEGEL: If it wasn't women, who tamed men?

Wrangham: Here we enter the terrain of speculation, because fossils don't tell us exactly what happened. What we have to do instead is to see how today's hunters and gatherers treat individuals that behave aggressively. There are, in fact, even in these generally peaceable peoples, some individuals who, like alpha chimpanzees, try to dominate the others by violence. How do the members of such a community react -- without prisons, without a military, without police? There is only one way for them to defend themselves against the determined perpetrator: He is executed. The killing is done by agreement among the other men in the society.

DER SPIEGEL: You argue that this is how aggressiveness was systematically eradicated from the gene pool of mankind?

Wrangham: Well yes, aggressiveness was reduced, even if it was not eradicated. Virtue seems to have evolved from something as violent as killing. But don't misunderstand. I am not advocating executions in today's world. Justice is fallible, so the death penalty inevitably leads to the killing of innocent people; furthermore, there is no evidence that it really effectively deters people from committing crimes.

DER SPIEGEL: It is quite a daring hypothesis to argue that the death penalty has made us what we are. How did you come up with it?

Wrangham: It was when I read a book by Christopher Boehm entitled "Hierarchy in the Forest". In this book, he describes how aggression in communities of hunters and gatherers is controlled by executions. My goodness, I thought when I read this, maybe this mechanism has even shaped our evolution?

DER SPIEGEL: If anyone who strives for power is killed, does that mean there are no chiefs in communities of hunters and gatherers?

Wrangham: Yes, hunter-gatherers are very egalitarian in their relationships among men.

DER SPIEGEL: So when the fathers of the American constitution famously proclaimed, "All men are created equal," they were really just reanimating a principle that has shaped our species over many millennia?

Wrangham: Yeah. Isn't that fascinating? And even the fact that the Declaration of Independence only mentions men, but not women, corresponds to the situation in communities of hunters and gatherers. Egalitarianism among them only applies to men. Women, on the other hand, are dominated by men.

DER SPIEGEL: And how do you think it all began? Why did the men of lower rank eventually join together to kill the tyrant?

Wrangham: Well, it's quite dangerous to rebel against the alpha male. The one who throws the first stone will risk his life. No lion or chimpanzee would dare to do that. Only humans were able to squat together and whisper: "Let's meet at the big stone, then attack and kill him."

DER SPIEGEL: In other words, language facilitated the rebellion of the underdogs?

Wrangham: Yes, because only by discussing and planning how to kill the tyrant could they be sure that they wouldn't be harmed themselves.

DER SPIEGEL: Unlike all animals, man is capable of moral action. In your book, you claim that this is another consequence of the beta male's uprising against the alpha?

Wrangham: Yes. At some point the community of beta men united against the powerful. Then they realized that from now on they themselves had the power to kill everyone in the group. They established rules for living together, and anyone who violated them had to fear death. In this way, those who obeyed the rules were favored by evolution.

DER SPIEGEL: Submission made us moral beings?

Wrangham: You put it in a handy phrase. It may be disillusioning. But I'm afraid it was like this: Morality was born in an effort not to be targeted by the justice of the community.

DER SPIEGEL: And little by little, cowardice wrote itself into our genes.

Wrangham: Yes. And the fossil record suggests that the domestication process even accelerated.

DER SPIEGEL: Is it now complete? Or is man still taming himself?

Wrangham: There is, at least, no indication that the process has come to a halt.

DER SPIEGEL: What will we humans look like after another 10,000 generations?

Wrangham: That's speculative, of course. But if the domestication process continues as it has, we will probably look even more childlike than we do nowadays. The juvenile features will be even more exaggerated: the high forehead, the big eyes, the narrow chin.

DER SPIEGEL: The ultimate in anti-aging.

Wrangham: That's one way of looking at it.

DER SPIEGEL: Professor Wrangham, thank you for this interview.

---
Check also The phylogenetic roots of human lethal violence. José María Gómez et al. Nature volume 538, pages 233–237 (October 13 2016), https://www.bipartisanalliance.com/2018/02/the-phylogenetic-roots-of-human-lethal.html

Attractive financial analysts have better performance (partly through privileged access to information from firm management); have better connections to institutional investors; receive more support from employers

Cao, Ying and Guan, Feng and Li, Zengquan and Yang, Yong George, Analysts’ Beauty and Performance (February 26, 2019). Management Science, https://ssrn.com/abstract=3341835

Abstract: We study whether sell-side financial analysts’ physical attractiveness is associated with their job performance. We find that attractive analysts make more accurate earnings forecasts than less attractive analysts. Moreover, more attractive analysts make stock recommendations that are more informative in the short run and more profitable in the long run. Further analyses reveal that attractive analysts attain their better job performance at least partly through their privileged access to information from firm management. For the sources of the beauty effect, we find that more attractive analysts gain more media exposure, have better connections to institutional investors, and receive more internal support from their employers. Additional evidence suggests that analysts’ physical appearance per se at least partly explains our findings. Overall, our study demonstrates that physical attractiveness has a profound impact on the job performance and information access of sell-side financial analysts.

Keywords: Beauty premium; financial analyst; analyst forecast
JEL Classification: G10, G23, M40

Liberals are prone to bias about relatively low-status groups & specifically are biased against information that portrays a high-status group more favorably

Low-status groups as a domain of liberal bias. Bo Winegard et al. March 2019. https://www.researchgate.net/publication/326144740

Abstract: Recent scholarship has challenged the long-held assumption in the social sciences that Conservatives are more biased than Liberals, contending that predominantly liberal social scientists overlooked liberal bias. Here, we demonstrate that Liberals are prone to bias about relatively low-status groups (e.g. Blacks, women), and specifically are biased against information that portrays a high-status group more favorably than a lower status group. Six studies (n=2,921) support this theory. Liberals consistently evaluated the same study as less credible when the results concluded that a high-status group (men and Whites) had higher intelligence than a lower status group (women and Blacks) than vice versa. Ruling out alternative explanations of Bayesian (or other normative) reasoning, significant order effects in within-subjects designs in Studies 5 and 6 (preregistered) suggest that Liberals think that they should not evaluate identical information differently depending on which group is said to have a superior quality, yet do so.


Check also how both conservatives & liberals resist & accept societal changes, depending on the extent to which they approve or disapprove of the status quo on a given issue; we challenge assumptions on general, context‐independent psychological differences underlying ideologies
Liberalism and Conservatism, for a Change! Rethinking the Association Between Political Orientation and Relation to Societal Change. Jutta Proch, Julia Elad‐Strenger, Thomas Kessler. Political Psychology, https://www.bipartisanalliance.com/2018/12/both-conservatives-liberals-resist.html

Friday, March 22, 2019

People who learn that a newspaper does not suppress information exhibit a lower demand for news from it; the idea that people read partisan news because they see those papers as more informative seems wrong

Do People Value More Informative News? Felix Chopra, Ingar Haaland, Christopher Roth. March 2, 2019. https://www.briq-institute.org/wc/files/people/chris-roth/working-papers/do-people-value-more-informative-news.pdf

Abstract: We examine how people’s perceptions of media bias affect their demand for news. Drawing on a large representative sample of the US population, we measure and experimentally manipulate people’s beliefs about the extent to which newspapers suppress information. Inconsistent with the “more-information-is-better principle,” we find that people who learn that a newspaper is less likely to suppress information have a lower demand for news from this newspaper. Our results demonstrate that people have a demand for biased news, consistent with a desire to confirm pre-existing beliefs.

Keywords: Information, Belief polarization, Media Bias, News Consumption, Motivated Beliefs

---
1 Introduction

What drives people’s demand for news? A core principle in economics is that more information is always better. While people’s demand for news articles should thus be strictly increasing in the informativeness of the news, a large literature has documented that newspapers report news in a biased way by slanting their news stories towards the beliefs of their readers (Gentzkow and Shapiro, 2010). There are several ways to rationalize why people tend to read slanted news (Xiang and Sarvary, 2007). First, it could reflect a desire for better information, as people perceive news that is closer to their prior beliefs as more informative (Gentzkow and Shapiro, 2006). Second, it could reflect that people have other motives for reading the news that conflict with expanding their knowledge. For instance, people might receive utility from reading news that confirms their pre-existing beliefs (Golman et al., 2016; Loewenstein and Molnar, 2018).

Causally identifying people’s motivation for reading news is difficult. First, to understand people’s motivations for reading biased news articles, one needs data on subjective perceptions of biases in reporting. Second, one needs exogenous variation in these perceptions to rule out omitted variable bias and reverse causality. For example, people may distort their stated beliefs to justify their news consumption habits. Third, one needs to measure people’s demand for real-world news and their actual consumption of this news, holding constant their information set about news articles. We address these challenges by using an experimental approach with real news articles, which allows us to test whether consumers indeed value more informative news in a setting with high external validity.

Drawing on a large representative sample of Americans, we first elicit people’s beliefs about the extent to which the New York Times suppresses information. For that purpose, we tell our respondents that the Congressional Budget Office (CBO), Congress’s official nonpartisan provider of cost and benefit estimates for legislation, published a report about the “Trump Healthcare Plan” (the American Health Care Act of 2017). We then tell them that the CBO estimated that this plan would (i) decrease the federal deficit by $119 billion and (ii) leave 23 million more people uninsured. We truthfully tell our respondents that Republicans claimed that the plan would decrease the federal deficit—but not increase the number of people without health coverage—while Democrats claimed that the plan would not decrease the deficit and would increase the number of people without health coverage. Subsequently, we ask our respondents to estimate the percent chance that the New York Times reported only the figure on the number of uninsured people, only the figure on the deficit decrease, or both figures. This allows us to quantify people’s beliefs about the extent of media bias in the New York Times. To introduce exogenous variation in people’s perceptions of media bias, we inform a random subsample of our respondents that the New York Times reported both estimates from the CBO. Finally, we measure our respondents’ demand for news from the New York Times by asking them whether they would like to read an article in the newspaper about the Trump Tax Plan based on estimates from the CBO. The “more-information-is-better principle” predicts that the demand for news about the CBO should increase among respondents who learn that the newspaper is less likely to suppress information from CBO reports.

The key finding of this paper is that respondents who learn that the New York Times does not suppress information significantly reduce their demand for reading an article in this newspaper, by 3.4 percentage points. This corresponds to a reduction in the demand for news of 12 percent. The time spent reading the article does not vary significantly across treatment arms, suggesting that the treatment did not affect how carefully people read the article. The reduction in demand for news is driven by respondents who initially thought that the New York Times was more likely to suppress information and is absent for respondents with more accurate pre-treatment beliefs about the extent of media bias in the New York Times. Consistent with models of motivated beliefs, our results are driven by respondents who—in light of their prior beliefs about the direction of the bias in reporting and their political affiliation—have a stronger motive to avoid news from an unbiased source. For example, among Republican-leaning respondents the reduction in the demand for news is driven by those who initially thought that the New York Times is more right-wing biased.

We leverage two tailored measures of beliefs about newspaper reporting to shed light on mechanisms. We provide evidence that treated respondents significantly update their beliefs about the biasedness of the reporting of the New York Times. Our treated respondents are 6.9 percentage points more likely to think that the New York Times does not suppress any information about the CBO report on the Trump Tax Plan. Respondents are also 3.7 percentage points less likely to think that the New York Times did not cover a CBO report highlighting the negative budget consequences of granting citizenship to young undocumented immigrants. We also provide evidence that our results are inconsistent with a series of alternative explanations: Respondents do not update their beliefs about the technicality of the reporting, the complexity of the article, or the characteristics of the CBO. Several patterns in our data are inconsistent with alternative mechanisms, such as cognitive constraints, uncertainty about source quality, curiosity, and motives for diversifying news sources.

We contribute to the literature on media bias (Allcott and Gentzkow, 2017; DellaVigna and La Ferrara, 2015; DellaVigna and Kaplan, 2007; Enikolopov et al., 2011; La Ferrara et al., 2012; Gentzkow and Shapiro, 2006, 2010; Gentzkow et al., 2015, 2018; Gerber et al., 2009; Mullainathan and Shleifer, 2005; Qin et al., 2018) and the demand for slanted news (Durante and Knight, 2012; Garz et al., 2018). Gentzkow and Shapiro’s (2010) seminal work introduces a new index of media slant that measures the similarity of a news outlet’s language to that of a congressional Republican or Democrat. Their model-based estimates reveal that readers have a strong preference for like-minded news, but this pattern is consistent both with rational Bayesian updating about the informativeness of news (Gentzkow and Shapiro, 2006) and with a behavioral preference for belief confirmation (Golman et al., 2016). We contribute to this literature by providing the first causal evidence on the question of whether people value more informative news. Specifically, we provide evidence that people who learn that the New York Times does not suppress information exhibit a lower demand for news from this newspaper. Our results are inconsistent with the idea that people read partisan news because they perceive partisan newspapers as more informative, as proposed by Gentzkow and Shapiro (2006).


6 Conclusion

Our paper provides novel evidence on whether people value more informative news. The main finding is that respondents who learn that the New York Times does not suppress information reduce their demand for articles from this newspaper. This is inconsistent with the normative benchmark prediction of the “more-information-is-better principle.” Our results are driven by individuals with initially larger biases in beliefs about the extent of media bias, and by those who in expectation should receive the largest negative belief-utility shock when reading an unbiased article. Our empirical findings are consistent with models of motivated beliefs according to which people mainly consume news in order to confirm their prior beliefs, and inconsistent with models according to which people mainly consume news to receive better information. Our findings have important policy implications: Our evidence suggests that transparency about media bias might backfire and actually increase political belief polarization by shifting people’s consumption of news towards more biased sources.

Indefinite life extension: Men supported it more than women, whereas women reported greater belief in an afterlife

Women Want the Heavens, Men Want the Earth: Gender Differences in Support for Life Extension Technologies. Uri Lifshin et al. Journal of Individual Differences, March 21, 2019. https://doi.org/10.1027/1614-0001/a000288

Abstract. Efforts are being made in the field of medicine to promote the possibility of indefinite life extension (ILE). Past research on attitudes toward ILE technologies showed that women and more religious individuals usually have more negative attitudes toward ILE. The purpose of this research was to investigate whether gender differences in attitude toward indefinite life extension technologies could be explained by religiosity, afterlife beliefs, and general attitudes toward science. In four studies (N = 5,000), undergraduate participants completed self-report questionnaires measuring their support for life extension as well as religiosity, afterlife beliefs, and attitude toward science (in Study 3). In all studies, men supported ILE more than women, whereas women reported greater belief in an afterlife. The relationship between gender and attitude toward ILE was only partially mediated by religiosity (Studies 2–4) and by attitudes toward science (Study 3).

Keywords: life extension, gender differences, religion, attitudes toward science

From 2018: Could Human Evolutionary Changes Be Behind Mental Disorders?

Could Human Evolutionary Changes Be Behind Mental Disorders? Charles Choi. Discover Magazine, August 9, 2018. http://blogs.discovermagazine.com/d-brief/2018/08/09/human-evolution-changes-caused-mental-disorders/#.XJTLerh7nIU

[...]

Scientists have long suspected that common ailments like lower back, knee and foot pain are likely due to the evolution of upright walking in the human family tree. And there may be a connection between the fact that 70 percent of adults develop impacted wisdom teeth and the evolutionary reduction of jaw size in the human lineage and modern changes in diet.

“Similarly, rapid expansion of brain size and cognitive abilities in humans has been key to our evolutionary success,” says study senior author David Kingsley, a developmental geneticist at Stanford University. However, at the same time, bipolar disorder and schizophrenia impact more than 3 percent of the world population. Kingsley reasoned this vulnerability to mental disorders might also stem from recent evolutionary changes controlling human brain size and structure.

To find out, Kingsley and his colleagues focused on DNA regions found in humans but not other animals. “We knew we might be onto something when a particular human-specific sequence was located right at one of the places that has previously been associated with common psychiatric diseases in human populations,” he says.

Hope For Treatment

Specifically, the scientists focused on the gene for a protein called CACNA1C, which helps direct the flow of calcium in and out of cells. Calcium influences the electrical activity of neurons and helps control the release of the neurotransmitters that neurons use to communicate with each other. Previous research has tied CACNA1C to risks for both schizophrenia and bipolar disorder, as well as anxiety, depression, obsessive-compulsive symptoms and autism.

The researchers focused on the so-called “non-coding” parts of this gene – these are the ones that don’t carry instructions for building the CACNA1C protein. When they compared the standard human genome used as a reference guide with the diverse range of human genomes from across the globe collected for the 1,000 Genomes Project, they discovered a significant variation in one particular region of the gene.

The research team’s analysis of the 1,000 Genomes Project’s data suggested that changes in this particular region could be increasing or decreasing the activity of the CACNA1C gene in ways that might influence risk for mental disorders. “Fifteen years after the initial sequencing of the human genome, we are still finding important pieces of the genome that have been missed in previous studies,” Kingsley says.

[...]

The Netherlands’ pensions have high participation, good retirement income, strong capitalization & sustainability; greater risk-taking & choice in managing pension savings could help w/self-employed

Self-Employment and Support for the Dutch Pension Reform. Izabela Karpowicz. Working Paper No. 19/64. March 19, 2019. https://www.imf.org/en/Publications/WP/Issues/2019/03/19/Self-Employment-and-Support-for-the-Dutch-Pension-Reform-46663

Summary: The Netherlands’ pension system is characterized by high participation rates, adequate retirement income, strong capitalization and sustainability. Pressure points are arising, however, due to population aging and untransparent intergenerational transfers inherent in the system. Moreover, the Dutch pension system needs to adapt to the changing labor market landscape, with an increasing share of workers in self-employment not covered by any pension arrangement. The government has proposed replacing collective defined-benefit schemes with personal accounts and abolishing uniform premia and constant accrual rates. The micro-data analysis shows that allowing greater risk-taking and freedom of choice in managing pension savings could crowd the self-employed into pension schemes.

The influence of daily news exposure on emotional states: Making us unhappy

Is the news making us unhappy? The influence of daily news exposure on emotional states. Natascha de Hoog, Peter Verboon. British Journal of Psychology, March 21 2019, https://doi.org/10.1111/bjop.12389

Abstract: There is evidence that exposure to negative news is making people feel bad, but not much is known about why this only affects some people or whether this also applies to everyday news exposure. This study examined the direct and indirect effects of daily news exposure on people's affective states. Using ecological momentary assessment (EMA), 63 respondents (24 men and 39 women) reported their news exposure and affective states five times a day for 10 days. In addition, personal relevance of the news and personality characteristics, neuroticism and extraversion, were assessed. Results showed that negative news perceptions were related to more negative affect and less positive affect, and these effects were moderated by personal relevance, but not personality characteristics. The implications of these outcomes are discussed.

Background

These days, news seems to be everywhere. People can be updated about the latest developments in the world all day, seven days a week. News is received not only through television, newspapers, and online news coverage, but also through social media. Even people who do not follow regular news updates can still be confronted with news events through the people they follow on social media (Kramer, Guillory, & Hancock, 2014). Even though news facts can have positive, neutral, or negative content, the majority of news coverage concerns topics with a negative valence (Haskins, Miller, & Quarles, 1984; Zillmann, Chen, Knobloch, & Callison, 2004), including topics like natural disasters, crime, the bad economy, terrorism, or war. Not only is the majority of news topics negative, people also tend to pay more attention to negative news (Zillmann et al., 2004). In addition, the majority of negative news coverage is directed towards people's emotions (Philo, 2002), and the sensationalism and confronting nature of news coverage have increased drastically over the last decades (Wang, 2012).

All this exposure to negative information about the state of the world is likely to have an impact on our state of mind, our moods, or even our general happiness (Galician, 1986). Surprisingly, not much research has been conducted on this topic. Even though there are many studies on news perception, the focus has mainly been on cognition, with studies looking at information processing and memory (Gerend & Sias, 2009), framing (Sun, Krakow, John, Liu, & Weaver, 2016), motivation (Lee & Chyi, 2014), and attitudes (Hollbert, Zeng, & Robinson, 2017), while the topic of emotions has received much less attention. When emotions do play a role, studies usually focus on emotions used in news (Brosius, 1993), rather than on emotions as an outcome of news exposure.

The studies available on the relationship between news exposure and affect do generally support the notion that exposure to news reports affects our moods and state of mind. More specifically, a direct relationship between negative news exposure and negative emotional states was found in a number of experimental studies (Balzarotti & Cicero, 2014; Johnston & Davey, 1997; Marin et al., 2012; McIntyre & Gibson, 2016; Szabo & Hopkinson, 2007; Unz, Schwab, & Winterhoff‐Spurk, 2008; Veitch & Griffitt, 1976). After being exposed to negative news reports, positive affect decreased, whereas negative affect, sadness, worries, and anxiety increased. Other studies have found indirect effects on psychological distress and negative affect through an increase in stress levels and irrational beliefs (McNaughton‐Cassill, 2001) or depression (Potts & Sanchez, 1994).

Non‐experimental research on the topic has mainly focused on the impact of very severe news events, like terrorist attacks. A study on the Boston Marathon terrorist attack (Holman, Garfin, & Silver, 2014) showed people's stress levels were higher after exposure to news about the attack for four weeks compared to stress levels right after the attack. Similarly, PTSD was found to increase after continuous news exposure about the 9/11 attacks (Ahern, Galea, Resnick, & Vlahov, 2004; Piotrkowski & Brannen, 2002). Similar findings are reported in studies on anthrax attacks (Dougall, Hayward, & Baum, 2005), children exposed to news about terror attacks (Pfefferbaum et al., 2002), and news coverage on infectious diseases like SARS (Hansen, 2009).

Thus, there is empirical evidence that exposure to negative news makes people feel bad, but why is that? Does this also apply to everyday news exposure? And does this affect everyone in the same way? The present research attempts to answer these questions by looking into the direct and indirect effects of daily news exposure on people's emotional states.
Theoretical background

Despite a number of studies on the impact of negative news exposure on emotional states, no theoretical explanation has been proposed for this effect. We postulate that cognitive appraisal theory might be a relevant framework in this context. Negative news can be seen as a stressor that needs to be evaluated and reacted to. As argued by cognitive appraisal theory (Ellsworth & Scherer, 2003; Lazarus & Folkman, 1984), when someone is exposed to a stressor, the stressor is appraised in order to elicit an appropriate emotional response. The cognitive appraisal process consists of two parts: (1) primary appraisal, in which one establishes the importance (severity and relevance) of the stressor, and (2) secondary appraisal, which assesses the ability to cope with the stressor (Lazarus & Folkman, 1984). In other words, when confronted with news reports, someone (1) evaluates the valence and severity of the stressor (e.g., negative and very serious) as well as the extent to which the news affects them (e.g., very relevant), and (2) assesses whether the news is something within or beyond their control (e.g., little control). Together, this determines the affective response that follows.

When it comes to the appraisal of news stories, we propose that it is mainly primary appraisal that matters. Most news events are likely to be perceived as outside the person's control (Kleemans, de Leeuw, Gerritsen, & Buijzen, 2017; Maguen, Papa, & Litz, 2008), making secondary appraisal less relevant to investigate, as it is unlikely to vary much from person to person. For example, news about wars, poverty, and the recession all concern things a recipient cannot change or influence. However, people do differ in how severe they perceive certain news items to be, and they especially differ in perceived personal relevance. This is underscored by later theories of cognitive appraisal (Lazarus, 1991; Smith & Kirby, 2000), which argue that it is mainly the extent to which a stressor is personally relevant that determines the intensity of the emotions it elicits. The importance of personal relevance has also been established in a broad range of studies showing it to be a key factor in attention to, processing of, and evaluation of information (Balzarotti & Cicero, 2014; De Hoog, 2013; Van 't Riet, Ruiter, & De Vries, 2012). More specifically, studies on news perception have found personal relevance to be a moderator of the effect of news valence on affective response (Balzarotti & Cicero, 2014; Marshall et al., 2007).

This corresponds with the notion from information processing theories (Chen, Duckworth, & Chaiken, 1999) that personal relevance is a crucial factor in determining how critically and intensively information is processed and evaluated. In dual process models (Evans & Frankish, 2012), as well as in later versions of cognitive appraisal theory (Lazarus, 1991), the relationship between cognitions and affect is seen as a continuous bidirectional process, wherein cognitions about information affect emotions, which in turn affect cognitions about the information. People who are exposed to similar news information on a daily basis can end up in a downward spiral in which appraisals lead to negative affect and negative affect leads to still more negative appraisals of the news, which might explain why studies on continuous exposure to news about terrorist attacks found that people felt worse after weeks of exposure than just after the fact (Ahern et al., 2004). It also corresponds with studies showing that people who are anxious or depressed are more likely to focus on negative information or information that matches their mental state (Davey & Wells, 2006), which in turn only increases their anxiety or depression. It should be pointed out that some studies have found the opposite effect, with people choosing to read news stories that run contrary to their current mood (Biswas, Riffe, & Zillmann, 1994; Kaspar, Ramos Gameiro, & König, 2015).

Even though daily exposure to negative news can affect people negatively, not everyone is affected in the same way. While some people feel the burden of all that is wrong in the world, others seem able to brush it off and remain rather unaffected emotionally by the media they consume (Valkenburg & Peter, 2013). Individual differences in the cognitive appraisal process can partly explain this (Gross & John, 2003; Kuppens & Tong, 2010; Scherer, 2001), as studies have shown that people with certain traits appraise situations differently and have dissimilar affective responses to stressors (Bolger & Schilling, 1991; Scheier & Carver, 1985; Tong, 2010).

Two personality characteristics that are especially relevant when it comes to appraisal of and reactions to negative news are neuroticism (Bolger & Schilling, 1991; Tong, 2010) and extraversion (Gallagher, 1990; Rafienia, Azadfallah, Fathi‐Ashtiani, & Rasoulzadeh‐Tabatabaiei, 2008). Neuroticism is the general tendency to react in an anxious and negative manner to everyday stressors. It has been linked to heightened negative affect, anxiety, and fear, as well as generally lower well‐being. In addition, neuroticism has been shown to negatively affect the primary appraisal process (Oliver & Brough, 2002), with people high in neuroticism reacting more strongly and negatively to stressors than people low in neuroticism (Bolger & Schilling, 1991; Tong, 2010). Thus, it was expected that they would perceive news as more negative and feel more personally affected by it. Extraverts are known to be social, impulsive, optimistic, and easy‐going (Sanderman, Arrindell, Ranchor, Eysenck, & Eysenck, 2012). More specifically, extraverts report higher well‐being and experience more positive affect and less negative affect than introverts (Gallagher, 1990; Stafford, Ng, Moore, & Bard, 2010). In addition, extraversion is related to lower stress and fear levels (Penley & Tomaka, 2002). Indirectly, extraversion has been shown to moderate the affective processing of information as well as the influence of affect on cognition (Rafienia et al., 2008; Stafford et al., 2010). Thus, it was expected that extraverts would perceive news as less negative.

The present research

So far, studies have shown that exposure to negative news reports can negatively affect one's emotional state, but these studies have mainly been experimental in nature or have focused on very serious events, like terrorist attacks. Not much is known about the effect of daily exposure to everyday news and why some people are more affected by news exposure than others. More research is needed into the possible negative effects of daily news exposure and the conditions under which they occur. Therefore, the present research looks at the direct and indirect effects of daily news exposure on people's emotional states.

The design of the study was derived from ecological momentary assessment (EMA) methodology (Conner, Tennen, Fleeson, & Barrett, 2009), and, to our knowledge, this is the first study to look at the effects of news perception on emotional states using an intensive longitudinal design. EMA employs a structured diary‐type set‐up to assess people's thoughts, moods, and exact context in real time over a certain period, and it has been shown to be very effective in capturing people's daily reality (Myin‐Germeys et al., 2009). Its benefits include minimizing recall bias compared to traditional retrospective assessments of mood and emotional states. In addition, compared to experimental studies, this method increases ecological validity while still allowing causal effects to be assessed.

The aim of the present study was to examine whether daily exposure to negative news would negatively affect people's emotional states. It was also explored whether personal relevance, extraversion, and neuroticism moderated this effect. We expected daily exposure to negative everyday news to affect emotional states. More specifically, we expected a positive relationship between how negative the news was perceived to be and negative affect (and a negative relationship for positive affect; hypothesis 1). In addition, we expected the impact of negative news on emotional states to be stronger when personal relevance is high (hypothesis 2), and for people who score high on neuroticism (hypothesis 3) or low on extraversion (hypothesis 4).
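To make the moderation structure behind these hypotheses concrete, the sketch below fits a random-intercept multilevel model of the kind typically used for EMA data, on simulated data. This is a minimal illustration under assumed variable names (pid, news, rel, neu) and invented effect sizes, not the authors' actual analysis or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate long-format EMA data: one row per person-day (assumed structure).
rng = np.random.default_rng(1)
n_people, n_days = 100, 10
pid = np.repeat(np.arange(n_people), n_days)
news = rng.normal(size=n_people * n_days)           # perceived news negativity
rel = rng.normal(size=n_people * n_days)            # personal relevance
neu = np.repeat(rng.normal(size=n_people), n_days)  # trait neuroticism (person level)
# Build H1 (main effect of negativity) and H2 (relevance moderation) into the outcome.
neg_affect = (0.4 * news + 0.2 * news * rel + 0.5 * neu
              + rng.normal(size=n_people * n_days))
df = pd.DataFrame(dict(pid=pid, news=news, rel=rel, neu=neu,
                       neg_affect=neg_affect))

# Random intercept per person; the interaction terms test the moderation
# hypotheses (extraversion would enter the model the same way as neuroticism).
model = smf.mixedlm("neg_affect ~ news * rel + news * neu",
                    df, groups=df["pid"]).fit()
print(model.summary())
```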


Discussion

The present study adds to the growing body of literature on the effects of media exposure on well‐being and emotional states. The main aim of this study was to examine daily, everyday news exposure by testing whether negative and positive affect were influenced by daily news perceptions. In addition, we tested whether the personal relevance of the news moderated the effect of news perception and whether the individual difference variables neuroticism and extraversion were relevant in these associations. As expected, when daily news was perceived as more negative, people reported more negative affect and less positive affect. This corresponds with previous experimental studies (Balzarotti & Cicero, 2014; McIntyre & Gibson, 2016; Szabo & Hopkinson, 2007), as well as cross‐sectional and longitudinal studies on severe news events (Ahern et al., 2004; Dougall et al., 2005; Holman et al., 2014). The present results add to these findings by showing that the same effects emerge for daily exposure to everyday news. Thus, news does not have to be very severe or shocking for people to be affected by it emotionally.

In addition, it was found that when the personal relevance of the news was high, reported negative affect also tended to be higher, stressing the importance of personal relevance in general and in the appraisal of news in particular (Balzarotti & Cicero, 2014; De Hoog, 2013; Lazarus, 1991; Smith & Kirby, 2000). Moreover, as expected, the personal relevance of the news moderated the association of news valence with both reported negative affect and reported positive affect, with negative news having a stronger impact on affect when personal relevance was high. This is in line with studies on news perception showing personal relevance to be an important moderator (Balzarotti & Cicero, 2014; Marshall et al., 2007).

These findings support cognitive appraisal theory (Ellsworth & Scherer, 2003; Lazarus, 1991; Lazarus & Folkman, 1984) as a relevant framework for explaining the effect of news perception on emotional states. As we postulated, when people are exposed to news, primary appraisal takes place, in which they assess the severity and relevance of the news, which in turn shapes the emotional response. As the findings of the present study show, the more severe the news was perceived to be and the higher its perceived personal relevance, the stronger the affective response. Although not the main focus of our study, additional analyses also supported our reasoning that when it comes to everyday news, secondary appraisal in the form of coping with the stressor plays a much smaller role, as most news stories are seen as outside the person's control. Indeed, aside from a small direct effect of coping on positive affect, no direct or indirect effects of coping on affect were found. Following the reasoning of cognitive appraisal theory, this implies that for people to be less affected by news exposure, the news either needs to be perceived as less severe or as more under people's control. One way to achieve this could be for the media to stop stressing the negativity and severity of daily news and to provide more information about how people could cope with it, a concept recently described as constructive journalism (McIntyre & Gibson, 2016). Even though viewers might not have much control over the news, they do have control over how they cope with their emotional responses. Further studies should therefore look into the role of emotion‐focused coping in news exposure.

Because not everyone is affected in the same way by news exposure (Valkenburg & Peter, 2013), and individual differences seem to play an important role in this (Gross & John, 2003; Kuppens & Tong, 2010; Scherer, 2001), we explored the importance of two personality characteristics, namely neuroticism and extraversion (Bolger & Schilling, 1991; Gallagher, 1990; Tong, 2010). Neuroticism had a relatively large effect on both affect measures: people with higher neuroticism scores reported more negative and less positive affect. However, even though neuroticism had a large effect on affect in general, it did not moderate the effect of news exposure on affect, nor did it affect perceptions of personal relevance. In addition, extraversion was a moderator only for positive affect. Even though previous studies have established the role of both personality factors in affective responses (Rafienia et al., 2008; Stafford et al., 2010), neither seems to have a strong effect on people's news perception. Extraversion led people exposed to negative news to retain more positive affect, but not to report less negative affect. This seems to imply that extraverts have the same negative emotional response to negative news as everyone else; they just do not let it affect their positive emotions. Neuroticism simply makes people experience more negative affect in general (Bolger & Schilling, 1991).
Limitations and recommendations

Even though the results of this study provide important insight into the effect of daily, everyday news exposure on affective responses, some limitations need to be mentioned. First, a convenience sample was used, limiting the generalizability of the results. The sample had a representative distribution of gender and age, but it mainly included people with higher education and was thus not very representative of the Dutch population. Future studies should attempt to use a more representative sample, especially to establish news effects among people with lower education. Second, even though we used an intensive longitudinal design (Conner et al., 2009), which is known to capture people's daily experiences effectively, minimize recall bias, and offer more ecological validity than experimental studies (Myin‐Germeys et al., 2009), it is also a very intensive research method that demands considerable investment from participants. As a consequence, compliance with study instructions in EMA studies is known to be lower than in cross‐sectional surveys. However, enough data points were still available to detect moderate‐to‐large effects and produce valid results with ESM data (Delespaul, 1995). Third, because we wanted to limit the burden on participants, we restricted the number of items used to measure the relevant constructs. Even though some of these measures have been validated (Van der Steen et al., 2017) or appear to be reliable, we cannot be certain that personal relevance, which was assessed with a single item, was measured reliably. Future studies should use a more extensive and reliable measure of personal relevance.

This study is, to our knowledge, the first to look at the effect of everyday news exposure using an intensive longitudinal design (Conner et al., 2009). More research should be conducted using these or similar designs in order to truly capture the continuous nature of news exposure. These days, people do not just read or watch single news reports; they are constantly exposed to news information, and the research designs we use should reflect that. In addition, more research is needed into possible moderating or mediating factors. A clear picture emerging from this study, as well as from previous studies, is that news exposure can negatively affect our moods; however, not enough is known about why some people are more affected than others.

So far, we know that personal relevance is an important factor, but more individual difference measures need to be explored to get a fuller picture. Some interesting variables to consider include traits that could affect how the news is perceived, such as locus of control (Bollini, Walker, Hamann, & Kestler, 2004) or optimism (Forgeard & Seligman, 2012), and variables specifically related to cognitive appraisal and emotional responses, such as coping style (Ben‐Zur, 2009), affective self‐regulatory efficacy (Bandura, Caprara, Barbaranelli, Gerbino, & Pastorelli, 2003), or emotion regulation (Gross & John, 2003). Besides individual differences, social influences should be considered. How news is received and perceived has a lot to do with one's social surroundings, including indirect news exposure through social media (Kramer et al., 2014). Surprisingly, relatively little research has been done on the role of social influence, such as peer groups or social identity, in the effects of media exposure (Valkenburg, Peter, & Walther, 2016).

In conclusion, the present study showed the effect of daily news exposure on negative and positive affect and explored possible moderators. Negative news perception is related to more negative affect and less positive affect, and these effects are moderated by personal relevance. Thus, daily exposure to everyday news makes people feel bad, especially when they consider the news personally relevant. These results imply that we need to look more carefully at the way (negative) news is presented in the media, as well as at the frequency of exposure to the news, in order to prevent people from being negatively affected by it.

The present study showed that having a happier spouse is associated not only with a longer marriage but also with a longer life

Having a Happy Spouse Is Associated With Lowered Risk of Mortality. Olga Stavrova. Psychological Science, March 21, 2019. https://doi.org/10.1177/0956797619835147

Abstract: Studies have shown that individuals’ choice of a life partner predicts their life outcomes, from their relationship satisfaction to their career success. The present study examined whether the reach of one’s spouse extends even further, to the ultimate life outcome: mortality. A dyadic survival analysis using a representative sample of elderly couples (N = 4,374) followed for up to 8 years showed that a 1-standard-deviation-higher level of spousal life satisfaction was associated with a 13% lower mortality risk. This effect was robust to controlling for couples’ socioeconomic situation (e.g., household income), both partners’ sociodemographic characteristics, and baseline health. Exploratory mediation analyses pointed toward partner and actor physical activity as sequential mediators. These findings suggest that life satisfaction has not only intrapersonal but also interpersonal associations with longevity and contribute to the fields of epidemiology, positive psychology, and relationship research.

Keywords: life satisfaction, mortality, dyadic analyses, couples, open materials

Research has consistently shown that life satisfaction is associated with longevity (for a review, see Diener & Chan, 2011). For example, meta-analyses of long-term prospective studies have shown that higher life satisfaction predicts lower risk of mortality over decades (Chida & Steptoe, 2008). Although this literature has demonstrated an intrapersonal effect of life satisfaction (i.e., an effect of an individual’s life satisfaction on that individual’s mortality), it is less clear whether life satisfaction has interpersonal effects as well. In particular, does an individual’s life satisfaction affect the mortality risk of his or her spouse?

Epidemiological studies have demonstrated the importance of contextual characteristics (e.g., neighborhood characteristics; Bosma, Dike van de Mheen, Borsboom, & Mackenbach, 2001) for individuals’ longevity. Adopting the interpersonal perspective (Zayas, Shoda, & Ayduk, 2002), I propose that the characteristics (e.g., life satisfaction) of the people who are close to an individual can also make up that person’s context and, potentially, affect his or her life outcomes. For example, life satisfaction has been associated with healthy behaviors such as physical exercise (Kim, Kubzansky, Soo, & Boehm, 2017). Given that spouses tend to affect each other’s lifestyle (Jackson, Steptoe, & Wardle, 2015), having a happy spouse might increase one’s likelihood of engaging in healthy behaviors. In addition, happiness has been associated with helping behavior (O’Malley & Andrews, 1983). Hence, having a happy partner might be related to experiencing support from that partner and, consequently, might improve one’s health and longevity.

Indeed, a recent study found that spousal life satisfaction was associated with individuals’ self-rated health (Chopik & O’Brien, 2017), although such interpersonal effects were not detected for doctor-diagnosed chronic conditions (Chopik & O’Brien, 2017) or for inflammation markers (Uchino et al., 2018). None of the existing studies have explored whether spousal life satisfaction predicts individuals’ mortality. The present research examined this question using panel data of approximately 4,400 elderly couples in the United States. In addition, a set of exploratory mediation analyses tested the role of partner support as well as partner and actor physical activity as potential mechanisms for such an association.

Finally, it is possible that the level of spousal life satisfaction per se matters much less than the extent to which it is similar to individuals’ own life satisfaction. A growing body of research has underscored the level of congruence between partners’ dispositional characteristics as an important factor for their relationship and life outcomes (Dyrenforth, Kashy, Donnellan, & Lucas, 2010). Therefore, in an additional set of analyses, I explored whether the level of actor-partner similarity in life satisfaction was associated with actor mortality.
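To give a flavor of how a survival analysis with a partner-level predictor can be set up, here is a minimal sketch using the lifelines library on simulated couples. The variable names, the two-predictor specification, and the simulated effect sizes are assumptions; the published model is fully dyadic and adjusts for income, baseline health, and sociodemographic characteristics.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 4374                          # number of actors, echoing the sample size
actor_ls = rng.normal(size=n)     # actor life satisfaction (z-scored, simulated)
partner_ls = rng.normal(size=n)   # partner life satisfaction (z-scored, simulated)

# Simulate exponential survival times whose hazard falls with both predictors;
# log(0.87) per SD of partner_ls mimics the reported ~13% lower mortality risk.
hazard = 0.05 * np.exp(np.log(0.87) * partner_ls + np.log(0.90) * actor_ls)
time = rng.exponential(1.0 / hazard)
event = (time <= 8.0).astype(int)  # deaths observed within the 8-year follow-up
time = np.minimum(time, 8.0)       # administrative censoring at 8 years

df = pd.DataFrame(dict(T=time, E=event, actor_ls=actor_ls,
                       partner_ls=partner_ls))
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
cph.print_summary()  # exp(coef) for partner_ls should land near 0.87
```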

Discussion

Previous research has shown that individuals’ career success and relationship and life satisfaction are predicted by their spouses’ dispositional characteristics (Dyrenforth et al., 2010; Solomon & Jackson, 2014). The present research suggests that spouses’ reach might extend even further. A dyadic survival analysis using the data from 4,374 couples showed that having a spouse who was more satisfied with life was associated with reduced mortality.

What explains this interpersonal effect of life satisfaction? Exploratory mediation analyses established partner and actor physical activity as sequential mediators. One partner’s life satisfaction was associated with his or her increased physical activity, which in turn was related to increased physical activity in the other partner, which predicted that partner’s mortality. Yet, given the correlational nature of these data, these results should be interpreted with caution.
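As a crude sketch of the serial ("sequential") mediation logic described here, the snippet below uses the product-of-coefficients approach on simulated data. All variable names and the linear/logistic simplification are assumptions; the paper's outcome model is a survival model, and a proper analysis would bootstrap the indirect effect rather than multiply point estimates across different scales.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 4000
partner_ls = rng.normal(size=n)                      # partner life satisfaction
partner_act = 0.3 * partner_ls + rng.normal(size=n)  # partner physical activity
actor_act = 0.4 * partner_act + rng.normal(size=n)   # actor physical activity
p_death = 1 / (1 + np.exp(2.0 + 0.5 * actor_act))    # more activity, lower risk
death = rng.binomial(1, p_death)
df = pd.DataFrame(dict(partner_ls=partner_ls, partner_act=partner_act,
                       actor_act=actor_act, death=death))

# Path a1: partner life satisfaction -> partner activity
a1 = smf.ols("partner_act ~ partner_ls", df).fit().params["partner_ls"]
# Path a2: partner activity -> actor activity (controlling partner LS)
a2 = smf.ols("actor_act ~ partner_act + partner_ls",
             df).fit().params["partner_act"]
# Path b: actor activity -> mortality (logit as a stand-in for the survival model)
b = smf.logit("death ~ actor_act + partner_act + partner_ls",
              df).fit().params["actor_act"]
print("serial indirect effect (a1 * a2 * b):", a1 * a2 * b)
```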

It is noteworthy that the effect of spousal life satisfaction was comparable in size to the effects of other well-established predictors of mortality, such as education and income (in the present study, HRs = 0.90 for partner life satisfaction, 0.93 for household income, and 0.91 for actor education). In fact, spousal life satisfaction predicted mortality as strongly as (and even more robustly than) an individual’s own life satisfaction and as strongly as basic personality traits, such as neuroticism and extraversion, predicted mortality in previous work (Jokela et al., 2013).
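For readers less familiar with survival models, a hazard ratio maps onto a percent change in mortality risk in a standard way (general Cox-model arithmetic, not specific to this paper):

\[
\mathrm{HR} = e^{\beta}, \qquad \text{percent change in hazard} = (\mathrm{HR} - 1) \times 100\%.
\]

So the HR of roughly 0.87 behind the abstract's headline figure corresponds to \((0.87 - 1) \times 100\% = -13\%\), that is, a 13% lower mortality risk per standard deviation of spousal life satisfaction, and the HR of 0.90 quoted above corresponds to a 10% reduction.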

Although most existing research on predictors of mortality has focused nearly exclusively on individuals’ own characteristics, the present analyses revealed that the characteristics of a person who is close to an individual, such as a spouse, might be an equally important determinant of that individual’s mortality. Continuing this line of research, future studies might explore whether the interpersonal effect of life satisfaction on mortality is restricted to (marital) dyads or whether it extends to larger social networks.

To conclude, happiness is a desirable trait in a romantic partner, and marriage to a happy person is more likely to last than is marriage to an unhappy person (Lucas, 2005). The present study showed that having a happier spouse is associated not only with a longer marriage but also with a longer life.


[...]


Alcohol Use May be Beneficial after all
If one restricts the focus to alcohol-related illnesses, it makes sense that any level of alcohol consumed increases the rate of these illnesses. Even if alcohol always increases the risk of alcohol-related diseases, it may still be associated with a boost in overall health.
This would explain why wealthy individuals, and affluent countries, both consume more alcohol and have a longer life expectancy.
In my own unpublished analysis of the connection between alcohol and life expectancy at birth, I found no evidence that countries with a higher proportion of drinkers, or with higher alcohol consumption per person, paid a price in lost life expectancy.
When the analysis was restricted to the wealthier half of countries (which drink more), I found that those countries that consumed more alcohol had a significantly higher life expectancy (even with national wealth and religion statistically controlled). Residents of countries where more of the people drank alcohol also lived significantly longer.
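A minimal sketch of the kind of country-level regression described here, run on simulated data, since the analysis itself is unpublished. All variable names and magnitudes are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated country-level data standing in for the unpublished analysis.
rng = np.random.default_rng(4)
n = 90                                    # roughly the wealthier half of countries
log_gdp = rng.normal(10, 0.5, n)          # log GDP per capita (assumed control)
religiosity = rng.uniform(0, 1, n)        # share religious (assumed control)
alcohol = 2 + 0.8 * (log_gdp - 10) + rng.normal(0, 1, n)  # litres per capita
life_exp = (70 + 3 * (log_gdp - 10) + 0.5 * alcohol
            - 2 * religiosity + rng.normal(0, 1, n))

df = pd.DataFrame(dict(life_exp=life_exp, alcohol=alcohol,
                       log_gdp=log_gdp, religiosity=religiosity))

# Life expectancy regressed on alcohol consumption with wealth and
# religiosity statistically controlled, mirroring the claim in the text.
m = smf.ols("life_exp ~ alcohol + log_gdp + religiosity", df).fit()
print(m.params["alcohol"], m.pvalues["alcohol"])
```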
[...]

Alcohol and Health: Despite the controversy, abstinent countries have the biggest health problems

Alcohol and Health: Controversy Continues. Nigel Barber. Psychology Today, Mar 21 2019. https://www.psychologytoday.com/intl/blog/the-human-beast/201903/alcohol-and-health-controversy-continues
Abstinent countries have the biggest health problems.

Excerpts:

We often hear how many people die of alcohol-related diseases. Such research considers only the pathology and ignores the possibility that alcohol has health benefits. Such benefits appear to be enjoyed mainly by the affluent.

In an earlier post, I argued that very damaging drug use is selected against. The best example of this is the phenomenon of genetic alcohol intolerance in Asian populations that had been bedeviled by excessive alcohol use thanks to the easy availability of homemade rice wine (1).

Learning also matters. Tobacco use declines via social learning once its adverse health effects become widely known (2). Why would so many people drink alcohol if it is so bad for health?

Alcohol Use and Adaptation

Alcohol is consumed in most countries, and by over 40 percent of the population worldwide. Recent research, published in The Lancet, found that any level of alcohol consumption increases morbidity and mortality from alcohol-related diseases.

Many scholars argue that this is another case of human behavior going badly off the rails in modern societies that are very different from ancestral environments to which we are supposedly adapted (2).

Yet, this picture of human limitations may be excessively bleak. Humans and other mammals are a great deal more adaptable to their current environments than this approach suggests. Moose growing up in locations where wolves are extinct lose all fear of their ancestral arch enemy.

Humans are more flexible than other species and quickly learn to avoid foods and drugs that are harmful, including highly addictive drugs such as tobacco (2).

Alcohol has complex health effects, and there may not be a simple linear effect whereby increasing alcohol consumption undermines health, as The Lancet study concludes.

The U-shaped Function

Research on cardiovascular disease found a U-shaped relationship between illness and alcohol consumption (3). This means that people who drink unusually little have worse health than those who consume a moderate amount, whereas heavy consumption is associated with a heavy health cost.
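A standard way to test for such a U shape is to add a quadratic term to the regression: a significantly positive coefficient on the squared term, with the curve's minimum inside the observed range, is consistent with a U. A minimal sketch on simulated data (all names and magnitudes are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a U-shaped risk curve: risk falls with light drinking,
# then rises again at heavy consumption.
rng = np.random.default_rng(5)
n = 1000
drinks = rng.uniform(0, 10, n)   # weekly consumption (illustrative units)
risk = 5 - 1.2 * drinks + 0.15 * drinks**2 + rng.normal(0, 1, n)
df = pd.DataFrame(dict(drinks=drinks, risk=risk))

m = smf.ols("risk ~ drinks + I(drinks**2)", df).fit()
b1, b2 = m.params["drinks"], m.params["I(drinks ** 2)"]
print("quadratic term:", b2)                 # positive -> U shape
print("estimated nadir at", -b1 / (2 * b2), "drinks")
```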

These findings are inconsistent with the conclusion that alcohol in any amount is harmful.

Yet, there is a way in which the contradiction could be resolved. Even if alcohol is always toxic, its use in moderate amounts could have beneficial effects for health if (a) it facilitates social interactions and thereby reduces isolation and increases bonding and social support and (b) the beneficial consequences outweigh the toxicity costs.

Hence wealthy people consume more alcohol than average but also enjoy much better health and longevity than poorer segments of the population.

Who Is Prejudiced, and Toward Whom? The Big Five Traits and Generalized Prejudice

Who Is Prejudiced, and Toward Whom? The Big Five Traits and Generalized Prejudice. Jarret T. Crawford, Mark J. Brandt. Personality and Social Psychology Bulletin, March 21, 2019. https://doi.org/10.1177/0146167219832335

Abstract: Meta-analyses show that low levels of Openness and Agreeableness correlate with generalized prejudice. However, previous studies narrowly assessed prejudice toward low-status, disadvantaged groups. Using a broad operationalization of generalized prejudice toward a heterogeneous array of targets, we sought to answer two questions: (a) Are some types of people prejudiced against most types of groups? and (b) Are some types of people prejudiced against certain types of groups? Across four samples (N = 7,543), Openness was very weakly related to broad generalized prejudice, r = −.03, 95% confidence interval (CI) [−.07, −.001], whereas low Agreeableness was reliably associated with broad generalized prejudice, r = −.23, 95% CI [−.31, −.16]. When target characteristics moderated relationships between Big Five traits and prejudice, they implied that perceiver–target dissimilarity on personality traits explains prejudice. Importantly, the relationship between Agreeableness and prejudice remained robust across target groups, suggesting it is the personality trait orienting people toward (dis)liking of others.

Keywords: Big Five, agreeableness, openness, prejudice, generalized prejudice
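The pooled correlations and confidence intervals reported in the abstract are the sort of quantities obtained by combining per-sample correlations, for example via Fisher's r-to-z transformation. Below is a minimal sketch with invented per-sample values; only the totals echo the paper (four samples, N = 7,543 overall), and the actual per-sample results are not reproduced here.

```python
import numpy as np

# Invented per-sample correlations and sizes (illustrative only).
rs = np.array([-0.02, -0.05, -0.01, -0.04])
ns = np.array([1800, 2100, 1600, 2043])

z = np.arctanh(rs)                  # Fisher r-to-z transform
w = ns - 3                          # inverse-variance weights, var(z) = 1/(n-3)
z_bar = np.sum(w * z) / np.sum(w)   # fixed-effect pooled estimate
se = np.sqrt(1.0 / np.sum(w))
ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])
print("pooled r:", np.tanh(z_bar), "95% CI:", ci)
```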