Sunday, May 9, 2021

Weight systems and the emergence of the first Pan-European money

Nicola Ialongo et al., A small change revolution. Weight systems and the emergence of the first Pan-European money, Journal of Archaeological Science (2021). DOI: 10.1016/j.jas.2021.105379

Highlights

• Metal trade in Europe increases in the course of the Bronze Age.

• Systematic fragmentation of metal objects increases in the course of the Bronze Age.

• The spread of weighing technology is correlated to the spread of fragmentation.

• Cosine Quantogram Analysis and Monte Carlo simulations show that metal fragments comply with weight systems.

• Metal fragments were likely used as money.

Abstract: In the Bronze Age (c. 2300–800 BC), European communities gave up their economic independence and became entangled in a continental trade network. In this paper, we will test the hypothesis that the adoption of a ‘Pan-European’ currency has favoured the development of such a network. We define a methodology to test the money-hypothesis in pre-literate economies, based on analogies with the material characters of metallic money in the Ancient Near East. The statistical properties of metals from European hoards are compared with those of balance weights, in order to test the following expectation: if they were used as money, complete objects and fragments are expected to comply with standard weight systems. The results meet the expectation, and indicate that bronze fragments possess the same statistical properties as hack-silver money in the Ancient Near East. The sample includes approximately 3000 metal objects, collected from two test-areas: Italy and Central Europe. The sample of balance weights includes all the items known to date for pre-literate Bronze Age Europe, collected within the framework of the ERC Project ‘Weight and Value.’


Popular version: Scrap for cash: Bronze Age witnessed revolution in small change across Europe (phys.org)

4. Discussion

4.1. Weight regulation

The results of the statistical analysis support the money-hypothesis for Late BA Europe, showing that metal objects were probably intentionally fragmented in order to comply with weight systems. The analogy with the Ancient Near East suggests that bronze fragments in Europe had the same function as silver fragments in Mesopotamia, i.e. they performed the basic functions of money. The joint analysis of balance weights and bronze objects suggests that monetary patterns of exchange existed in BA Europe, that they complied with a Pan-European index of value, and that they were based on the use of metals as standard media of exchange. The actual relevance of the phenomenon is difficult to quantify. For the time being, we can only observe that metal fragments were used as money frequently enough to leave measurable traces in the archaeological record; how frequently is not yet possible to define. Unlike proper balance weights – which produce sharp, neatly separated clusters of mass values (Ialongo and Rahmstorf, 2019) – bronze fragments produce small clusters that stand out from a diffuse background noise. FDA suggests that weight-regulated fragmentation is not very accurate. Based on the statistical analysis, we can infer that systematic fragmentation tends to produce fragments with mass values that are multiples of weight units, and that complete objects and ingots were cast with no specific mass prescription. Just as in the Mesopotamian world, compliance with weight systems is an indirect consequence of trade, rather than a pre-defined regulation. Assessing the actual relevance of the phenomenon will require further research and a larger number of regional samples.
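To make the method concrete, here is a minimal sketch of how a Kendall-style cosine quantogram with a Monte Carlo check can be computed; the fragment masses, the candidate-quantum range, and the randomisation scheme below are illustrative assumptions, not the authors' dataset or code.

```python
# Minimal sketch of Cosine Quantogram Analysis (CQA) with a Monte Carlo check.
# All values below are illustrative assumptions, not the paper's data or code.
import numpy as np

rng = np.random.default_rng(0)

def quantogram(masses, quanta):
    """phi(q) = sqrt(2/N) * sum_i cos(2*pi*m_i/q); peaks mark candidate units."""
    masses = np.asarray(masses, dtype=float)
    n = len(masses)
    return np.array([np.sqrt(2.0 / n) * np.cos(2 * np.pi * masses / q).sum()
                     for q in quanta])

# Hypothetical fragment masses (grams), loosely clustered on a ~9.3 g unit.
unit = 9.3
masses = unit * rng.integers(1, 12, size=300) + rng.normal(0, 0.4, size=300)

quanta = np.arange(5.0, 15.0, 0.05)   # candidate units to test, in grams
phi = quantogram(masses, quanta)
best_q = quanta[phi.argmax()]

# Monte Carlo: rescale each mass by a small random factor to destroy any quantal
# structure while keeping the overall size distribution, then compare peak heights.
null_peaks = np.array([
    quantogram(masses * rng.uniform(0.9, 1.1, size=masses.size), quanta).max()
    for _ in range(200)
])
p_value = (null_peaks >= phi.max()).mean()
print(f"best quantum ~ {best_q:.2f} g, peak phi = {phi.max():.2f}, p ~ {p_value:.3f}")
```

A peak in the quantogram that clearly exceeds the randomised peaks is the kind of signal the paper interprets as compliance with a weight unit.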

As to why fragments tend to assume regular mass values, we can propose a hypothesis. The typical trade situation – documented in Mesopotamian texts – involves two agents, each provided with their own weighing equipment (Peyronel, 2011). Metal fragments do not need to be weight-regulated, since each agent can easily quantify the value of the transaction. However, pre-weighed fragments would speed up the operations by avoiding the need to calculate a remainder. Breaking bronze objects is relatively simple, and does not require any particular metallurgical knowledge. Experimental results show that a socketed axe made of tin bronze (8%) breaks into pieces with three blows if heated to c. 560 °C, the temperature of a medium-sized campfire (Knight, 2017). The required temperature is lower if the copper is alloyed with lead (Knight, 2019). In order to produce accurate fragments, one can progressively break off small bits and repeat the weighing until the desired mass is obtained. The repetitiveness of the operation increases the skill of the operator, and skill increases accuracy. Fragmentation can be performed either before or during the transaction, depending on the situation. Archaeological evidence suggests that, at least in the Late BA, carrying small stocks of metal fragments was a rather common habit. Many burials in central and northern Europe include small boxes of organic materials, containing metal fragments and scraps, blunt tools suited for breaking metals and, sometimes, scale beams and balance weights (Pare, 1999; Roscio et al., 2011). One such box was recently identified among the finds of the Late BA battlefield in the Tollense Valley (Northern Germany, c. 1350–1200 BC) (Fig. 2), which attests that their use was not limited to the burial rite (Uhlig et al., 2019). These containers offer a suggestive picture of how metallic money could be carried around for everyday purposes.

For the complete objects, the absence of weight-regulation has a perfectly logical explanation. The subsample between 7 and 200 g includes every type of ornament, tool and weapon with the only exception of swords, which are exclusively represented in fragments. Since the size of a useable object is dictated by its function, there is no reason to assume that its mass should be regulated in order to fit a predetermined value.

CQA does not give significant results for ingots and ingot fragments in the shekel-range. This can depend on several factors. The fact that ingots are, on average, thicker than any other object in the sample could imply that they are more difficult to break into pieces with a predetermined mass. It could also mean that ingots and ingot fragments were not used as currency. Ingots are usually made of pure copper (e.g. Pernicka et al., 2016), and thus might have been used exclusively as raw material. It should be noted, however, that we could not compare the ingots to the balance weights in the mina-range. It is possible that, since they are on average significantly more massive than other object categories, their mass was measured in fractions of the mina rather than in multiples of the shekel. Finally, one has to consider that the main use of heavy balance weights is to weigh out bulks of goods, rather than single objects. The fact that individual ingot fragments do not comply with weight systems does not rule out the likely possibility that they were traded in weighed bulks. One way to test this could be to analyse the total weight of hoards, and verify whether they conform to multiples of the European mina. Unfortunately, it is not easy to determine whether a hoard is in pristine condition, or whether it underwent modifications before or after its recovery.

4.2. The value of money

Our results show that the value of bronze was quantified according to a shared frame of reference. What we define as ‘bronze,’ however, is an umbrella term for many different copper alloys, mostly containing variable proportions of tin and lead. It is theoretically possible that different alloys had different values, which could somehow hamper the circulation of metals as money. For BA Europe the puzzle is difficult to solve, as we have no direct evidence of value equivalences between different types of bronze and between bronze and other commodities.

An argument in favour of the hypothesis of different values could be that, since bronze fragments are made of different alloys, their indiscriminate use as currency would hamper or entirely prevent their reuse as raw material. The argument, however, fails to correctly account for the evidence. Substantial quantities of fragments exist in the archaeological record, and fragments were undoubtedly exchanged, regardless of whether or not one accepts their monetary use. At the same time, it would seem that metallurgists did not have problems in finding the right alloy for their purposes. This either implies that they were able to determine the alloy of fragments, or that they used fragments in limited quantities and mostly relied on other forms of raw metal, such as ingots. Hence, the argument also fails to acknowledge the relevant role of recycling (e.g. Radivojević et al., 2018). A second argument could be that some metals used in bronze alloys are rarer than others, and thus more expensive. Lead, for example, is alternatively assumed to be expensive (e.g. Johansen, 2016) or cheap (e.g. Needham and Cook, 1988), depending on whether it had to be imported (Scandinavia) or was locally available (British Isles). But if distance played such a determining role, how can one explain, for instance, the rich metallic record of Denmark, which completely lacks copper and tin sources? The problem of value is extremely complex and bears far-reaching implications; it basically implies theorising a continent-wide market economy for BA Europe, and cannot be addressed here. Hence, while these arguments are worthy of consideration, they should not, for now, play a decisive role in the interpretation either way.

An alternative solution could be to assume that the problem of value is correlated not with the diversity of alloys but with different modes of circulation. Models of the circulation of metals perhaps tend to focus too much on metallurgy: if fragments were mainly exchanged as money, their main purpose was to circulate, not to be recycled. Moreover, their circulation was certainly not limited to metallurgists. In theory, a single fragment could circulate for decades without ever ending up in a metallurgist's hands.

Economic theory does not explain why money has value. One way to justify why worthless pieces of paper can have the same monetary function as precious metals (the so-called ‘Hahn's problem’; Hahn, 1965) is to admit that ‘if people believe that money has value, it does’ (Velde, 2021: 201). Once money is acknowledged to be valuable, however, the market dictates how much value money has. It follows that, if money can be a worthless substance, its market value is not necessarily correlated with the substance of which money is made. Since the value of money (and commodities alike) is regulated by the market, that value will be determined by the most frequent use that the market makes of the substance of which money is made. Hence, if metallurgy is the prevalent use, then it is possible that different alloys will have different values. If monetary use is prevalent, then the difference of alloys will play a minor role in the determination of value.

4.3. The origin of money? Towards a theoretical framework for money in BA Europe

Weighed currency was not the first form of money in Europe. It was, however, the first that could potentially be accepted anywhere, provided that weighing technology was available. In other words, weighing technology does not originate money, but simply provides a universally valid frame of reference for the quantification of its value (Rahmstorf, 2016). The idea that the origin of money is correlated with technology or with the complexity of socio-economic systems implies an evolutionary paradigm, which hampers our understanding of the functions of pre-modern economies. There is nothing in economic theory that prevents the emergence of money in any given market, in economies of any complexity, and at any point in history (e.g. Jones, 1976; Velde, 2021). Economic evolutionism has been shown to be based on a substantial misunderstanding of the social dynamics that regulate the modern economy (Bloch and Parry, 1989). On the contrary, it has been contended that the modern western economy is still as much embedded in social institutions as only ‘primitive’ economies were previously believed to be (e.g. Appadurai, 1986), and that the economy in ‘primitive’ societies is substantially less embedded than the evolutionary paradigm would predict (e.g. Granovetter, 1985). The contemporary approach to prehistoric money owes much to a seminal article by G. Dalton (1965), in which the author traces a sharp distinction between ‘primitive’ and ‘modern’ money. Soon after, however, J. Melitz – an economist – demonstrated that Dalton failed to acknowledge that supposedly ‘primitive’ and ‘modern’ monies have, in fact, the same functions and limitations (Melitz, 1970). Since then, the functional equivalence between ‘primitive’ and ‘modern’ money has been generally accepted by most economists, economic historians, and economic anthropologists (e.g. Jones, 1976; Velde, 2021; Zelizer, 2000).

Money is not an evolutionary milestone, but simply a solution to the practical problem of the ‘double coincidence of wants’ (Jevons, 1875: 3): no one can trade with anyone who does not need or want whatever it is that they have to offer in payment. For example, if a pig breeder wants wheat and has only pigs to offer, they cannot obtain wheat if the crop farmer does not need or want pigs. The problem can be solved by agreeing to use a third medium that everyone will eventually come to accept, as is widely documented in many so-called primitive economies (e.g. Einzig, 1966; Pryor, 1977). Money is not inevitable but it is convenient, as it has no requirements other than being customarily accepted by most agents in a given market. Metals represent only a limited part of all the pre-coinage monies documented either historically or ethnographically, which include perishable materials such as barley (e.g. BA Mesopotamia: Steinkeller, 2004), textiles (e.g. Classic Maya: Baron, 2018), bark-cloth (e.g. Early Colonial West Africa: Pallaver, 2015), and dried fish (e.g. Medieval Iceland: Mehler and Gardiner, 2021). Money is merely a convention, whose embodied physical media can have intrinsic value (such as silver coins) or none at all (i.e. banknotes) (Velde, 2021).

Concerning BA Europe, money is often believed to appear in some form with the spread of mass metallurgy. The so-called Ösenringbarren (a type of ingot-like object shaped as an open ring, common in Central Europe in the Early BA, c. 2150–1700 BC) are a common example. They show a noticeable regularity in shape, mass, and composition (Lenerz-de Wilde, 1995), and are regarded by some scholars as evidence of the earliest money in Europe (e.g. Pare, 2013; Kuijpers and Popa, 2021). Since they appear several centuries before the introduction of weighing technology in Central Europe, their approximate regularity can be explained by relatively standardised moulds, and by the intuitive ability of experienced users to determine the approximate mass-equivalence of two objects simply by holding them in both hands (Kuijpers and Popa, 2021). If Ösenringbarren were indeed money – which we have no reason to doubt – their function was no different from that of the later, weight-regulated metal currencies. The difference is in their circulation. Since weighing technology was not available (until proven otherwise), there was no way to assess their value objectively or, for example, to calculate fractions and multiples. One can think of Ösenringbarren as a form of ‘fiduciary money’, i.e. a standard medium of exchange whose value is conventionally agreed upon, and whose intrinsic value is not relevant for the quantification of their exchange value. As was recently proposed, fiduciary currencies can predate the emergence of commodity currencies, contrary to common belief (Bresson, 2021). Whether or not classifying different types of money is the point, we would like to draw attention to the reasons why some monies are accepted in some regions and not in others. The Ösenringbarren relied on their recognisable shape, approximate size, and peculiar chemical composition in order to be accepted, because these characters were well known and understood in the limited area where they were in common use, i.e. between southern Germany and the Czech Republic. The reason why we do not find Ösenringbarren outside this area is that those same characters were not recognised as ‘valid’ elsewhere. The spread of metallurgy undoubtedly expands trade networks, and increases the potential utility of money. The introduction of weighing technology, on the other hand, exponentially expands the user base by providing an objective frame of reference that transcends traditional cultural boundaries. At the same time, weighing technology does not alter the functions of money, and does not imply the erasure of other patterns of monetary and non-monetary exchange. Finally, if weighing technology is not a requisite for money, neither is metallurgy. By the same token, we should not prejudicially rule out a wide range of perishable commodities that are invisible in the archaeological record, but which could have been used as money before, during and after the introduction of metallurgy.

People who are involved in politics tend "to be attracted by its intellectual challenge," and tend "not to be easily shamed, nor to be particularly humble"

Bromme, Laurits, and Tobias Rothmund. 2021. “Mapping Political Trust and Involvement in the Personality Space – A Systematic Review and Analysis.” PsyArXiv. May 2. doi:10.31234/osf.io/hrk8f

Abstract: Individual differences in political trust and involvement in politics have been linked to Big Five personality dispositions. However, inconsistent correlational patterns have been reported. As a systematic review is still missing, the present paper provides an overview of the current state of the empirical literature. A systematic review of 43 publications (N = 215,323 participants) confirmed substantial inconsistency in the correlational patterns and corroborated a suspicion that the frequent use of low-bandwidth personality short scales might be responsible, among other reasons. In a second step, we conducted two empirical studies (N1 = 988 and N2 = 795), estimating latent correlations between the Big Five and political trust and involvement at different hierarchical levels. We found that personality relations were consistent across different subdimensions of trust (e.g., trust in politicians, institutional trust) and involvement (e.g., political interest, political self-efficacy, participation propensity) and are therefore best estimated at aggregated levels (i.e., general political trust and involvement). Meanwhile, correlational patterns differed substantially between Big Five facets, confirming that previous inconsistencies can be partly attributed to a misbalanced representation of facets in Big Five short scales and indicating that associations should be estimated at lower levels of the personality hierarchy.


Rolf Degen summarizing... Is everybody's sin nobody's sin? The prevalence of behaviors has only very small effects on their evaluation as morally good or bad

Do behavioral base rates impact associated moral judgments? Carl P. Jago. Journal of Experimental Social Psychology, Volume 95, July 2021, 104145. https://doi.org/10.1016/j.jesp.2021.104145

Abstract: In a series of studies, we ask whether and to what extent the base rate of a behavior influences associated moral judgment. Previous research aimed at answering different but related questions is suggestive of such an effect. However, these other investigations involve injunctive norms and special reference groups which are inappropriate for an examination of the effects of base rates per se. Across five studies, we find that, when properly isolated, base rates do indeed influence moral judgment, but they do so with only very small effect sizes. In another study, we test the possibility that the very limited influence of base rates on moral judgment could be a result of a general phenomenon such as the fundamental attribution error, which is not specific to moral judgment. The results suggest that moral judgment may be uniquely resilient to the influence of base rates. In a final pair of studies, we test secondary hypotheses that injunctive norms and special reference groups would inflate any influence on moral judgments relative to base rates alone. The results supported those hypotheses.

Keywords: Morality; Moral judgment; Blame; Character; Harm; Norm; Base rate; Normality; Descriptive norm; Injunctive norm; Social influence; Corrupt; Conformity; Influence; Moral norm


Check also: Making moral principles suit yourself. Matthew L. Stanley, Paul Henne, Laura Niemi, Walter Sinnott-Armstrong & Felipe De Brigard. Psychonomic Bulletin & Review, May 4 2021. People’s commitment to moral principles may be maintained when they recall others’ past violations, but their commitment may wane when they recall their own violations

Up to the magical number seven (the maximum number of items in short-term memory): Physical limitations? Evolutionarily optimal number?

Up to the magical number seven: An evolutionary perspective on the capacity of short term memory. Majid Manoochehri. Heliyon, Volume 7, Issue 5, May 2021, e06955. https://doi.org/10.1016/j.heliyon.2021.e06955

Abstract: Working memory and its components are among the most determinant factors in human cognition. However, in spite of their critical importance, many aspects of their evolution remain underinvestigated. The present study is devoted to reviewing the literature of memory studies from an evolutionary, comparative perspective, focusing particularly on short term memory capacity. The findings suggest the limited capacity to be the common attribute of different species of birds and mammals. Moreover, the results imply an increasing trend of capacity from our non-human ancestors to modern humans. The present evidence shows that non-human mammals and birds, regardless of their limitations, are capable of performing memory strategies, although there seem to be some differences between their ability and that of humans in terms of flexibility and efficiency. These findings have several implications relevant to the psychology of memory and cognition, and are likely to explain differences between higher cognitive abilities of humans and non-humans. The adaptive benefits of the limited capacity and the reasons for the growing trend found in the present study are broadly discussed.

Keywords: Working memory; Short term memory; Evolution of memory; Evolution of cognitive system

7. Hypotheses concerning the capacity

Why does memory span have a limited capacity, and why is there an increasing trend of capacity towards humans? I will first discuss the potential reasons for the limited capacity. In order to provide a more explicit discussion, the relevant studies are divided into two groups: those that based their discussion on a capacity of about seven items, or a temporary, passive storage (i.e., STM), and those that based their discussion on a capacity of about three to four items, or the focus of attention (i.e., WM).

7.1. Hypotheses of the limited capacity

7.1.1. STM hypotheses of the limited capacity

To begin with, some previous studies have suggested that “short-term memory limitations do not have a rational explanation” (Anderson, 1990, pp. 91/92), or that larger capacities are biologically expensive or impossible. For instance, it has been postulated that a greater STM size may have required additional tissue, which increases body mass and energetic expenditure, and is therefore impossible given the biological characteristics of humans (e.g., Dukas, 1999). Other researchers rejected both of these assumptions (Todd et al., 2005). Moreover, the second assumption (i.e., that larger capacities are biologically expensive/impossible) does not seem reasonable considering the diversity of extraordinary physiological and behavioral characteristics of different animal species. Also, if any of these suggestions were correct, we should perhaps be able to find various capacities of STM in different animals, which the present study does not indicate.

One of the studies concerning the capacity of STM has been conducted by MacGregor (1987). Using a mathematical model, he highlighted the importance of efficient retrieval for STM. According to him, the limited capacity of STM could be the consequence of an efficiency of design. He argued that chunking facilitates retrieval when there are seven or five items in an unorganized memory. In a memory system evolved for efficiency, there is an upper effective limit to STM and a capacity beyond this limit would not be required.

In another study, Saaty and Ozdemir (2003) argued that in making preference judgments on pairs of elements in a group, the number of elements in the group should be no more than seven. The mind is sufficiently sensitive to improve large inconsistencies but not small ones and the most inconsistent judgment is easily determined. When the number of elements is seven or less, the inconsistency measurement is relatively large with respect to the number of elements involved. As the number of elements being compared is increased, the measure of inconsistency decreases slowly. Therefore, in order to serve both consistency and redundancy, it is best to keep the number of elements seven or less. When the number of elements increases past seven, the resulting increase in inconsistency is too small for the mind to single out the element that causes the greatest inconsistency to scrutinize and correct its relation to the other elements.
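For readers unfamiliar with how such inconsistency is quantified, the sketch below computes Saaty's consistency index, CI = (λmax − n)/(n − 1), for a pairwise-comparison matrix in which a single judgment has been distorted; the weight vectors and the distortion factor are assumptions made for this illustration, not Saaty and Ozdemir's own computations.

```python
# Illustrative sketch of the consistency argument: Saaty's consistency index
# CI = (lambda_max - n) / (n - 1) of a reciprocal pairwise-comparison matrix,
# with one deliberately distorted judgment. Weights and the distortion factor
# are assumptions for this example only.
import numpy as np

rng = np.random.default_rng(1)

def consistency_index(A):
    """CI of a positive reciprocal matrix A; 0 means perfectly consistent."""
    n = A.shape[0]
    lam_max = np.linalg.eigvals(A).real.max()
    return (lam_max - n) / (n - 1)

def perturbed_ci(n, factor=2.0):
    """Build a perfectly consistent n x n matrix, then distort one judgment."""
    w = rng.uniform(1, 10, size=n)
    A = np.outer(w, 1.0 / w)        # A[i, j] = w_i / w_j is fully consistent
    A[0, 1] *= factor               # one erroneous comparison
    A[1, 0] = 1.0 / A[0, 1]         # keep the matrix reciprocal
    return consistency_index(A)

# The same single error moves the index less and less as the group grows,
# which is the sense in which it becomes harder to single out past ~7 elements.
for n in range(3, 12):
    print(n, round(perturbed_ci(n), 4))
```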

In a series of studies, Kareev has proposed that capacity limitation maximizes the chances for the early detection of strong and useful relations (Kareev, 1995, 2000; Kareev et al., 1997; for a controversial discussion of this hypothesis see Anderson et al., 2005; Juslin and Olsson, 2005; Kareev, 2005). From his standpoint, a STM capacity of size seven, which characterizes human adults, is of particular value in detecting imperfect correlations between features in the environment. The limited capacity may serve as an amplifier, strengthening signals which may otherwise be too weak to be noticed. He argued that, because correlations underlie all learning, their early detection is of great importance for the functioning and well-being of organisms. Therefore, the cognitive system might have evolved so as to increase the chances for early detection of strong correlations. In addition to the theoretical contribution, Kareev and colleagues in an experimental study found that people with smaller STMs are more likely to perceive a correlation than people with larger STMs (Kareev et al., 1997).

Some of the suggestions for the reason behind the limited capacity can be found in studies of decision-making cognition. Here, it has been shown that people tend to rely on relatively small samples from payoff distributions (Hertwig and Pleskac, 2010). The size of these samples is often considered related to the capacity of STM (Hahn, 2014; Hertwig et al., 2004; Hertwig and Pleskac, 2010). In this context, a capacity-limited STM has been proposed as a possible cause (Hahn, 2014; Hertwig et al., 2004; Hertwig and Pleskac, 2010; Todd et al., 2005) or a requirement (Plonsky et al., 2015) for relying on small samples. More relevant to the present discussion, Todd et al. (2005) suggested that the benefits of using small samples, or the costs of using too much information, resulted in selective pressures that have produced particular patterns of forgetting in LTM and limits of capacity in STM (see also Hahn, 2014). So, what are these costs and benefits? Limited information use can lead simple heuristics to make more robust generalizations in new environments (Todd et al., 2005). Small samples amplify the difference between the expected earnings associated with the payoff distributions, thus making the options more distinct and choice easier (Hertwig and Pleskac, 2010). Relying on small samples has also been suggested to save time and energy (Plonsky et al., 2015; Todd et al., 2005). Even if we assume that there is no cost (in energy or time) for gathering information, by considering too much information we are likely to add noise to our decision process, and consequently make worse decisions (Martignon and Hoffrage, 2002; Todd et al., 2005). Among these, the factor that is perhaps associated with the strongest selective forces is saving time. There are many occasions on which timely decisions play a vital role in the lives of animals, but perhaps the most important are hunting situations. Encounters between prey and predators were an integral part of the daily life of our ancestors through deep evolutionary time. It is also clear that the penalties for any kind of inefficiency in such encounters are immediate and fatal, which thus results in intense selection for particular cognitive abilities and predation avoidance mechanisms (see Mathis and Unger, 2012; Rosier and Langkilde, 2011; Whitford et al., 2019). For instance, any prey that is attacked by several predators and cannot quickly decide which one to avoid first, or which way and which method to choose for escaping or perhaps defending itself, will be eliminated at once. A similar argument can be developed for predators (see Lemasson et al., 2009).
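A toy simulation of the amplification point follows (the payoff distributions and sample sizes are assumptions chosen for illustration, not taken from the cited studies): with few draws per option, the absolute gap between observed sample means tends to be much larger than the true gap between the expected values, making the options look more distinct.

```python
# Toy simulation: small samples amplify the apparent difference between two
# payoff distributions. Distributions and sample sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
true_gap = abs(3.2 - 3.0)                       # |E[A] - E[B]|

for n in (2, 4, 8, 16, 64, 256):                # draws experienced per option
    a = rng.normal(3.2, 2.0, size=(20_000, n)).mean(axis=1)
    b = rng.normal(3.0, 2.0, size=(20_000, n)).mean(axis=1)
    observed_gap = np.abs(a - b).mean()         # average experienced difference
    print(f"n={n:>3}: mean |observed gap| = {observed_gap:.2f} (true gap = {true_gap:.1f})")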

Another line of studies has stressed the importance of the limited capacity for foraging activities (e.g., Bélisle and Cresswell, 1997; Real, 1991, 1992; Thuijsman et al., 1995). According to it, the limited capacity may result in an overall optimization of food search behaviors. Similarly, Murray et al. (2017) have contended that the memory systems of anthropoids evolved primarily to reduce foraging errors. Foraging activities, however, do not appear to be the underlying reason for the capacity-limited STM. This is because, if foraging were the fundamental reason, then there would be remarkable sex differences in memory span, similar to those observed, for instance, in spatial abilities (Ecuyer-Dab and Robert, 2007; Voyer, Postma, Brake and Imperato-McGinley, 2007). According to the division of labor in ancestral hunter-gatherer societies, men were predominantly hunters and women were gatherers (Ecuyer-Dab and Robert, 2007; Marlowe, 2007), and it is likely that each of these activities demands a different memory span: a hunter has to focus on prey and ignore distracting information, while a successful gatherer can, or should, simultaneously consider many stationary targets (e.g., seeds, fruits, etc.). Contrary to this, many studies of sex differences in memory span show no significant difference (Grégoire and Van der Linden, 1997; Monaco et al., 2013; Orsini et al., 1986; Peña-Casanova et al., 2009). Foraging activities, if they were the underlying reason, could also result in remarkable differences among different species. The present study, however, does not indicate such differences. Therefore, although the limited capacity may have provided benefits for foraging activities, it seems reasonable to propose that foraging, after all, is not the main and direct reason for the limited memory capacity.

Among the hypotheses reviewed here, Kareev's suggestion (i.e., early detection of useful relations) is one of those that have received relatively more attention. His assumption also seems reasonable in a comparative context and appears consistent with the findings of the present review. Of more importance, however, is the fact that a memory system with the ability of early detection of useful relations is likely to yield higher performance in associative learning as well as time savings in decision making. In the case of learning, Kareev himself noted that: “Because correlations underlie all learning, their early detection and, subsequently, accurate assessment are of great importance for the functioning and well-being of organisms” (Kareev, 2000, p. 398). Learning, certainly, is one of the first and main challenges of any cognitive system. Besides, there are broad similarities in basic forms of learning in different species (Dugatkin, 2013). It is also certain that through deep evolutionary time there has been intense selection for individuals with higher performance in learning. In this regard, Dugatkin (2013) stated that: “The ability to learn should be under strong selection pressure, such that individuals that learn appropriate cues that are useful in their particular environment should be strongly favored by natural selection” (p. 141). In summary, these considerations motivate the idea that associative learning and saving time in decision making are most likely the underlying reasons for the emergence and maintenance of the limited capacity.

7.1.2. WM hypotheses of the limited capacity

There are, on the other hand, some other studies of the limited capacity that based their analyses on a capacity of about three to four chunks, or the focus of attention (i.e., WM). Some of them will be briefly reviewed here. Sweller (2003), for instance, proposed that no more than two or three elements can be handled in WM, because any more elements would result in more potential combinations than could realistically be tested. According to him, as the number of elements in WM increases, the number of permutations rapidly becomes very large (e.g., 5! = 120). With random choice, the greater the number of alternatives from which to choose while problem solving, the lower the likelihood that an appropriate choice will be made.
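A quick worked example of that combinatorial growth (a toy illustration only, not code from the reviewed studies):

```python
# The number of possible orderings of the elements held in WM grows factorially,
# so a random or exhaustive search over them quickly becomes unmanageable.
from math import factorial

for n in range(2, 8):
    print(f"{n} elements -> {factorial(n):>4} possible orderings")
# 2 -> 2, 3 -> 6, 4 -> 24, 5 -> 120, 6 -> 720, 7 -> 5040
```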

Many other possibilities have been discussed by Cowan (Cowan, 2001, 2005, 2010). For instance, based on the notion that it is biologically impossible for the brain to have a larger capacity, he declared that the representation of a larger number of items could fail because together they take too long to be activated in turn (Cowan, 2010). Another discussion by Cowan is that the WM capacity limit is the necessary price of avoiding too much interference (Cowan, 2005). According to him, activation of the memory system would go out of control if WM capacity was not limited to about four items at once. A relatively small central WM may allow all concurrently active concepts to become associated with one another without causing confusion or distraction (Cowan, 2010). Oberauer and Kliegl (2006) similarly stated that:

The capacity of working memory is limited by mutual interference between the items held available simultaneously. Interference arises from interactions between features of item representations, which lead to partially degraded memory traces. The degradation of representations in turn leads to slower processing and to retrieval errors. In addition, other items in working memory compete with the target item for recall, and that competition becomes larger as more items are held in working memory and as they are more similar to each other. (p. 624).

7.2. The increasing trend of capacity

Archaeological evidence of an enhancement in the WM system has been presented by Coolidge and Wynn (2005) (see also Coolidge et al., 2013; Wynn and Coolidge, 2006). The core idea of their hypothesis is a genetic mutation that affected neural networks approximately 60,000 to 130,000 years ago and increased the capacity of general WM and phonological storage. In the case of phonological storage, which is of more interest in the present review, they stipulated that: “A relatively simple mutation that increased the length of phonological storage would ultimately affect general working-memory capacity and language” (Coolidge and Wynn, 2005, p. 14). They proposed that the enhancement of WM capacities was the final piece in the evolution of human executive reasoning ability, language, and culture. From their point of view, the larger capacity is a necessary precondition for symbolic thought, to the growth of which selective pressures contributed. They noted that an increase in the WM capacities of pre-modern H. sapiens would have allowed greater articulatory rehearsal, consequently allowing for automatic long-term storage, and the beginnings of introspection, self-reflection, and consciousness. In line with Coolidge and colleagues’ hypothesis, Aboitiz et al. (2010) proposed that during the course of human evolution, a development in the phonological loop occurred. They maintained that this development produced a significant increase in STM capacity and subsequently resulted in the evolution of language.

Many researchers, at least in the field of archeology, tend to agree with the idea of enhanced WM (e.g., Aboitiz et al., 2010; Haidle, 2010; Lombard and Wadley, 2016; Nowell, 2010; Putt, 2016; for a review of criticisms, see Welshon, 2010), though there seems to be disagreement on its timing. Almost all, however, suggest a time in the Pleistocene around or after the appearance of the genus Homo (Aboitiz et al., 2010; Coolidge and Wynn, 2005; Haidle, 2010; Putt, 2016). Some also suggest a gradual development (Haidle, 2010).

Once we accept the idea of enhanced WM, important questions arise as to the cause and the process of this phenomenon. The enhancement of WM has been argued to be a prerequisite for the evolution of some complex cognitive abilities of humans, such as language (Aboitiz et al., 2010; Coolidge and Wynn, 2005) and tool use (Haidle, 2010; Lombard and Wadley, 2016). For instance, Aboitiz et al. (2010) pointed out the existence of selective benefits for individuals with larger phonological capacities, who, in their view, were linguistically more apt. From their standpoint:

The development of the phonological loop produced a significant increase in short-term memory capacity for voluntary vocalizations, which facilitated learning of complex utterances that allowed the establishment of stronger social bonds and facilitated the communication of increasingly complex messages, eventually entailing external meaning and generating a syntactically ordered language. (p. 55).

In the case of tool use, Haidle (2010) argued that a basic trait of all object behaviors is the increased distance between problems and solutions. Given this, more complex object behaviors involve longer distances. According to her, during the process of tool use the immediate desire (e.g., getting the kernel of the nut) must be set aside and replaced by one or several intermediate objectives, such as finding or producing an appropriate tool. Thus, thinking must depart from the immediate problem and shift to abstract conceptualizations of potential solutions, which results in sequences of physical actions with objects appropriate to achieve a solution in the near future (see also Lombard and Wadley, 2016). Given her discussion, it is clear how individuals or populations with an enhanced WM system, which provides the possibility of maintaining and manipulating more information, could take advantage of their superiority to outperform others in tool use and, consequently, to win competitions.

Arguably, if we assume the enhancement of WM to be a gradual process, which started long before our common ancestor with chimpanzees (as found by the present study), neither tool use nor language can be considered the primary reason for it. But complex problem solving, because of its commonness, can be nominated as the primary cause, which was then supported by tool use and language (see also Putt, 2016). This assumption is supported by evidence indicating the critical role of an elaborate WM system in problem-solving tasks (Logie et al., 1994; Zheng et al., 2011).

After all, there are many obscure aspects regarding the evolution of WM. Needless to say, deep disagreements in the related fields and issues, such as the process and timeline of language evolution (Progovac, 2019), make the puzzle more difficult to solve. It goes beyond the limits of the present article to pursue this further, but perhaps a possible way to settle this problem is to look for the advantages and disadvantages of high and low STM/WM capacities (Engle, 2010). Such findings from experimental psychology, in conjunction with archaeological and comparative evidence, can shed light on the evolution of the WM system.

8. Conclusion

The first and obvious implication of the present findings is that the limited capacity is the common attribute of different species of birds and mammals. The present results also indicate an increasing trend of capacity from our non-human ancestors to modern humans. Among the potential explanations of the limited capacity, associative learning and saving time in decision making, particularly because of the strong selective forces associated with them and their vital importance for different species, seem the most likely. On the other hand, the enhancement of the WM system appears to be a prerequisite for the evolution of some higher cognitive abilities of modern humans, such as language, tool use, and complex problem solving.

A question yet to be answered is whether the current size of STM/WM in humans is the end of the line. The current size has been considered by some to be the end point (e.g., MacGregor, 1987). In contrast, Cowan argued that it is possible to imagine larger capacities that would have been preferable or feasible, yet still did not come about; if so, our current capacity simply reflects our place in the middle of an ongoing evolutionary process, not an end point. In that case, one might expect the present capacity to expand in the future, assuming that it offers a sufficient survival advantage (Cowan, 2005). However, the current review suggests that, considering the resistance of memory span scores to the Flynn effect, it is difficult to expect substantial changes over short periods of time.

All in all, many of us now live in artificial and unnatural environments rather than in the wild. Are these unnatural environments, along with their overwhelming and escalating complex problem-solving tasks, imperceptibly pushing us towards a WM system with an even larger capacity and, if so, at what price? Here, the most important point to stress is that evolution does not drive towards perfection. However, thanks to our elaborate information-processing system and consciousness, we now have the ability to purposefully plan for the future of our own evolution.

The evidence reviewed in this paper shows that many species of birds and mammals are capable of performing memory strategies, although there seem to be some differences between humans and non-humans in terms of flexibility and efficiency. An enhancement in the capacity of the WM system might be the reason, or part of the reason, for the emergence of superior memory strategies in humans.

Striking similarities in the primacy and recency effects, in conjunction with other evidence, such as similarities in the size of STM and in performing memory strategies, suggest a similar memory structure in different species of birds and mammals. This is in accordance with Wright's inference that there is a qualitative similarity in memory processing across mammals and birds. The present findings have several implications relevant to the psychology of memory and cognition. For instance, the differences found in the ability to perform memory strategies and in the size of STM may provide an explanation for some of the differences between the cognitive abilities of humans and non-humans.

People’s commitment to moral principles may be maintained when they recall others’ past violations, but their commitment may wane when they recall their own violations

Making moral principles suit yourself. Matthew L. Stanley, Paul Henne, Laura Niemi, Walter Sinnott-Armstrong & Felipe De Brigard. Psychonomic Bulletin & Review, May 4 2021. https://link.springer.com/article/10.3758/s13423-021-01935-8

Abstract: Normative ethical theories and religious traditions offer general moral principles for people to follow. These moral principles are typically meant to be fixed and rigid, offering reliable guides for moral judgment and decision-making. In two preregistered studies, we found consistent evidence that agreement with general moral principles shifted depending upon events recently accessed in memory. After recalling their own personal violations of moral principles, participants agreed less strongly with those very principles—relative to participants who recalled events in which other people violated the principles. This shift in agreement was explained, in part, by people’s willingness to excuse their own moral transgressions, but not the transgressions of others. These results have important implications for understanding the roles of memory and personal identity in moral judgment. People’s commitment to moral principles may be maintained when they recall others’ past violations, but their commitment may wane when they recall their own violations.


Negative attitudes toward cosmetic surgery prevail, partly because it seems to give people an unfair advantage, similar to performance enhancing drugs in sports

The cosmetic surgery paradox: Toward a contemporary understanding of cosmetic surgery popularisation and attitudes. Sarah Bonell, Fiona Kate Barlow, Scott Griffiths. Body Image, Volume 38, September 2021, Pages 230-240. https://doi.org/10.1016/j.bodyim.2021.04.010

Highlights

• Negative attitudes toward cosmetic surgery prevail despite its surging popularity.

• Idealistic female beauty standards encourage the popularisation of cosmetic surgery.

• These same beauty standards inform negative attitudes toward cosmetic surgery.

• These co-occurring phenomena are described as the cosmetic surgery paradox.

Abstract: Modern women feel compelled to meet near-impossible standards of beauty. For many, this pursuit ultimately culminates in cosmetic surgery – a radical form of beautification that is rapidly becoming popular worldwide. Paradoxically, while prevalent, artificial beauty remains widely unaccepted in contemporary society. This narrative review synthesizes feminist dialogue, recent research, and real-world case studies to argue that female beauty standards account for both the growing popularity of cosmetic surgery and its lack of mainstream acceptance. First, we implicate unrealistic beauty standards and the medicalization of appearance in popularizing cosmetic surgery. Second, we analyze how negative attitudes toward cosmetic surgery are also motivated by unrealistic beauty standards. Finally, we generate a synthesized model of the processes outlined in this review and provide testable predictions for future studies based on this model. Our review is the first to integrate theoretical and empirical evidence into a cohesive narrative that explains the cosmetic surgery paradox; that is, how cosmetic surgery remains secretive, stigmatized, and moralized despite its surging popularity.

Keywords: Cosmetic surgery; Plastic surgery; Physical appearance; Feminism; Body image; Feminine beauty


Changes in age-at-marriage laws rarely achieve the desired outcome; for this change to be effective, better laws must be accompanied by better enforcement & monitoring to delay marriage & protect the rights of women & girls

Trends in child marriage and new evidence on the selective impact of changes in age-at-marriage laws on early marriage. Ewa Batyra, Luca Maria Pesando. SSM - Population Health, May 4 2021, 100811. https://doi.org/10.1016/j.ssmph.2021.100811

Highlights

• Changes in age-at-marriage laws have only a limited impact on delaying marriage age.

• In Benin, Mauritania, Kazakhstan and Bhutan, laws did not curb early marriage.

• In Tajikistan and Nepal, results depend on model specification.

• Better enforcement must accompany the implementation of the age-at-marriage laws.

Abstract: This study adopts a cohort perspective to explore trends in child marriage – defined as the proportion of girls who entered first union before the age of 18 – and the effectiveness of policy changes aimed at curbing child marriage by increasing the minimum legal age of marriage. We adopt a cross-national perspective comparing six low- and middle-income countries (LMICs) that introduced changes in the minimum age at marriage over the past two decades. These countries belong to three broad regions: Sub-Saharan Africa (Benin, Mauritania), Central Asia (Tajikistan, Kazakhstan), and South Asia (Nepal, Bhutan). We combine individual-level data from Demographic and Health Surveys and Multiple Indicator Cluster Surveys with longitudinal information on policy changes from the PROSPERED (Policy-Relevant Observational Studies for Population Health Equity and Responsible Development) project. We adopt data visualization techniques and a regression discontinuity design to obtain causal estimates of the effect of changes in age-at-marriage laws on early marriage. Our results suggest that changes in minimum-age-at-marriage laws were not effective in curbing early marriage in Benin, Mauritania, Kazakhstan, and Bhutan, where child marriage did not show evidence of decline across cohorts. Significant reductions in early marriage following law implementations were observed in Tajikistan and Nepal, yet their effectiveness depended on the specification and threshold adopted, thus making them hardly effective as policies to shape girls' later life trajectories. Our findings align with existing evidence from other countries suggesting that changes in age-at-marriage laws rarely achieve the desired outcome. In order for changes in laws to be effective, better laws must be accompanied by better enforcement and monitoring to delay marriage and protect the rights of women and girls. Alternative policies need to be devised to ensure that girls’ later-life outcomes, including their participation in higher education and society, are ensured, encouraged, and protected.

Keywords: Early marriage; Laws; Policy changes; Age at marriage; Women's status; LMICs

Conclusions and discussion

This study has adopted a cohort perspective to explore the extent to which changes in age-at-marriage laws were effective in curbing early marriage across six LMICs which introduced a policy change regarding the minimum age at marriage. We tackled our research question using survey data from countries located in different regions of the world combined with novel longitudinal information on policy changes. We adopted simple causal inference techniques to obtain estimates of the causal effect of changes in age-at-marriage laws on early marriage, in line with the existing literature on the topic relying primarily on difference-in-differences strategies or regression discontinuity designs (Bellés-Obrero & Lombardi, 2020; Collin & Talbot, 2018; Dahl, 2010; McGavock, 2021). In so doing, we reached two different sets of findings. The first set of results suggests that only in Tajikistan and Nepal have there been noticeable declines in child marriage across cohorts. Despite these declines observed in more recent cohorts, the majority of the countries under investigation – particularly Nepal, Bhutan, Mauritania, and Benin – are far from eradicating early marriage, as even among the youngest cohorts around one third of women marry before the age of 18.
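As a rough illustration of the cohort-based regression discontinuity logic described above (the variable names, cutoff cohort, and bandwidth below are assumptions made for the sketch, not the authors' actual specification or data):

```python
# Sketch of a cohort-based regression discontinuity: compare the probability of
# marrying before 18 for birth cohorts just below vs. just above the first cohort
# exposed to a new minimum-age-at-marriage law. All names and values here are
# illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

def rdd_estimate(df, cutoff_year, bandwidth=10):
    """Local linear RDD: married_before_18 ~ exposed + running + interaction."""
    d = df.copy()
    d["running"] = d["birth_year"] - cutoff_year       # centred running variable
    d["exposed"] = (d["running"] >= 0).astype(int)     # cohorts covered by the law
    d = d[d["running"].abs() <= bandwidth]             # restrict to a window
    model = smf.ols("married_before_18 ~ exposed + running + exposed:running",
                    data=d).fit(cov_type="HC1")
    return model.params["exposed"], model.bse["exposed"]

# Hypothetical usage with one row per woman (birth_year, married_before_18):
# df = pd.read_csv("dhs_extract.csv")
# effect, se = rdd_estimate(df, cutoff_year=1984)
# print(f"jump at the cutoff: {effect:.3f} (SE {se:.3f})")
```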

Echoing much of the literature (Collin & Talbot, 2018; Kalamar et al., 2016; Koski et al., 2017), the second set of findings suggests that even in these previously unexplored countries, changes in minimum-age-at-marriage laws were not effective in curbing early marriage in Benin, Mauritania, Kazakhstan, and Bhutan, where child marriage showed little evidence of decline across cohorts to start with. Significant reductions in early marriage following law implementations were instead observed in Tajikistan and Nepal – showing, respectively, a decreasing trend over time after the 1975–79 and 1980–84 cohorts – yet their effectiveness depends on the specification and threshold (window) adopted, thus making them hardly effective as policies to shape girls’ later-life trajectories. As the literature tends to focus on Sub-Saharan Africa, India, Bangladesh, and Mexico, we believe our results for Nepal and Tajikistan are informative and stress an additional layer of novelty even just from a purely “geographical” standpoint.

Our findings relating to the mixed and context-specific effectiveness of policy changes aimed at curbing early marriage align with claims made by Arthur et al. (2018) and Collin and Talbot (2018) that, despite the increasing prevalence of legal provisions aimed at increasing the legal age at marriage, the level of enforcement varies widely, and legal exceptions based on parental consent and customary or religious laws remain in place – alongside high rates of illegal or informal marriages (Bellés-Obrero & Lombardi, 2020; Collin & Talbot, 2018) – thus preventing the full effectiveness of the legal provisions. Unfortunately, we did not have data on exact implementation procedures, monitoring, or enforcement, which could also cast light on why reductions in child marriage in Nepal and Tajikistan were more pronounced than in other countries. Nonetheless, several sources suggest that these are serious issues that underlie the very limited effectiveness of changes in legal provisions in the countries covered by our analyses overall. In Bhutan, although child marriage is punishable by fine, enforcement is considered to be weak and is exacerbated by the lack of a reliable marriage registration system (ICRW, 2012). In Nepal, child marriage can carry a prison sentence and/or a fine, but punishment is weakened by the wide discretionary sentencing powers given to courts (CRR, 2016). Overall, poor implementation and limited awareness of child marriage legislation are considered to be among the reasons for the high prevalence of child marriage in South Asia, including Bhutan and Nepal (Khanna et al., 2013). In Tajikistan, child marriages carry a prison sentence of up to six months but, in practice, most cases are only punished by a fine. Moreover, religious ceremonies are sometimes performed without registration of marriages to civil authorities, thus bypassing legislation (UNFPA, 2014). Similar implementation issues to those in Tajikistan have been identified in Kazakhstan (UNFPA, 2012). We found little information about the working of legislation in the Sub-Saharan African countries covered by our study, most notably Benin. Nonetheless, available sources suggest that, in Mauritania, the lack of clarity of the Personal Status Code that sets the minimum age at marriage renders it ineffective in having any meaningful impact on marriage practices. Albeit scarce, the available evidence consistently points to weak enforcement of the new legislation across the countries included in our study.

Our results combined suggest that there is a long way to go before child marriage is eradicated, and changes in legal provisions are playing only a minimal role, if any. This pushes scholars and policymakers to think about alternative policies that might be more effective in curbing early marriage or delaying age at first union. For instance, Kalamar et al. (2016) found that providing incentives for girls’ education was one of the few interventions that have been shown to effectively prevent child marriage. Many of the studies included in that review were conducted in Sub-Saharan Africa. Randomized evaluations in Kenya, Malawi, and Zimbabwe also found that reducing the cost of schooling by providing school fees, uniforms, or cash transfers conditional on attendance reduced the incidence of child marriage (Baird et al., 2010, 2012; Hallfors et al., 2015). A pilot program in Ethiopia that provided girls with mentorship and economic incentives to remain in school – and facilitated community discussion about the harms associated with child marriage – reduced the proportion of girls between ages 10 and 14 who were married by approximately 8 percentage points after a two-year follow-up period, although the program had no measurable effect on the marriage of girls between ages 15 and 19 (Erulkar & Muthengi, 2009).

Preventing early, coerced, and forced marriage has been on the global agenda for several decades, first in 2000 with the Millennium Development Goals (MDGs) highlighting the reduction of child marriage as a global priority, and then in 2015 as part of the global agenda with the establishment of the SDGs. We have here argued that SDG Goal 5 – focusing on gender equality to empower all women and girls – is linked with progress on the elimination of early marriage, yet it is also inextricably linked with SDG Goal 4, related to better access and more gender-equal participation in education, Goal 3 (good health and wellbeing), Goal 8 (decent work and economic growth) and, ultimately, Goal 1 (no poverty). Significant progress is nowhere close, yet a clear implication ensuing from this study is that better enforcement and monitoring of legal provisions concerning the minimum age at marriage – if effective – would have the potential to raise women's status by simultaneously enabling the achievement of multiple goals. We thus posit two clear implications of this research for policy and speculate on a third point. First, the laudable goal of legislation curbing or banning early marriage must be accompanied by capacity-building and resourcing for more legal enforcement. Second, monitoring the efficacy of deterrence, including through exploiting cheap and plentiful micro-level data as we do here, is essential to test and improve the link from laws to ages at marriage, the outcome targeted by policy and the one that matters most for women and girls' later-life outcomes. Third, we speculate on the possibility that national marriage policies might have a more meaningful impact if part of a comprehensive, multi-pronged, and context-sensitive approach targeting poverty and rooted social norms in all their forms – such as some of the complementary interventions (including educational interventions) mentioned above.

What Drives the Dehumanization of Consensual Non-Monogamous Partners? Being perceived as less moral and less committed to their relationship

What Drives the Dehumanization of Consensual Non-Monogamous Partners? David L. Rodrigues, Diniz Lopes & Aleksandra Huic. Archives of Sexual Behavior, May 4 2021. DOI: 10.1007/s10508-020-01895-5

Abstract: We built upon a recent study by Rodrigues, Fasoli, Huic, and Lopes (2018) by investigating potential mechanisms driving the dehumanization of consensual non-monogamous (CNM) partners. Using a between-subjects experimental design, we asked 202 Portuguese individuals (158 women; Mage = 29.17, SD = 9.97) to read the description of two partners in a monogamous, open, or polyamorous relationship, and to make a series of judgments about both partners. Results showed the expected dehumanization effect, such that both groups of CNM partners (open and polyamorous) were attributed more primary (vs. secondary) emotions, whereas the reverse was true for monogamous partners. Moreover, results showed that the dehumanization effect was driven by the perception of CNM partners as less moral and less committed to their relationship. However, these findings were observed only for individuals with unfavorable (vs. favorable) attitudes toward CNM relationships. Overall, this study replicated the original findings and extended our understanding of why people in CNM relationships are stigmatized.


Atheists were perceived as more desirable in short-term mating than long-term mating (preference that did not translate to being preferred in that context over theists); this effect is specific to physically attractive targets

Brown, Mitch. 2021. “Preliminary Evidence for Aversion to Atheists in Long-term Mating Domains.” PsyArXiv. May 2. doi:10.31234/osf.io/gu7xy

Abstract: The centrality of religiosity in selecting long-term mates suggests the espousal of atheism could be undesirable for that context. Given recent findings suggesting the presence of several positive stereotypes about atheists, a largely distrusted group, it could be possible that individuals prefer atheists in mating domains not emphasizing long-term commitment (i.e., short-term mating). I conducted two studies tasking participants with evaluating the long-term and short-term mating desirability of theists and atheists while assessing perceptions of their personalities. Study 1 indicated atheists were perceived as more desirable in short-term mating than long-term mating, though this preference did not translate into atheists being preferred over theists in that context. Study 2 demonstrated this effect is specific to physically attractive targets. I further found atheists were perceived as more prone to infidelity, especially if they were attractive. I frame results from an evolutionary perspective while discussing the pervasiveness of anti-atheist prejudice.